EU’s AI Act: A Citizen-Centric Approach to Building Trust in Artificial Intelligence

The European Union’s recently adopted Artificial Intelligence (AI) Act marks a significant milestone in the global landscape of AI regulation. Unlike many other existing regulatory frameworks, the EU’s AI Act takes a distinctly citizen-centric approach, prioritizing the protection of fundamental rights and liberties while fostering the development of trustworthy and beneficial AI technologies.

Why a Citizen-Centric Approach:

The pervasiveness of AI in modern society raises numerous concerns regarding its potential impact on individuals and communities. Biases in algorithms, opaque decision-making processes, and the misuse of AI for surveillance and discrimination are just a few examples of potential threats. Recognizing these risks, the EU has opted for a regulatory framework that prioritizes human rights and ethical considerations.

This citizen-centric approach aims to achieve several key objectives:

  • Protect fundamental rights: The AI Act prohibits AI practices deemed harmful to society, such as social scoring and emotion recognition in workplaces and educational institutions. It also requires transparency and explainability of AI decisions, empowering individuals to understand how AI systems affect them.
  • Promote fairness and non-discrimination: The Act requires AI systems to be developed and used in a way that avoids discrimination based on sensitive characteristics such as race, gender, and religion. This addresses the risk of bias creeping into algorithms and helps ensure AI benefits all members of society.
  • Empower individuals: The Act grants citizens rights to access information about AI systems and to contest decisions made by them. It also establishes mechanisms for redress where AI causes harm. This empowers individuals to hold developers and deployers of AI accountable.
  • Build trust and confidence: By prioritizing human rights and ethical considerations, the EU aims to build public trust in AI technologies. This fosters a more open and inclusive environment for the development and adoption of AI solutions.

Key Features of the AI Act:

The AI Act adopts a risk-based approach, categorizing AI systems based on the level of risk they pose to individuals and society. High-risk systems, such as those used for credit scoring and facial recognition, are subject to stricter requirements, including:

  • Ex-ante conformity assessment: Developers of high-risk AI systems must undergo a rigorous assessment process before their systems can be placed on the market.
  • Transparency and explainability: Individuals have the right to understand how high-risk AI systems reach their decisions and access relevant information about the data used to train them.
  • Human oversight: High-risk AI systems must be subject to meaningful human oversight to ensure accountability and prevent harm.

Furthermore, the Act explicitly prohibits certain harmful uses of AI, including:

  • Social scoring: The use of AI to assign individuals social ratings or scores based on their social behavior or personal characteristics.
  • Real-time remote biometric identification in public spaces: The use of facial recognition or other biometric technologies to identify individuals in publicly accessible spaces in real time, permitted for law enforcement only under narrow, strictly defined exceptions.
  • Predictive policing systems: The use of AI to predict the risk of a person committing a criminal offence based solely on profiling or the assessment of personality traits.

Implications for the Future of AI:

The EU’s AI Act is likely to have a significant impact on the global development and deployment of AI technologies. By establishing a comprehensive regulatory framework that prioritizes human rights and ethical considerations, the EU is setting a new standard for responsible AI governance. This approach is expected to encourage other countries and regions to develop similar regulatory frameworks, creating a more consistent and coordinated approach to managing the risks and promoting the benefits of AI.

Furthermore, the Act is expected to stimulate the development and adoption of trustworthy AI technologies. By requiring developers to comply with stringent requirements regarding transparency, explainability, and fairness, the Act will encourage AI systems that are more human-centric and less susceptible to bias and misuse, steering the technology toward outcomes that are both trustworthy and beneficial.

Indian AI Laws: A Landscape in Flux:

India, a rapidly developing nation with a burgeoning tech sector, is currently navigating the complex landscape of regulating artificial intelligence (AI). Unlike the European Union, which has implemented the comprehensive AI Act, India’s approach to AI governance is still evolving.

Current Regulatory Landscape:

  • No dedicated AI law: India currently lacks a single, comprehensive law specifically governing AI. Instead, existing regulations from various sectors, such as data privacy and cybersecurity, are applied to AI activities to varying degrees.
  • The Information Technology Act, 2000 (IT Act): This act forms the backbone of India’s cyber law framework and includes provisions related to data privacy, cybercrime, and electronic signatures. While not explicitly addressing AI, some sections can be interpreted to apply to certain AI practices.
  • Digital Personal Data Protection Act, 2023 (DPDP Act): The earlier Personal Data Protection Bill, 2019 was withdrawn in 2022 and replaced by the DPDP Act, enacted in August 2023. The Act establishes a framework for data privacy in India, including purpose limitation, data minimisation, and individual rights over personal data, all of which have significant implications for AI development and deployment.
  • National AI Strategy: Released by NITI Aayog in 2018, this document outlines a vision for India to become a global leader in AI. It emphasizes responsible AI development and calls for the creation of an appropriate regulatory framework. However, the strategy itself is not legally binding.

Challenges and Opportunities:

The evolving AI landscape in India presents both challenges and opportunities.

Challenges:

  • Fragmented regulatory landscape: The lack of a single, dedicated AI law can create uncertainty and inconsistency in the application of regulations.
  • Data privacy concerns: Balancing innovation with data privacy remains a critical challenge, particularly while India’s data protection framework is still being operationalised.
  • Lack of awareness and expertise: Building an ecosystem for responsible AI development requires addressing the knowledge gap and training a skilled workforce.

Opportunities:

  • First-mover advantage: India has a unique opportunity to shape the future of AI by establishing a robust and forward-looking regulatory framework.
  • Focus on ethical AI: India’s emphasis on responsible AI development can attract global investments and partnerships.
  • Leveraging AI for social good: AI can be effectively utilized to address critical challenges in areas like healthcare, education, and agriculture.


India’s AI governance is gradually taking shape, with various stakeholders actively working toward responsible innovation and harnessing AI for societal benefit. While challenges remain, the current initiatives and growing focus on responsible AI development offer promising opportunities. The next few years will be critical in determining how India navigates this evolving landscape and establishes itself as a responsible leader in the field.

Aditya Pratap is a lawyer and founder of Aditya Pratap Law Offices. He practices in the realm of real estate, corporate, and criminal law. Views expressed are personal.

This article has been assisted by Sridhan Tiwari, a third-year law student pursuing a B.A., LL.B. at Lloyd Law College, Greater Noida.