On May 11th, two European Parliament committees, the Internal Market Committee and the Civil Liberties Committee, voted in favor of safeguarding fundamental rights in the Artificial Intelligence (AI) Act. The full Parliament is expected to endorse these decisions during the session on June 15th.
The future we inhabit will be shaped by our approach to Artificial Intelligence (AI). The European regulations governing AI will be the first of their kind in the world. By enacting this pioneering legislation, Europe can lead the way in ensuring that AI is human-centered, trustworthy, and safe.
Artificial intelligence (AI) refers to the field of computer science that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence. It involves developing computer systems and algorithms that can analyze and interpret data, learn from experience, make decisions, and solve problems.
AI encompasses a wide range of techniques and approaches, including machine learning, natural language processing, computer vision, robotics, and expert systems. These technologies enable AI systems to perceive and understand the world, reason and make inferences, and interact with humans or their environment.
While the development and deployment of AI bring numerous benefits and advancements, there are also ethical and societal considerations to address, such as privacy, bias, transparency, and the impact on employment and social dynamics.
To ensure that the development of Artificial Intelligence (AI) in Europe is human-centered and ethical, the European Parliament has called for new rules prohibiting the use of AI systems for biometric surveillance, emotion recognition, and predictive policing (profiling).
Since 2021, the European Union has been discussing measures for governing AI, with the aim of harnessing its full potential and benefits while ensuring safety and protecting fundamental rights. Europe is now close to approving the AI Act and the Coordinated Plan on AI; together, they aim to accelerate investment in AI, prioritizing excellence and trust, and to boost research and industrial capacity.
The European Parliament has also endorsed new rules concerning transparency and risk management for AI systems. These rules will adopt a risk-based approach and impose obligations on providers and users based on the level of risk posed by the AI system. AI systems deemed to have an unacceptable level of risk to people’s safety will be strictly prohibited. This includes systems employing subliminal or manipulative techniques, exploiting people’s vulnerabilities, or being used for social scoring (classifying individuals based on their social behavior, socioeconomic status, or personal characteristics).
More precisely, the European Parliament voted to ban intrusive and discriminatory uses of AI systems, such as:
- Remote biometric identification systems in publicly accessible spaces (with a narrow exception for the prosecution of serious crimes);
- Biometric categorization systems using sensitive characteristics such as gender, race, ethnicity, citizenship status, religion, or political orientation;
- Predictive policing systems based on profiling, location, or past criminal behavior;
- Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions;
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, as this violates human rights and the right to privacy.
Establishing trust among citizens is crucial for the development of AI in Europe, not only to address the significant changes already taking place but also to steer the global political discourse on AI. The final text is expected to strike a balance between protecting fundamental rights, providing legal certainty to businesses, and fostering innovation in Europe.