From Brussels comes the final draft of the new European law on artificial intelligence. It will be definitively approved in the coming weeks. Here’s everything you need to know.
Artificial intelligence (AI) has become an integral part of our daily lives, from virtual assistants on our phones to self-driving cars. With the rapid advancement of this technology, it has become necessary for the European Union (EU) to establish regulations to ensure its responsible and ethical use. After months of discussions and revisions, the European Commission has finally presented the final draft of the new EU law on artificial intelligence.
This new law aims to regulate the development, deployment, and use of AI systems in the EU, with the goal of promoting trust, transparency, and accountability. It will apply to all companies that develop or use AI systems in the EU market, regardless of their size or where they are based. This gives the law extraterritorial reach: a company headquartered outside the EU must still comply if it wants to do business within the Union.
One of the key elements of this law is the creation of a European Artificial Intelligence Board, which will be responsible for monitoring and enforcing the regulations. This board will be composed of representatives from each member state and will work closely with the European Data Protection Board to ensure the protection of individuals’ rights and freedoms. This collaboration is crucial in ensuring that the use of AI in the EU is in line with the principles of the General Data Protection Regulation (GDPR).
The new law also establishes a risk-based approach, categorizing AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. Unacceptable risk refers to AI systems that are considered a threat to the safety, rights, and freedoms of individuals, such as those used for social scoring or biometric identification. High-risk systems, such as those used in healthcare or transportation, will be subject to strict requirements, including human oversight and transparency obligations. Limited-risk systems, such as chatbots or virtual assistants, will have less stringent requirements, while minimal-risk systems will have no specific requirements.
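The four-tier structure described above can be sketched as a simple lookup. This is an illustrative sketch only: the tier names and examples follow this article, not the statute's exact wording, and the "spam filter" entry is a hypothetical example of a minimal-risk system.

```python
# Hypothetical mapping of the article's four risk tiers to the example
# systems it names. Not legal logic; purely an illustration of the taxonomy.
RISK_TIERS = {
    "unacceptable": ["social scoring", "biometric identification"],
    "high": ["healthcare AI", "transportation AI"],
    "limited": ["chatbots", "virtual assistants"],
    "minimal": ["spam filter"],  # hypothetical minimal-risk example
}

def tier_of(system: str) -> str:
    """Return the risk tier an example system falls into."""
    for tier, examples in RISK_TIERS.items():
        if system in examples:
            return tier
    # Systems not listed carry no specific requirements under this sketch.
    return "minimal"

print(tier_of("chatbots"))  # limited
```

The obligations then scale with the tier: the higher the tier, the stricter the requirements, with the top tier banned outright.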
Moreover, the law prohibits certain uses of AI that are considered unacceptable, such as social scoring, subliminal manipulation, and the exploitation of vulnerable groups. It also requires AI systems to be transparent, meaning that users should be informed when they are interacting with an AI system and should be able to understand how it works and its potential impact on their lives.
The new law also emphasizes the importance of human oversight and accountability. It requires high-risk AI systems to undergo conformity assessments before being placed on the market and to have a risk management system in place. It also establishes a system of penalties for non-compliance, with fines of up to 6% of a company’s global turnover or €20 million, whichever is higher.
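The penalty ceiling cited above is the higher of two figures, so it scales with company size. A minimal sketch of that arithmetic, using the percentages and amounts as reported in this article (the figures in the final adopted text may differ):

```python
# Penalty ceiling as described in this article: the higher of
# 6% of global annual turnover or EUR 20 million.
def max_fine(global_turnover_eur: float) -> float:
    """Return the maximum possible fine for non-compliance, in euros."""
    return max(0.06 * global_turnover_eur, 20_000_000)

# For a company with EUR 1 billion turnover, the 6% figure dominates:
print(max_fine(1_000_000_000))  # 60000000.0 (EUR 60 million)
```

For smaller companies whose 6% figure falls below EUR 20 million, the fixed amount acts as the ceiling instead, which is why the "whichever is higher" clause matters.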
The European Commission has also addressed concerns surrounding the use of AI in the workplace. The new law treats AI systems used in hiring, promotion, and termination decisions as high-risk and requires companies to conduct regular impact assessments to ensure that these systems do not perpetuate discrimination or bias.
The new EU law on artificial intelligence is a significant step towards ensuring the responsible and ethical use of AI in Europe. It sets a global standard for the development and deployment of AI systems, promoting trust and transparency among citizens. It also provides a framework for companies to follow, ensuring that they are accountable for the potential risks and impacts of their AI systems.
The final draft of the law will now be sent to the European Parliament and the Council for approval, with the aim of being adopted in the coming weeks. This is an exciting development for the EU and the world, as it shows the commitment of the EU to harness the potential of AI while safeguarding the rights and interests of its citizens.
In conclusion, the new EU law on artificial intelligence is a significant and necessary step. With its comprehensive, risk-based approach and its emphasis on transparency and accountability, it sets a standard for the rest of the world to follow. As we move towards an increasingly AI-driven future, this law aims to ensure that we do so responsibly and ethically, ultimately benefiting society as a whole.