What is Europe's AI Act and how will it affect the world


Have you heard about the groundbreaking Artificial Intelligence Act (AI Act) passed by the European Union on 13 March 2024? It's a game-changer in the world of AI. The regulation aims to protect EU citizens' safety, rights, and livelihoods by banning AI applications that pose unacceptable risks, such as cognitive behavioral manipulation and certain forms of biometric identification. It is a significant step towards a safer and more secure digital future.

1. The Artificial Intelligence Act

The act was first proposed in 2021 and was passed in March 2024 after three years of negotiation. Its main objective is to categorize uses of artificial intelligence (AI) by risk level. At the first level, AI that creates unacceptable risk is strictly prohibited; at the second, high-risk applications must satisfy legal requirements. Applications that fall into neither the banned nor the high-risk category face fewer regulations.

The main goal of the AI Act is to give AI developers and deployers clear guidelines to follow while minimizing the burden on businesses. The EU aims to make AI safe and trustworthy while protecting people's rights. The act will regulate providers of AI and entities that use AI professionally, ensuring they adhere to the required guidelines and safety measures.

2. Classification of Risks under the Act

Unacceptable risks - AI systems that pose a threat to individuals will be banned as an unacceptable risk. These include systems that manipulate people's cognitive behavior, especially that of vulnerable groups - for example, voice-activated toys that encourage dangerous behavior in children. Social scoring, which classifies people based on their behavior, personal characteristics, or socio-economic status, will also be banned, as will biometric categorization of individuals. Lastly, real-time remote biometric identification systems, such as facial recognition in public spaces, will be considered an unacceptable risk, with only narrow exceptions for law enforcement.

High risks - AI systems that pose a threat to safety or fundamental rights are classified as high-risk and are divided into two categories:
1. AI systems used in products covered by the EU's product safety legislation, including aviation, cars, lifts, medical devices, and toys.
2. AI systems that fall under specific areas such as access to essential private services, public services, and benefits; access to self-employment; legal interpretation and application of the law; education and vocational training; employment; law enforcement; management and operation of critical infrastructure; and migration, asylum, and border control management. These systems must be registered in a database maintained by the EU.

Generative AI, such as ChatGPT, is not considered high-risk but must still adhere to certain transparency requirements and comply with EU copyright law. This includes disclosing that content has been generated by AI, ensuring the model does not produce illegal content, and publishing summaries of the copyrighted data used for training.

3. Global Impact

The General Data Protection Regulation (GDPR), adopted in 2016, had a global impact, prompting changes across platforms worldwide. The new AI Act will likewise raise awareness of the risks associated with AI applications and help drive global change.
That said, the EU's AI Act may not automatically establish global norms: China has already made significant efforts to regulate AI and to set standards of its own. In a world captivated by the promise of AI, nations like the UK and US are prioritizing its potential for growth and innovation. Meanwhile, the EU has already proven its regulatory prowess with GDPR, the gold standard for digital privacy. With the AI Act, it now has the opportunity to cement its position as a global leader in shaping the responsible use of artificial intelligence.

