AI Gets Regulated: What the EU AI Act means for AI Safety & HSE Professionals

AI Gets Regulated! The EU’s landmark Artificial Intelligence Act aims to make AI systems safer, fairer, and more transparent. Discover how this legislation will reshape workplace safety, risk management, and employee well-being.

1. AI Gets Regulated: What is the EU Artificial Intelligence Act and Why You Should Care?

Artificial Intelligence (AI) has changed our world dramatically in recent years. You see it when your smart assistant reads you the morning news, or when a streaming service suggests what to watch next. AI is now woven into everyday life. But as AI becomes more capable, people worry more about whether it is safe, fair, and whether it might be misused.

The European Union (EU) has responded with something important: a law called the Artificial Intelligence Act, the first major set of rules to govern this powerful technology. Its goal is to ensure that AI systems used within the EU are trustworthy and respect fundamental human rights.

In this blog post, we’ll look at what the EU AI Act says, what it means in practice, and why it is being debated. The goal is to understand how this law will make AI safer in the future.

2. What is Artificial Intelligence?

Before we jump into the Act itself, let’s understand what Artificial Intelligence (AI) actually is. A common technical definition of Artificial Intelligence is “the capability of a machine to imitate intelligent human behavior”.

In the simplest terms, AI refers to computer systems that can perform tasks that normally require human intelligence. This could include:

  • Understanding and responding to language: Think of chatbots or virtual assistants
  • Analyzing images and videos: Like facial recognition or self-driving vehicles
  • Making predictions and decisions: Such as in credit scoring or medical diagnosis

3. What is the EU AI Act?

The EU AI Act, generally referred to simply as the AI Act, is the first-ever comprehensive legal framework on AI worldwide. It addresses the risks posed by AI and positions Europe to play a leading role in AI governance globally.

The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI.

4. Why Do We Need an AI Act or Rules?

As AI becomes more sophisticated and integrated into our lives, it brings a range of potential benefits but also significant risks.

4.1 Benefits: 

4.1.1 Increased efficiency: 

AI can streamline processes and free people up for more creative tasks

4.1.2 Improved decision-making: 

AI can analyze huge amounts of data to offer insights humans might miss

4.1.3 Personalized experiences: 

AI can tailor products and services to individual needs

4.2 Risks:

4.2.1 Bias and discrimination: 

AI systems can perpetuate societal biases if trained on unfair data

4.2.2 Privacy intrusions: 

AI can be used for surveillance and manipulation

4.2.3 Lack of accountability:

It can be difficult to pinpoint who is responsible when AI systems go wrong

4.2.4 Job displacement: 

AI-powered automation may lead to job losses.

The EU Artificial Intelligence Act aims to strike a balance between encouraging responsible AI development and protecting people from potential negative consequences. The rules set out in the Act will:

  • Address risks specifically created by AI applications;
  • Prohibit AI practices that pose unacceptable risks;
  • Determine a list of high-risk applications;
  • Set clear requirements for AI systems for high-risk applications;
  • Define specific obligations for deployers and providers of high-risk AI applications;
  • Require a conformity assessment before a given AI system is put into service or placed on the market;
  • Put enforcement in place after a given AI system is placed on the market.

5. Decoding the EU Artificial Intelligence Act: Key Features

The AI Act uses a risk-based approach. Not all AI systems are created equal: a chatbot for ordering pizza presents a very different level of risk from AI controlling a self-driving car.

The EU regulatory framework defines four levels of risk for AI systems, often referred to as the “pyramid of risks”. The Act classifies AI systems into these categories:

5.1 Category 1 — Unacceptable Risk AI Systems: 

Article 5 of the Act explicitly prohibits AI systems that pose the most serious threats to human rights and safety. This includes things like:

  • Social scoring systems that rank citizens based on behavior. 
  • AI systems that exploit the vulnerabilities of children or persons with disabilities
  • AI systems that deploy harmful manipulative ‘subliminal techniques’
  • “Real-time” remote biometric identification (like AI-powered facial recognition) in public spaces by law enforcement (with limited exceptions)

5.2 Category 2 — High-Risk AI Systems: 

High-risk AI systems are those used in sectors where the consequences of errors or misuse are particularly severe. The EU AI Act imposes strict standards on AI in areas like:

  • Critical infrastructure: transport, energy, and other systems essential to society’s functioning, where a malfunction could put the life and health of citizens at risk;
  • Education and vocational training: systems that may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
  • Healthcare: safety components of products or AI used in medical devices (e.g. AI applications in robot-assisted surgery);
  • Employment: AI used in hiring, which must not perpetuate biases or lead to unfair discrimination;
  • All types of remote biometric identification systems;
  • Migration, asylum, and border control management (e.g. automated examination of visa applications);
  • Administration of justice and democratic processes (e.g. AI solutions to search for court rulings).

High-risk systems must meet standards around the following (a short, illustrative checklist sketch follows this list):

  • Adequate risk assessment and mitigation systems
  • Quality of the datasets used for training to minimise risks and discriminatory outcomes
  • Transparency and explainability (knowing how the AI reached decisions)
  • Appropriate human oversight measures to minimise risk
  • High level of robustness, cybersecurity and accuracy.
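
To make these obligations a little more concrete, here is a minimal, purely illustrative Python sketch of how a provider or deployer might track them internally. The class and field names (e.g. HighRiskComplianceChecklist) are my own invention for illustration, not anything defined by the Act.

```python
from dataclasses import dataclass


@dataclass
class HighRiskComplianceChecklist:
    """Illustrative internal checklist for a high-risk AI system (not an official template)."""
    system_name: str
    risk_assessment_done: bool = False       # adequate risk assessment and mitigation
    dataset_quality_reviewed: bool = False   # training data checked for quality and bias
    decisions_explainable: bool = False      # transparency and explainability
    human_oversight_defined: bool = False    # appropriate human oversight measures
    robustness_tested: bool = False          # robustness, cybersecurity and accuracy

    def outstanding_items(self) -> list[str]:
        """Return the checklist items that are still open."""
        checks = {
            "risk assessment and mitigation": self.risk_assessment_done,
            "training dataset quality": self.dataset_quality_reviewed,
            "transparency and explainability": self.decisions_explainable,
            "human oversight": self.human_oversight_defined,
            "robustness, cybersecurity and accuracy": self.robustness_tested,
        }
        return [name for name, done in checks.items() if not done]


# Example: a hypothetical CV-screening tool with only the risk assessment completed so far.
checklist = HighRiskComplianceChecklist("CV screening tool", risk_assessment_done=True)
print(checklist.outstanding_items())
```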

5.3 Category 3 — Limited Risk AI Systems

Limited risk refers to the risks associated with lack of transparency in AI usage. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust. 

For example, when users interact with an AI system such as a chatbot, they should be made aware that they are talking to a machine so they can decide whether to continue. Deepfakes also fall within this category: AI-generated or manipulated content must be clearly labelled.
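
As a toy illustration of what this transparency duty could look like in practice, the sketch below simply prepends a disclosure to every chatbot reply. The wording and the helper function reply_with_disclosure are my own illustration, not an interface or text prescribed by the Act.

```python
AI_DISCLOSURE = "Note: you are chatting with an AI assistant, not a human agent."


def reply_with_disclosure(user_message: str, generate_reply) -> str:
    """Wrap a chatbot reply so the user is always told they are talking to an AI."""
    reply = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n\n{reply}"


# Example usage with a stand-in reply function.
print(reply_with_disclosure("Where is my order?", lambda msg: "Your order ships tomorrow."))
```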

5.4 Category 4 — Minimal or No Risk AI Systems

Most AI systems fall under this category, such as AI-powered email filters or video games. The Act imposes no specific obligations on these systems.
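
Putting the four categories together, here is a small, purely illustrative Python sketch that encodes the “pyramid of risks” and maps a few example use cases onto it. The tiers come from the Act; the RiskTier names and the example mapping are my own simplification, not an official classification.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the EU AI Act's 'pyramid of risks'."""
    UNACCEPTABLE = "prohibited outright (Article 5)"
    HIGH = "strict requirements and conformity assessment"
    LIMITED = "transparency obligations (tell users it is AI)"
    MINIMAL = "no specific obligations under the Act"


# Simplified, illustrative mapping of example use cases to tiers.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "AI in robot-assisted surgery": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter in email": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```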

6. How Does the EU AI Act Affect You?

The AI Act aims to provide a safety net, protecting you even if you don’t understand the technical details of how AI works. Here are some ways the Act might impact your life:

6.1 Safer Products and Services: 

The requirements for high-risk systems could mean fewer errors in AI-powered medical diagnoses and fewer accidents caused by self-driving cars.

6.2 Fairer Treatment: 

The Act seeks to reduce biases that could creep into AI used in recruitment, loan assessments, or access to services.

6.3 Increased Transparency: 

You’ll have a clearer understanding of when you’re interacting with AI systems in customer service or online recommendations.

6.4 Right to Recourse: 

If you believe you’ve been negatively affected by an AI-based decision, the Act provides avenues for redress.

7. The Workplace and HSE – How Will Things Change?

The AI Act has direct or indirect implications for health, safety, and environmental (HSE) professionals. Here’s what you need to know:

7.1 New Hazards and Risks: 

The introduction of AI in the workplace can bring new risks that need to be managed:

7.1.1 Mental health:

Chatbots used for employee support might not spot serious distress.

7.1.2 Physical safety:

AI in production processes could lead to accidents if not thoroughly risk assessed.

7.1.3 Cybersecurity:

AI-powered systems can be new targets for cyberattacks.

7.2 Proactive Risk Management:

HSE professionals will need to update risk assessments and control measures to account for AI-specific hazards.
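
As a rough sketch of how an AI-specific hazard might sit alongside existing entries in a risk register, assuming a simple in-house format rather than any template prescribed by the Act (RiskRegisterEntry and the example values are invented for illustration):

```python
from dataclasses import dataclass


@dataclass
class RiskRegisterEntry:
    """One row of a workplace risk register, extended with an AI-related hazard (illustrative)."""
    hazard: str
    affected: str
    likelihood: int       # 1 (rare) to 5 (almost certain)
    severity: int         # 1 (minor) to 5 (catastrophic)
    controls: list[str]

    @property
    def risk_rating(self) -> int:
        """Simple likelihood x severity rating, as used in many risk matrices."""
        return self.likelihood * self.severity


ai_hazard = RiskRegisterEntry(
    hazard="Collaborative robot misclassifies a worker as an object",
    affected="Assembly line operators",
    likelihood=2,
    severity=5,
    controls=["physical safeguarding", "human oversight of AI decisions", "periodic model revalidation"],
)
print(ai_hazard.risk_rating)  # 10 -> prioritise additional controls
```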

7.3 Responsible AI Procurement:

If your organization procures or uses high-risk AI systems, the AI Act sets new standards for what is considered acceptable. You’ll need to be involved in verifying compliance.

7.4 Training and Upskilling:

Employees using AI systems will need training to understand their capabilities, limitations, and potential risks.  HSE professionals might need to upskill their own knowledge of AI to remain effective.

8. Lessons for the World: The Global Impact of the EU AI Act

The EU AI Act is setting a global precedent. While focused on the EU market, its ripple effects are likely to be felt worldwide.  Here’s why:

8.1 Benchmark for Regulation: 

Other countries will be looking to the EU model as they develop their own AI regulations.

8.2 Competitive Pressure: 

Companies selling high-risk AI systems into the EU market will need to meet its standards, influencing how AI products are designed globally.

8.3 Fostering Global Dialogue: 

The Act is sparking crucial conversations about how to balance the potential of AI with ethical and safety considerations.

9. Final Thoughts on AI Gets Regulated: What the EU AI Act Means for AI Safety

The EU Artificial Intelligence Act is a complex and ambitious piece of legislation. Its success will depend on effective implementation, continuous review, and adaptation as AI technology evolves. While it’s not perfect, it’s a significant step towards ensuring AI works for us, not against us.

As an HSE professional, you have a critical role to play. Embrace this challenge by:

  • staying informed,
  • collaborating with tech teams and proactively managing AI-related risks, 
  • helping to shape a safe, healthy and ethical future for both workplaces and society at large.

I hope this blog post has been helpful in explaining this ground-breaking piece of legislation. Feel free to leave comments and questions below!

Important Note: I’m not a legal expert. This blog post provides an overview but should not be taken as formal legal advice on the EU AI Act.



My name is Brijesh Kumar and I am a freelance HSE professional, committed to helping organizations cultivate a proactive safety culture and ensure compliance with industry standards. With a Master’s degree, a NEBOSH qualification, a mechanical engineering background, and over two decades of hands-on experience, I offer tailored solutions to address your unique HSE challenges. To know more about my HSE services, please visit the About Us or HSE Services webpage.
