
How Organizations Can Manage AI Risks With Guidelines From The EU’s AI Act

The EU’s AI Act outlines a new legal framework that aims to significantly bolster regulation of the development and use of artificial intelligence, addressing ethical questions raised in sectors such as healthcare, education, and energy. It introduces stricter rules around data quality, transparency, human oversight, and accountability. Non-compliance can result in penalties of up to €30 million or 6% of a company’s global annual turnover.

It establishes a classification system that assesses the level of risk an AI technology could pose to health, safety, or fundamental rights, across four risk tiers:

  • Limited risk: These AI systems can be used with minimal requirements, subject to a degree of transparency. Examples include spam filters and video games.
  • Unacceptable risk: These systems are prohibited outright, without exception. Examples include government social scoring and real-time biometric identification systems in public spaces.
  • High-risk systems: These AI models are permitted, but developers and users must adhere to regulations requiring rigorous testing, proper documentation of data quality, and an accountability framework that details human oversight. Examples include autonomous vehicles, medical devices, and critical infrastructure machinery.
  • General purpose AI: These systems are used for varying purposes and carry varying degrees of risk. An example is ChatGPT, which is built on a large language model.

Key lessons from the EU’s Act for managing AI risks:

  1. Risk-based approach: The level of regulatory oversight depends on the potential risks associated with the AI application. The higher the risk, the more stringent the requirements, ensuring that regulation is proportionate to the level of risk.
  2. Transparency and accountability: Users must be able to understand how an AI system makes decisions, and the provider of the system is responsible for any harm it causes.
  3. Ethical and social principles: The Act emphasizes the importance of ethical and social principles in the design and use of AI systems, including human dignity, non-discrimination, fairness, and the protection of privacy and personal data.
  4. Prohibition of certain AI practices: AI practices that pose an unacceptable risk will be prohibited outright.

The act has not yet been fully adopted. In the meantime, companies can assess whether their AI practices comply with the regulation and align with ethical organizational standards, and decide which measures from the EU AI Act to implement.
