EU’s AI Act, the First-Ever Law on AI, Put in Motion

The European Commission’s AI Act is the first comprehensive legal framework to guide the future use of Artificial Intelligence in the bloc and to foster trustworthy AI worldwide.

The European Commission today set the ball rolling on the AI Act, the first-ever comprehensive framework guiding Artificial Intelligence usage in the bloc, with the ambition to foster trustworthy AI worldwide.   

The law, agreed upon in 2023 and a global first, addresses the risks of AI and stipulates clear requirements and obligations for developers and deployers in relation to specific uses of Artificial Intelligence. Additionally, the regulation aims to reduce the administrative and financial burden on businesses, especially Small and Medium-sized Enterprises (SMEs). For deeper insights, do read our blog on Trustworthy AI from the RECLAIM open training here.

In this regard, the European Commission had already announced a set of policy measures, the AI Innovation Package, to support SMEs and startups in Europe in developing trustworthy AI in line with the EU’s rules. Together with the Coordinated Plan on AI, the AI Act forms part of a wider package of AI-related policy measures guiding the development and deployment of AI in the bloc.

The primary goal of the Act is to foster trust by ensuring that Europeans feel safe when working with AI. But AI brings multiple risks: an AI system might, for example, take a decision or action that is difficult to assess or review, leaving the people affected at a disadvantage.

To manage these risks, the regulation establishes the following set of rules:

  • 🔴 Unacceptable-risk AI, such as social scoring, is banned.
  • 🟠 High-risk AI, including recruitment tools and applications in medical devices, must meet strict requirements.
  • 🟡 Limited-risk AI, like chatbots, must inform users that they are interacting with AI.
  • 🟢 Minimal-risk AI, for example spam filters, follows existing rules with no additional obligations.

The Act follows a risk-based approach with four tiers arranged as a pyramid, from top to bottom: unacceptable risk, high risk, limited risk and minimal risk. The obligations scale with the level of risk.
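To make the tiering concrete, here is a minimal sketch that models the four categories as a simple lookup in Python. The example systems and the one-line obligation summaries are our own simplifications for illustration, not legal guidance or text from the Act.

```python
# Illustrative sketch only: the tier names come from the Act, but the example
# use cases and obligation summaries are simplified assumptions, not legal advice.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements before and after market entry"
    LIMITED = "transparency obligations (e.g. disclose AI interaction)"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of example systems to tiers, for illustration.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "AI component in a medical device": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for system, tier in EXAMPLE_SYSTEMS.items():
        print(f"{system}: {tier.name} -> {tier.value}")
```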

You can find more information on the EU AI Act here.

For large AI models, such as general-purpose AI, the EU has put in place transparency obligations as well as additional risk-management duties, including self-assessment, mitigation of systemic risks and model evaluations. Provisions to amend the regulation as AI technology evolves have also been included, ensuring not only that the framework is future-ready but also that AI remains trustworthy after it has been placed on the market.

Enforcement and monitoring of the regulation will be handled by the European AI Office, established earlier this year, which will also foster an environment in which human rights and trust are upheld.

As far as RECLAIM is concerned, AI is an essential component of the project. To ensure that RECLAIM’s activities in AI and gaming adhere to ethical standards and do not negatively impact society, a comprehensive set of ethical principles will be established by the partners IRIS, RBNS, and UoM. These principles will ensure compliance with EU regulations and the GDPR. Additionally, a risk assessment and data protection strategy will be implemented throughout the project. FORTH, as the coordinator, will actively monitor the process to safeguard rights and safety, in line with the ethical guidelines of the Montreal Declaration for Responsible AI Development.

In addition to complying with ethical standards on AI, RECLAIM employs an AI-based Identification, Localisation and Categorisation (AI-ILC) module for enhanced material recovery. RECLAIM Objective 3 outlines the combination of state-of-the-art hyperspectral imaging with RGB-based computer vision and Convolutional Neural Network (CNN) technology to implement an efficient AI module for recyclable Identification, Localisation and Categorisation, known as the AI-ILC module.
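As a rough illustration of how co-registered RGB and hyperspectral inputs could feed a single CNN classifier, the sketch below shows a simple late-fusion network in PyTorch. The layer sizes, the number of hyperspectral bands and the number of material classes are assumptions chosen for demonstration and do not describe the actual AI-ILC architecture.

```python
# Minimal sketch of a dual-stream CNN that fuses RGB and hyperspectral inputs
# for recyclable categorisation. NUM_BANDS, NUM_CLASSES and the layer sizes
# are illustrative assumptions, not the real AI-ILC design.
import torch
import torch.nn as nn

NUM_BANDS = 32      # assumed number of hyperspectral bands
NUM_CLASSES = 5     # assumed material categories (e.g. PET, HDPE, paper, ...)

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions followed by 2x2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

class DualStreamClassifier(nn.Module):
    """Separate RGB and hyperspectral encoders, fused before classification."""
    def __init__(self):
        super().__init__()
        self.rgb_encoder = nn.Sequential(conv_block(3, 16), conv_block(16, 32))
        self.hsi_encoder = nn.Sequential(conv_block(NUM_BANDS, 16), conv_block(16, 32))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(32 + 32, NUM_CLASSES)

    def forward(self, rgb, hsi):
        f_rgb = self.pool(self.rgb_encoder(rgb)).flatten(1)
        f_hsi = self.pool(self.hsi_encoder(hsi)).flatten(1)
        return self.head(torch.cat([f_rgb, f_hsi], dim=1))

if __name__ == "__main__":
    model = DualStreamClassifier()
    rgb = torch.randn(2, 3, 128, 128)          # batch of RGB crops
    hsi = torch.randn(2, NUM_BANDS, 128, 128)  # co-registered hyperspectral cubes
    print(model(rgb, hsi).shape)                # torch.Size([2, NUM_CLASSES])
```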

During the project, extensive data will be collected from RGB and hyperspectral cameras and used to develop CNN solutions for efficient and effective ILC. Initial data annotation will be performed using a new, fast annotation tool for recyclables, allowing experienced recycling workers to quickly gather a large volume of labelled data. Further data annotation will involve citizen participation through the Recycling Data Game (RDG) platform.
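For illustration, a single label, whether produced with the fast annotation tool or through the RDG, might be captured in a small record like the sketch below; the field names and schema are hypothetical, not RECLAIM’s actual data format.

```python
# Hypothetical annotation record combining worker-tool and citizen (RDG) labels.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RecyclableAnnotation:
    frame_id: str                      # source RGB/hyperspectral frame
    bbox: Tuple[int, int, int, int]    # (x, y, width, height) in pixels
    material: str                      # e.g. "PET", "HDPE", "paper"
    source: str                        # "worker_tool" or "rdg_citizen"

annotations = [
    RecyclableAnnotation("frame_0001", (34, 80, 120, 95), "PET", "worker_tool"),
    RecyclableAnnotation("frame_0001", (210, 40, 60, 70), "paper", "rdg_citizen"),
]
print(len(annotations), "labels collected")
```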

Subsequently, this collected data will be used to evaluate different approaches for the optimal implementation of the AI-ILC module.  

To enhance AI-ILC performance, RGB and hyperspectral data will be recorded for items with low categorisation confidence. These poorly categorised items will be sent to the RDG for citizen-powered annotation and then added to the training data to continually improve AI-ILC accuracy. For more info, read our blogpost on AI and Waste Sorting in RECLAIM.
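The sketch below shows one way such a confidence-based feedback loop could be wired up, assuming a softmax classifier: predictions whose top-class probability falls below a threshold are routed to citizen annotation (stubbed out here) and appended to the training set. The 0.8 threshold and all function names are illustrative assumptions, not RECLAIM’s implementation.

```python
# Sketch of a confidence-based re-annotation loop: low-confidence predictions
# are sent for citizen labelling and folded back into the training data.
import torch

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for "poorly categorised" items

def select_low_confidence(logits: torch.Tensor) -> list:
    """Indices of predictions whose top softmax probability is below the threshold."""
    confidences = torch.softmax(logits, dim=1).max(dim=1).values
    return torch.nonzero(confidences < CONFIDENCE_THRESHOLD).flatten().tolist()

def request_rdg_label(item_id: int) -> str:
    """Stub standing in for citizen-powered annotation on the RDG platform."""
    return "PET"  # placeholder label

if __name__ == "__main__":
    # Fake logits for a batch of 4 items and 5 material classes.
    logits = torch.tensor([
        [4.0, 0.1, 0.1, 0.1, 0.1],   # confident
        [0.6, 0.5, 0.4, 0.3, 0.2],   # uncertain
        [0.2, 0.1, 3.5, 0.1, 0.1],   # confident
        [0.9, 0.8, 0.7, 0.6, 0.5],   # uncertain
    ])
    training_set = []
    for idx in select_low_confidence(logits):
        label = request_rdg_label(idx)
        training_set.append((idx, label))   # re-annotated item joins the training data
    print(f"{len(training_set)} low-confidence items sent for re-annotation")
```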

Conclusion

Overall, the AI Act is a welcome step, especially for EU-funded projects such as RECLAIM that use AI for enhanced material identification, localisation and categorisation. The Act provides the guidance needed to implement AI-driven technologies ethically, upholding the human rights and dignity of all, while the regulation itself will be revised iteratively to keep pace with evolving AI technologies.
