European Commission proposes amendments to the AI Act to simplify European Union digital regulation
As part of the Digital Package presented on November 19 by the European Commission, the proposed Digital Omnibus Regulation on AI (the “Proposal”) seeks to amend Regulation (EU) 2024/1689 (“AI Act”). Specifically, its aims are to facilitate gradual implementation, strengthen legal certainty and reduce compliance burdens, particularly for small and medium-sized enterprises (“SMEs”) and companies classified as small mid-caps (“SMCs”).
The reform reflects the need to align the AI Act’s rollout with technical standards, interpretative guidance, and the designation of national authorities responsible for enforcement. The proposed changes affect core aspects of the regime. These include application timelines, transparency obligations, the role of the AI Office, requirements for high-risk systems, and proportionality measures for certain economic operators.
Key aspects of the Digital Omnibus on AI
Extension of application deadlines under the AI Act (Article 113 AI Act)
A first set of amendments adjusts the implementation timelines: revised Article 113 introduces targeted extensions for key obligations affecting both providers and users of high-risk AI systems.
Under the original regulation, obligations for high-risk systems apply from (i) August 2, 2026, for AI systems listed in Article 6.2 and Annex III; and (ii) August 2, 2027, for AI systems intended to be used as a safety component of a product, or that are themselves products, covered by the Union harmonization legislation listed in Annex I (Article 6.1).
The Proposal allows the Commission to extend these deadlines by up to six months for the first group and up to twelve months for the second. However, under no circumstances may the deadlines extend beyond December 2, 2027, and August 2, 2028, respectively.
This change addresses practical implementation challenges, as many key technical standards remain pending. These include standards on data management, lifecycle governance, technical documentation, or system robustness, which are yet to be adopted by the pertinent European standardization bodies.
Providers of generative AI systems placed on the market before August 2, 2026, will receive an additional six months to comply with Article 50.2. That provision concerns content labeling and machine-readable signals for artificially generated or manipulated content.
Through this measure, the Commission seeks to prevent competitive distortions between established players and new market entrants.
Removal of direct obligation to ensure AI literacy (Article 4 AI Act)
One of the most significant amendments concerns Article 4. The current provision requires providers and deployers to ensure AI literacy among their employees.
The Proposal removes this direct obligation and replaces it with an institutional mandate. The Commission and Member States will be required to “promote AI literacy” and “encourage” providers and users to adopt proportionate training measures.
This change aims to reduce administrative burdens and avoid overlap with existing obligations on employee competence for operating high-risk systems. Those requirements, set out in Article 9 et seq. of the AI Act, remain unchanged.
Rebalancing classification and registration regime for high-risk systems (Articles 6 and 49 AI Act)
Article 6.3 currently allows providers to assess whether a system, despite operating in an area included in Annex III, should be considered not high-risk due to its intended use or purpose.
The Proposal removes the obligation to register such systems in the European database provided in Article 49 of the AI Act. However, providers must still document and justify their assessment to the competent authorities upon request.
This change reduces bureaucracy and expedites market entry.
Strengthening the AI Office’s role (Article 75 AI Act)
The Proposal amends Article 75 to expand the supervisory remit of the Commission’s AI Office. The AI Office is designated as the competent authority for overseeing and enforcing obligations related to AI systems based on general-purpose AI (GPAI) models. This competence applies when the same provider develops both the model and the AI system built upon it.
The proposed reform also authorizes the AI Office to supervise AI systems integrated into very large online platforms and very large online search engines, services that fall under Regulation (EU) 2022/2065 (the Digital Services Act, “DSA”).
In this context, the AI Office must coordinate enforcement of the AI Act with the authorities responsible for DSA supervision, thereby avoiding duplication of documentation, information requests and sanctions.
Proportionality measures for SMEs and SMCs (Article 99 AI Act)
The Proposal expands Article 99 to cover SMCs, as defined in Recommendation EU 2025/199, in addition to SMEs. Under that definition, SMCs are companies that do not meet the SME thresholds set out in Recommendation 2003/361/EC but employ fewer than 750 people and whose annual turnover does not exceed €150 million or whose annual balance sheet total does not exceed €129 million.
The amendments introduce proportionate sanctions for SMCs. These include reduced administrative fines, simplified technical documentation and proportionate conformity assessment procedures. They also provide for adapted quality management systems, which mirror those already available to SMEs.
This reform aligns with a central objective of the Digital Package: ensuring that regulation does not create unjustified barriers for mid-sized European innovators.
Strengthening regulatory sandboxes and real-world testing (Articles 57–58 and 60 AI Act)
The Proposal reinforces regulatory sandboxes and real-world testing frameworks.
Specifically, it establishes an EU-wide sandbox managed by the AI Office. Operational from 2028, the sandbox will include enhanced cooperation mechanisms between authorities and priority access for SMEs.
When projects require real-world testing within the sandbox, authorities will consolidate testing into a single sandbox plan, avoiding duplicate procedures, simplifying compliance and improving coordination between authorities.
Article 60 extends real-world testing to include high-risk AI systems listed in Annex I, Section A. Previously, testing focused only on systems listed in Annex III.
In addition, a new Article 60a will enable testing for products listed in Annex I, Section B, through agreements between the Commission and Member States.
Data protection exception for bias detection and mitigation (new Article 4a AI Act)
The Digital Omnibus introduces a legal basis for limited data processing within the AI Act. Specifically, on an exceptional basis, providers and deployers of high-risk AI systems may process special categories of personal data (as defined under Regulation (EU) 2016/679 (GDPR)) to detect and correct bias.
This exception applies only when strictly necessary, and strict safeguards attach: data minimization, appropriate security measures, restrictions on further access, and deletion of the data once the objective is achieved.
The provision aims to clarify and streamline lawful processing for bias mitigation, and it reduces legal uncertainty for providers while safeguarding fundamental rights.
The amendments proposed in the Digital Omnibus on AI are, for the time being, merely proposals. Before entering into force, the Proposal must complete the ordinary legislative procedure, including trilogue negotiations, and be approved by the Council of the EU and the European Parliament.