The European Union is reportedly considering weakening its landmark AI Act in response to opposition from major technology firms and the US government, according to a Financial Times report. The report cited a draft document outlining the proposed modifications, which the outlet had reviewed, as well as a conversation with a senior EU official who wished to remain anonymous.
TL;DR
- EU may weaken its AI Act due to pressure from US and tech firms.
- Proposed changes include a one-year grace period for high-risk AI systems.
- Penalties for transparency violations might be postponed until August 2027.
- Tech companies and startups argue the Act is too complex and stifles innovation.
These proposed adjustments are included in the European Commission's recently unveiled “simplification agenda” and “efforts to create a more favorable business environment” initiatives for the region. In September, the European Commission opened a call for evidence in an effort to collect research on how to simplify its legislation around data, cybersecurity, and artificial intelligence (AI).
The unnamed senior EU official told the Financial Times that Brussels has been “engaging” with the Trump administration on potential adjustments to the AI Act and other digital regulations as part of a broader effort to streamline the laws.
The European Commission's representatives informed Coins2Day that the commission “will always remain fully behind the AI Act and its objectives.”
“When it comes to potentially delaying the implementation of targeted parts of the AI Act, a reflection is still ongoing within the Commission,” Thomas Regnier, a Commission spokesperson, said in a statement. “Various options are being considered, but no formal decision has been taken at this stage.”
The EU's landmark AI Act, widely regarded as one of the world's most stringent AI regulations, is the subject of several proposed modifications. Passed in 2024, the act bans certain uses of AI, such as social scoring and real-time facial recognition, and imposes strict rules on the use of AI in areas deemed “high-risk,” such as healthcare, policing, and employment. The regulation extends beyond EU-based companies to cover any business that provides AI products or services to individuals in Europe. It also mandates rigorous transparency standards for international companies and punishes violations with substantial fines.
Under a draft proposal reviewed by the Financial Times, companies that have deployed so-called high-risk AI systems could receive a one-year “grace period” before enforcement begins. The delay would allow firms in these high-risk domains that are already deploying AI to make adjustments “without disrupting the market,” according to the draft document.
The proposal, currently being discussed internally by the Commission and EU member states, may still undergo changes before its anticipated approval on November 19. Even once finalized, it would need approval from a majority of EU countries and the European Parliament before taking effect.
The Commission is also considering postponing the start date for penalties related to transparency violations under the AI Act. If approved, fines for non-compliance would not take effect until August 2027, giving companies and AI developers “sufficient time” to adjust to the new obligations.
The Act has been criticized by tech companies and startups, which argue that its rules are overly complex and risk stifling innovation in Europe by creating high compliance costs and bureaucratic hurdles. Global tech firms, including Meta and Alphabet, have warned that the Act’s broad definitions of “high-risk” AI could discourage experimentation and make it harder for smaller developers to compete.
The Trump administration has also been critical of Europe’s regulatory approach to AI. At the Paris AI Summit earlier this year, U.S. Vice President J.D. Vance publicly warned that “excessive regulation” of AI in Europe could cripple the emerging industry, in a rebuke to European efforts, including the AI Act. In contrast, the Trump administration has taken a relatively light-touch approach to AI regulation, arguing instead that innovation should be prioritized amid a global AI arms race with China. Most U.S. AI regulation is being passed at the state level, with California adopting some of the strictest rules for the emerging tech.