Anticipation lies like fog over the Brussels AI bubble, as the third draft of the Code of Practice (CoP) guidelines for providers of general-purpose AI (GPAI) systems like ChatGPT, originally scheduled for the week of 17 February, has still not been published.
Tensions are high, with Big Tech threatening not to sign the CoP, the Commission pushing for simplification, and some civil society organisations feeling so ignored that they want to withdraw so as not to legitimise the process.
“The real battle will likely begin once the third draft is published,” one civil society representative told Euractiv.
Several participants have expressed this sense of anticipation to Euractiv, though some pointed out that tensions have already been running high for the past couple of months.
A global fight, version three
It’s not the first time that industry, civil society, rightsholders organisations and regulators will clash over how to regulate GPAI.
The AI Act was designed before the advent of ChatGPT, with no targeted provisions for the suddenly ubiquitous GPAI models. The Act focused on specific use cases, but GPAI models are flexible: a single model can be integrated across use cases, potentially posing systemic risks.
After GPAI rules were introduced, a Franco-German-Italian coalition spearheaded by French challenger Mistral AI nearly succeeded in removing them, over fears that they would prevent the EU from competing.
Mistral's subsequent announcement of a partnership with Microsoft was therefore not well received, raising questions about exactly who would benefit from weaker GPAI requirements. The partnership continued, and Mistral is still seen as the main European GPAI contender.
In August, Euractiv reported on the fierce fight over a controversial California AI bill that experts said could complement and enhance the EU’s AI regulation for GPAI models.
Despite a solid majority in both chambers of California's state legislature and among the public, the bill was ultimately killed by Governor Gavin Newsom following massive pressure from industry, Congressional Democrats and the US Chamber of Commerce.
In both cases, industry ramped up pressure in the crucial final phase of the political process.
The EU's CoP is now poised to become the most detailed legally backed set of guidelines for GPAI providers so far, and could set the global standard for GPAI best practices. It is entering its final phase.
Timeline
The third draft is the last before the final version of the Code, itself due by 2 May. After that, the Code must be approved by the Commission's AI Office and by member states via the AI Board, and then implemented by the Commission before 2 August.
The Commission's timeline now lists "February-March" for the third draft, with working group meetings, AI Board subgroup meetings and a summary plenary with the chairs leading the drafting expected in "March," and an AI Board meeting on 11 March.
On Thursday, the AI Office informed participants that a workshop titled "CSO perspectives on the Code of Practice," planned for 7 March, will be rescheduled to allow sufficient time to review the draft and provide meaningful feedback.
“We will inform you of the third draft’s publication date as soon as possible,” the mail to the participants said.
The delay also affects the AI Office's template for summarising GPAI training data: a presentation of the template, along with draft GPAI guidelines for working group three, was originally planned for the end of February.
Battlegrounds
Three major battle lines have emerged throughout the process: mandatory third-party testing, the Code's risk taxonomy, and issues related to copyright and transparency.
An "AI safety" coalition concerned with future large-scale AI risks argues that no real compliance is ensured if companies do their own testing.
A human rights-focused coalition says risks to fundamental rights are watered down in the Code, especially because they are insufficiently reflected in the risk taxonomy. It supports third-party testing but prioritises different risks than the safety coalition.
Rightsholders are fighting for strict application of copyright rules and for a requirement that GPAI companies publish detailed summaries of their training data.
Industry argues that both the third-party testing and copyright requirements go beyond the AI Act, and that the CoP is merely a tool for complying with the Act. It also worries that the risk taxonomy is too vague and insufficiently grounded in documented real-world risks.
While these concerns are often championed by Big Tech lobbies representing large US companies in Brussels, a Polish AI cluster recently sent a letter to the EU’s tech chief Henna Virkkunen, saying the EU “risks creating a bureaucratic monster” with the current CoP.
All sides complain that they are not sufficiently heard in the drafting process, with industry noting that GPAI providers make up only 5% of the stakeholders involved, and others saying industry is too influential.
Thirteen primarily academic experts are attempting to balance these concerns. They also bring their own views: Yoshua Bengio experienced a sour defeat over the California bill, and Marietje Schaake recently authored a book titled The Tech Coup: How to Save Democracy from Silicon Valley.
Still, the Commission and member states have the final word. And companies can decide to lawyer up instead of signing up to the Code. Meta has already said it won't sign the Code in its current form, and Google is threatening the same.
The third CoP draft, due any day now, will shape the next wave of controversy.