EU ‘final’ talks to fix AI rules to run into second day — but deal on foundational models is on the table

As European Union lawmakers clock up 20+ hours of negotiating time in a marathon attempt to reach agreement on how to regulate artificial intelligence, a preliminary accord has been reached on one sticky element: rules for foundational models/general purpose AIs (GPAIs), according to a leaked proposal TechCrunch has reviewed.

In recent weeks there has been a concerted push, led by French AI startup Mistral, for a total regulatory carve-out for foundational models/GPAIs. But EU lawmakers appear to have resisted the full-throttle push to leave such models to the market, as the proposal retains elements of the tiered approach to regulating these advanced AIs that the parliament proposed earlier this year.

That said, there is a partial carve-out from some obligations for GPAI systems that are provided under free and open source licences (stipulated to mean that their weights, along with information on model architecture and model usage, are made publicly available), with some exceptions, including for “high risk” models.

Reuters has also reported on partial exceptions for open source advanced AIs.

Per our source, the open source exception is further bounded by commercial deployment: if/when such an open source model is made available on the market or otherwise put into service, the carve-out would no longer stand. “So there is a chance the law would apply to Mistral, depending on how ‘make available on the market’ or ‘putting into service’ are interpreted,” our source suggested.

The preliminary agreement we’ve seen retains classification of GPAIs with so-called “systemic risk”, with the criterion for a model getting this designation being that it has “high impact capabilities”, including when the cumulative amount of compute used for its training, measured in floating point operations (FLOPs), is greater than 10^25.

At that level, very few current models appear likely to meet the systemic risk threshold, suggesting few cutting-edge GPAIs would face upfront obligations to proactively assess and mitigate systemic risks. So Mistral’s lobbying appears to have softened the regulatory blow.
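For a sense of scale, a common back-of-the-envelope estimate for transformer training compute is roughly 6 FLOPs per parameter per training token. The sketch below is our own illustration, not anything from the proposal text; the model size and token count are assumptions picked for the example, and it simply checks such an estimate against the draft’s 10^25 FLOPs threshold.

```python
# Back-of-the-envelope check against the draft AI Act's 10^25 FLOPs threshold.
# Uses the common ~6 * parameters * training-tokens approximation for
# transformer training compute; real figures vary with architecture and setup.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold in the leaked preliminary text

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_parameters * n_training_tokens

# Hypothetical example: a 70B-parameter model trained on 2 trillion tokens.
flops = estimated_training_flops(n_parameters=70e9, n_training_tokens=2e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")  # ~8.4e+23
print(f"Above systemic-risk threshold? {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")  # False
```

On that arithmetic, a hypothetical 70B-parameter model trained on 2 trillion tokens would come in around 8.4 × 10^23 FLOPs, well under the threshold, which is consistent with the point that very few current models would be caught.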

Under the preliminary agreement, other obligations for providers of GPAIs with systemic risk include undertaking evaluation with standardized protocols and state-of-the-art tools; documenting and reporting serious incidents “without undue delay”; conducting and documenting adversarial testing; ensuring an adequate level of cybersecurity; and reporting actual or estimated energy consumption of the model.

Elsewhere there are some general obligations for providers of GPAIs, including testing and evaluation of the model and drawing up and retaining technical documentation, which would need to be provided to regulatory authorities and oversight bodies on request.

They would also need to provide downstream deployers of their models (aka AI app makers) with an overview of the model’s capabilities and limitations to support their ability to comply with the AI Act.

The text of the proposal also requires foundational model makers to put in place a policy to respect EU copyright law, including with regard to limitations copyright holders have placed on text and data mining. Plus they must provide a “sufficiently detailed” summary of training data used to build the model and make it public, with a template for the disclosure being provided by the AI Office, an AI governance body the regulation proposes to set up.

We understand this copyright disclosure summary would still apply to open source models, standing as another exception to their carve-out from the rules.

The text we’ve seen contains a reference to codes of practice, which the proposal says GPAIs — and GPAIs with systemic risk — may rely on to demonstrate compliance, until a “harmonized standard” is published.

It envisages the AI Office being involved in drawing up such codes, while the Commission is expected to issue standardization requests on GPAIs starting six months after the regulation enters into force, such as asking for deliverables on reporting and documentation of ways to improve the energy and resource use of AI systems, with regular reporting on its progress in developing these standardized elements (two years after the date of application, and then every four years).

Today’s trilogue on the AI Act actually started yesterday afternoon, but the European Commission has looked determined that it will be the final knocking together of heads between the Council, the Parliament and its own staffers on this contested file. (If not, as we’ve reported before, there is a risk of the regulation getting put back on the shelf as EU elections and fresh Commission appointments loom next year.)

At the time of writing talks to resolve several other contested elements of the file remain ongoing and there are still plenty of extremely sensitive issues on the table (such as biometric surveillance for law enforcement purposes). So whether the file makes it over the line remains unclear.

Without agreement on all components there can be no deal to secure the law, so the fate of the AI Act remains up in the air. But for those keen to understand where co-legislators have landed when it comes to responsibilities for advanced AI models, such as the large language model underpinning the viral AI chatbot ChatGPT, the preliminary deal offers some steerage on where lawmakers look to be headed.

In the past few minutes, the EU’s internal market commissioner, Thierry Breton, has tweeted to confirm the talks have finally broken up, but only until tomorrow. The epic trilogue is slated to resume at 9am Brussels time, so the Commission still looks set on getting the risk-based AI rulebook it proposed all the way back in April 2021 over the line this week. Of course that will depend on finding compromises that are acceptable to the co-legislators, the Council and the Parliament. And with such high stakes, and such a highly sensitive file, success is by no means certain.


