EU Commission Consultation on Draft General-Purpose AI Model Guidelines

Our Artificial Intelligence team outlines the key points in the EU’s draft guidelines on general-purpose AI models, offering insights on definitions, downstream provider obligations, and enforcement under the AI Act. With obligations taking effect from 2 August 2025, the draft guidelines are essential reading for providers and downstream providers preparing for compliance.
The AI Office launched a multi-stakeholder consultation in April 2025 on the proposed general-purpose AI guidelines. We provide an overview of the proposed guidelines.
Overview
The draft guidelines currently cover the following topics:
- What is a general-purpose AI model, or “GPAIM”, including what is a new distinct model versus a modified version
- Who is the provider of a GPAIM, including when a downstream provider becomes subject to GPAIM provider obligations
- What constitutes placing on the market, and the criteria for the open-source exemptions
- Methods for estimating the computational resources used to train or modify a model
- Transitional rules, grandfathering, and retroactive compliance, and
- Supervision and enforcement of the GPAIM obligations
The final GPAIM guidelines and Code of Practice (COP) were expected to be published in May or June 2025; however, there have already been reports of a delay until July. In addition, the AI Office will publish separate guidelines containing the template for the summary of the content used for training.
The draft guidelines on GPAIM will be non-binding. This is because authoritative interpretation may only be given by the Court of Justice of the European Union. Despite their non-binding nature, they will provide important clarifications on how the AI Office will interpret and apply the obligations under the AI Act. In particular, the AI Office notes its exclusive responsibility for the supervision and enforcement of the obligations of providers of GPAIMs. The guidelines are expected to evolve over time and will be updated as necessary, particularly in light of evolving technological developments.
What is a GPAIM?
The AI Office’s preliminary approach is to assess whether a model is a GPAIM on the basis of the amount of computational resources used to train it, which combines model and data size. Where a model can generate text and/or images and the amount of computing power used to train it exceeds 10²² FLOPs (floating-point operations), it is presumed to be a GPAIM.
The draft guidelines do not provide a specific list of tasks to help providers classify their models. In addition, while benchmarks and other tools to evaluate model capabilities can be used in “some cases”, the AI Office’s view is that they are “too immature” to provide a “reliable criteria” to classify GPAIMs.
The AI Office notes that the assessment of a model depends not only on the training computing power threshold but also on the modality and other characteristics of the data used for training. As a result, even if the model meets the FLOP threshold, the presumption of being a GPAIM may be rebutted based on the modality or other specific characteristics of the training data. For example, where a model is only usable for transcription of speech, it should not be considered capable of competently performing a wide range of tasks, even if its training computing power meets the threshold. Equally, if a model does not meet the training computing power threshold, then the model is presumed to lack sufficient generality and capabilities to be a GPAIM, unless there is evidence to the contrary.
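The presumption logic described above can be sketched as a simple decision rule. This is an illustrative simplification only: the 10²² FLOP figure comes from the draft guidelines, but the function name and the boolean inputs are stand-ins for what is in practice a fuller legal assessment of modality and training-data characteristics.

```python
# Illustrative sketch of the draft guidelines' GPAIM presumption.
# The 1e22 FLOP threshold is from the draft guidelines; the boolean
# inputs are simplified stand-ins for a broader legal assessment.

GPAIM_PRESUMPTION_FLOP = 1e22  # training compute threshold

def presumed_gpaim(training_flop: float,
                   generates_text_or_images: bool,
                   narrow_single_task: bool = False) -> bool:
    """Return True if the model is presumed to be a GPAIM.

    The presumption is rebuttable in both directions: e.g. a
    speech-transcription-only model is not a GPAIM even above the
    threshold, and a model below the threshold may still be a GPAIM
    if there is evidence to the contrary.
    """
    if training_flop > GPAIM_PRESUMPTION_FLOP and generates_text_or_images:
        # Presumption may be rebutted by the modality or other
        # characteristics of the training data.
        return not narrow_single_task
    # Below the threshold: presumed NOT a GPAIM (also rebuttable).
    return False

# A 3e22-FLOP text-generating model is presumed to be a GPAIM:
assert presumed_gpaim(3e22, True) is True
# The same compute spent on a transcription-only model rebuts it:
assert presumed_gpaim(3e22, True, narrow_single_task=True) is False
```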
The draft guidelines also consider when a GPAIM becomes a new distinct model and when it is simply a modified version of the same GPAIM. The AI Office’s preliminary approach is to treat the large pre-training run as the beginning of the lifecycle of a GPAIM. The draft guidelines also clarify that where the same entity carries out another large pre-training run, the result is a distinct model.
The draft guidelines confirm that fine-tuning is just one method of modifying a model. If the same organisation or entity modifies a GPAIM using more than a third of the original model’s training compute (roughly 3 × 10²¹ FLOPs), the modified model will be treated as a new, distinct GPAIM rather than a new version. For GPAIMs with systemic risk, a modified model will be treated as a distinct new model if the changes lead to a significant change in systemic risk. This is presumed to occur where the modification involves more than a third of the computing power originally used to train the GPAIM, roughly 3 × 10²⁴ FLOPs. All other modifications of a GPAIM by the same entity that are based on the same large pre-training run result in a new version of that model.
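The one-third arithmetic above can be made concrete with a short sketch. The function name and inputs are illustrative, not terms from the draft guidelines; it simply encodes the "more than one third of the original training compute" cut-off for the same entity modifying its own model.

```python
# Illustrative arithmetic for the draft guidelines' "one third of
# original training compute" rule on modifications by the same entity.

def is_distinct_model(original_training_flop: float,
                      modification_flop: float) -> bool:
    """A modification using more than one third of the original
    training compute yields a new, distinct GPAIM; anything less
    results only in a new version of the same model."""
    return modification_flop > original_training_flop / 3

# For a model trained with 1e22 FLOPs, the cut-off is roughly
# 3e21 FLOPs, as the draft guidelines note:
assert is_distinct_model(1e22, 4e21) is True   # distinct new GPAIM
assert is_distinct_model(1e22, 2e21) is False  # new version only
```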
In practice, this means:
- New distinct models: each distinct model requires its own documentation under Article 53(1)(a), (b) and (d)
- New versions: require existing documentation under Article 53(1)(a), (b) and (d) to be updated
- GPAIMs with systemic risk: systemic risk assessment is required for each distinct model
- Copyright policy: only a single copyright policy is required irrespective of whether there are new distinct models or new versions.
Downstream providers
The draft guidelines provide examples of who will be the provider of a GPAIM. They confirm that third parties who develop GPAIMs for customers who ultimately place those GPAIMs on the market are not the providers of the GPAIMs.
In the context of downstream providers that modify a GPAIM, the AI Office’s preliminary approach is to use thresholds of computational resources used for the modification to determine whether the downstream provider should be presumed to be the provider of the modified GPAIM. A downstream provider will be considered the provider of a GPAIM where the compute used to modify the model exceeds one third of the original training threshold, currently around 3 × 10²¹ FLOPs. In these cases, the downstream provider is responsible for the GPAIM obligations in respect of the modification: documentation updates need only concern the modification, and the copyright policy and summary of training data need only take into account the data used as part of the modification.
For GPAIMs with systemic risk, the threshold is also one third of the computational resources used, roughly 3 × 10²⁴ FLOPs. The downstream provider may also become subject to the provider obligations for GPAIMs with systemic risk where the original GPAIM does not have systemic risk but the downstream provider knows, or can reasonably be expected to know, that the cumulative amount of computational resources used to train and modify the model exceeds the threshold set in Article 51(2), currently 10²⁵ FLOPs. Importantly, downstream providers who become the provider of a GPAIM with systemic risk are not limited to complying only in respect of their modifications: a new systemic risk assessment is required, and the notification obligation also applies in this instance.
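The cumulative-compute trigger for downstream providers can be sketched as follows. The function name and inputs are illustrative; the 10²⁵ FLOP figure is the current Article 51(2) threshold cited in the draft guidelines, and the knowledge condition ("knows or can reasonably be expected to know") is noted in a comment rather than modelled.

```python
# Illustrative check for when a downstream provider's modification
# brings a GPAIM into the systemic-risk regime under Article 51(2).

SYSTEMIC_RISK_FLOP = 1e25  # current Article 51(2) threshold

def modification_triggers_systemic_risk(original_training_flop: float,
                                        modification_flop: float) -> bool:
    """True if the cumulative compute used to train and modify the
    model exceeds the Article 51(2) threshold. In practice this only
    bites where the downstream provider knows, or can reasonably be
    expected to know, these compute figures."""
    return (original_training_flop + modification_flop) > SYSTEMIC_RISK_FLOP

# A 9e24-FLOP base model modified with a further 2e24 FLOPs crosses
# the 1e25 threshold; a smaller base model does not:
assert modification_triggers_systemic_risk(9e24, 2e24) is True
assert modification_triggers_systemic_risk(5e24, 2e24) is False
```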
If the downstream provider does not modify the GPAIM but only integrates the GPAIM into their AI system, then that downstream provider is only required to comply with the AI system obligations.
Placing on the market
The draft guidelines provide examples of placing on the market of GPAIMs. Examples include:
- Making a GPAIM available via a cloud computing service
- Copying a GPAIM onto a customer’s own infrastructure
- Integrating a GPAIM into a chatbot or mobile application, and
- Integrating a GPAIM into the provider’s own products or services that are made available.
The AI Office notes that these examples should be interpreted in accordance with the Blue Guide.
Open source models
The draft guidelines contain three conditions to benefit from the open-source exemption:
- The GPAIM is released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model. Notably, the draft guidelines provide some context and clarity on each of these concepts.
- The parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available.
- The GPAIM is not a GPAIM with systemic risk.
Systemic risk
As set out, certain thresholds are used to classify GPAIMs including when a GPAIM is modified into a distinct new GPAIM. The draft guidelines outline two widely-used approaches that may be used to estimate the computing power used for training or modifying an AI model.
The details of these methodologies and their selection over alternatives should be reviewed by technical teams.
Grandfathering provision
The draft guidelines confirm that GPAIMs that benefit from the grandfathering provisions do not require re-training or unlearning. This exemption applies where:
- Copyright compliance is not possible for past actions, or where information on training data is not available, or
- Retrieval would be disproportionate for the provider.
In these cases, this must be clearly justified and disclosed in the copyright policy and the summary of content used for training.
The AI Office suggests that model providers who foresee difficulties with complying with their obligations should proactively inform the AI Office how and when they will take the necessary steps to comply with their obligations.
Importantly, the draft guidelines suggest that providers who have trained or are training a GPAIM with systemic risk with a view to launching after 2 August 2025 are required to notify the AI Office within two weeks after 2 August 2025.
General-Purpose AI Code of Practice
The draft guidelines confirm that the AI Office is expected to focus on monitoring whether signatories are adhering to the general-purpose AI Code of Practice (COP). The commitments under the COP only become relevant for assessing compliance once the GPAIM obligations come into force on 2 August 2025.
For non-signatories, the draft guidelines note that these providers will be expected to explain how they comply with their obligations under the AI Act via other adequate, effective and proportionate means, such as by carrying out a gap analysis. In addition, non-signatories may be subject to more requests for information and access to conduct model evaluations.
Enforcement
The draft guidelines confirm that the AI Office will supervise and enforce the obligations for GPAIM providers. Additional clarifications are also provided as follows:
- The AI Office will take a “collaborative and proportionate approach to enforcement”. It expects close informal cooperation with all providers during the training phase of the GPAIM to streamline compliance and ensure market placement without delays.
- The AI Office expects proactive reporting, without requests, by providers of GPAIMs with systemic risk.
- More detailed acts will follow to specify the implementation of the AI Office’s enforcement powers under the AI Act, such as the powers to conduct model evaluations and impose fines.
Next steps
We recommend that all GPAIM providers and potential downstream providers/modifiers carefully consider these draft guidelines. While the draft guidelines provide helpful clarifications, they may in some instances go beyond what is necessary or required under the AI Act.
The content of this article is provided for information purposes only and does not constitute legal or other advice.