A Quick Insight: The EU’s General-Purpose AI Code of Practice – Governance Tool or Innovation Barrier?

On July 10, 2025, the European Commission unveiled the first-of-its-kind General-Purpose AI Code of Practice. The Code is designed as a voluntary but strategic instrument that aims to help developers and providers of AI systems – especially of large language models and foundation models – align their operations with the transparency, copyright, and safety obligations of the newly enacted EU Artificial Intelligence Act.

While not legally binding, the Code serves as a compliance facilitator, offering industry-informed guidance that enables general-purpose AI providers to build trust, navigate compliance requirements, and future-proof their technologies.

While intended to foster transparency, copyright compliance, and safety, the Code has also become a lightning rod in the clash between European regulators and global tech giants. At its best, the Code reflects the EU’s ambition to lead global AI governance. At its most controversial, it exemplifies what critics call overregulation that may weaken innovation. 

What the Code Aims to Do 

The overarching objective of the Code is to improve the functioning of the EU’s internal market and to promote the uptake of human-centric and trustworthy artificial intelligence, while ensuring a high level of protection of health, safety, and fundamental rights against the harmful effects of AI and supporting innovation.

At its core, the Code is intended to serve as a bridge between legislation and practice. Under the AI Act, which came into force in 2024 and begins phased enforcement in August 2025, GPAI developers must meet specific obligations concerning transparency of model development, responsible data use, and mitigation of systemic risks. The Code helps providers comply with these requirements.

As stated in the Code itself, the main objectives are the following: 

a) To serve as a guiding document for demonstrating compliance with the obligations provided for in Articles 53 and 55 of the AI Act. It is therefore important to note that the Code focuses on the practical application of those Articles rather than serving as a comprehensive guideline for the AI Act in its entirety.

b) To ensure that providers of general-purpose AI models comply with their obligations under the AI Act, and to enable the AI Office to assess the compliance of providers who choose to rely on the Code to demonstrate compliance with those obligations.

The Code also emphasizes the particular role and responsibility of providers of general-purpose AI models along the AI value chain: the models they provide may form the basis for a range of downstream AI systems, often offered by downstream providers that need a good understanding of the models and their capabilities, both to integrate those models into their products and to fulfil their own obligations under the AI Act. This essentially means that the Code aims to make sure that the AI development process and the model’s lifecycle are set on the right regulatory path from the very beginning.

The Code was developed through a wide-ranging consultation process involving industry leaders, academia, and various other organizations and national authorities. Over 400 contributors helped shape the Code, underlining the importance of cross-sector collaboration in governing AI responsibly.

The Code includes three core chapters: 

1. Transparency – One of the most emphasized areas of the AI Act is transparency. The transparency chapter of the Code describes the measures that signatories must commit to implementing to comply with their transparency obligations. The chapter’s main value is the included Model Documentation Form, which allows signatories to compile the information the AI Act requires about a GPAI model in a single document. The Model Documentation Form also indicates, for each item, whether the information is intended for downstream providers, the AI Office, or national competent authorities. The Form includes detailed disclosures on the data sources and provenance used for model training, the methods applied to training data, intended and unintended use cases, the licensing status of training materials, identifiers of model versions, etc.

At first glance, a documentation form providing insight into a specific AI model and its development raises obvious concerns about the confidentiality of developers’ trade secrets. This issue is addressed in the Code, which states that "In accordance with Article 78 AI Act, the recipients of any of the information contained in the Model Documentation Form are obliged to respect the confidentiality of the information obtained, in particular intellectual property rights and confidential business information or trade secrets, and to put in place adequate and effective cybersecurity measures to protect the security and confidentiality of the information obtained." Additionally, these forms are not public by default, so that proprietary information and trade secrets are respected while regulatory accountability is maintained. However, it should be noted that the documents must be shared with downstream deployers and relevant authorities upon request. This indicates that model developers should take into account that such forms will not remain entirely private over time, and with billions of dollars at stake, these broad assurances of protection might not be sufficient.

Signatories are also required to update the Model Documentation to reflect relevant changes in the information, including in relation to updated versions of the same model, while keeping previous versions of the Model Documentation for a period of 10 years after the model has been placed on the market. This means that the potential trade-secret-related confidentiality issues are not limited to the launch phase of the model but continue throughout its lifecycle.
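For illustration only – and not as any official template – the kind of internal record a provider might keep in order to populate the Model Documentation Form could look like the sketch below. The field names and example values are assumptions derived from the disclosures listed above, written in Python purely for concreteness.

```python
# Illustrative sketch only: an internal record a provider might keep to populate
# the Model Documentation Form. Field names are assumptions drawn from the
# disclosures discussed above; they are not the official form's field names.
from dataclasses import dataclass
from typing import List

@dataclass
class ModelDocumentationRecord:
    model_name: str
    model_version: str                  # identifier of the model version
    training_data_sources: List[str]    # provenance of data used for training
    data_processing_methods: List[str]  # methods applied to training data
    intended_use_cases: List[str]
    known_unintended_uses: List[str]
    training_data_licensing: str        # licensing status of training materials
    intended_recipients: List[str]      # e.g. downstream providers, AI Office,
                                        # national competent authorities

# Hypothetical example entry, kept internally and updated with each model version.
record = ModelDocumentationRecord(
    model_name="example-gpai-model",
    model_version="1.2.0",
    training_data_sources=["licensed text corpus", "publicly available web data"],
    data_processing_methods=["deduplication", "filtering of disallowed content"],
    intended_use_cases=["text summarisation", "question answering"],
    known_unintended_uses=["generation of legal advice"],
    training_data_licensing="mixed: licensed and public-domain sources",
    intended_recipients=["downstream providers", "AI Office"],
)
```

Keeping such information in a structured, versioned form would also make it easier to retain the previous versions of the documentation for the 10-year period mentioned above.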

2. Copyright – The EU’s strong tradition of intellectual property protection is reflected in the Code’s second chapter, which also aims to contribute to the proper application of Article 53 of the AI Act. This section offers guidance for model developers to ensure that both the data used in training and the output material of the model comply with EU copyright law.

However, it is explicitly noted that the chapter does not affect the application and enforcement of EU law on copyright and related rights, which is for the courts of the member states and ultimately the Court of Justice of the EU to interpret. Therefore, it can be concluded that the subject of the Chapter is again limited to the practical application of the AI Act while refraining from taking a definitive stance on the important copyright issues regarding the use and development of AI. 

It is also noted that the commitments in this chapter should be proportionate to the size of the providers, taking due account of the interests of SMEs, including startups. 

When it comes to model training, the most important measures the Chapter encourages developers to implement are as follows: 

a) To draw up, keep up-to-date, and implement a policy to comply with EU law on copyright and related rights for all general-purpose AI models they place on the EU market. Signatories commit to describing that policy in a single document, incorporating the measures set out in the chapter, and to assigning responsibilities within their organisation for implementing and overseeing this policy.

 b) To make publicly available and keep up-to-date a summary of their copyright policy. 

c) To reproduce and extract only lawfully accessible copyright-protected content when crawling the World Wide Web. This means not circumventing effective technological measures that are designed to prevent or restrict unauthorised acts in respect of works and other protected subject matter, in particular by respecting any technological denial or restriction of access imposed by subscription models or paywalls.

Additionally, the chapter encourages signatories to exclude from web-crawling those websites that make content available to the public and which are, at the time of web-crawling, recognised by courts or public authorities in the EU as persistently and repeatedly infringing copyright and related rights on a commercial scale. To facilitate compliance with this measure, a publicly available list of hyperlinks to such websites is issued by the relevant bodies in the EU and the EEA.

d) To employ web-crawlers that read and follow instructions expressed in accordance with the Robot Exclusion Protocol (robots.txt). This enables websites to signal whether and how they wish to be crawled, including opting out of having their content collected by crawling robots.
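As a purely illustrative sketch of what measure d) implies in practice, the snippet below uses Python’s standard urllib.robotparser module to check a site’s robots.txt before fetching a page. The user-agent string and URLs are hypothetical placeholders, not identifiers defined by the Code.

```python
# Illustrative sketch only: a crawler honouring the Robot Exclusion Protocol
# (robots.txt) before fetching content. The user-agent and URLs are hypothetical.
from urllib import robotparser
from urllib.parse import urljoin

USER_AGENT = "ExampleGPAITrainingBot"  # hypothetical crawler identifier

def can_fetch(site_root: str, path: str) -> bool:
    """Return True only if the site's robots.txt allows this user-agent to fetch the path."""
    parser = robotparser.RobotFileParser()
    parser.set_url(urljoin(site_root, "/robots.txt"))
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(USER_AGENT, urljoin(site_root, path))

if __name__ == "__main__":
    if can_fetch("https://example.com", "/articles/some-page.html"):
        print("robots.txt permits crawling this page")
    else:
        print("robots.txt disallows crawling; the page is skipped")
```

In practice a crawler would also cache robots.txt per site and re-check it periodically, but the core obligation – reading and following the instructions a website publishes – is as simple as the check above.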

When it comes to outputs provided by the model, the Chapter encourages developers to:  

a) Mitigate the risk of copyright-infringing outputs by implementing appropriate and proportionate technical safeguards to prevent their models from generating outputs that reproduce training content in an infringing manner. 

b) To prohibit copyright-infringing uses of a model in their acceptable use policy, terms and conditions, or other equivalent documents. In the case of general-purpose AI models released under free and open-source licences, users should be alerted to the prohibition of copyright-infringing uses of the model in the documentation accompanying the model. This measure applies irrespective of whether a signatory vertically integrates the model into its own AI system(s) or whether the model is provided to another entity based on contractual relations.

c) Signatories also commit to designating a point of contact for electronic communication with affected rightsholders and to providing easily accessible information about it. This means that any rightsholder should be able to submit copyright complaints by electronic means. However, it is again emphasized that this commitment does not affect the measures, remedies, and sanctions available to enforce copyright and related rights under EU and national law.

This chapter has been a point of contention among some rightsholders, who argue that the Code does not go far enough in mandating proof of data provenance. It is no secret that many models have been trained on large amounts of pirated content, and the Code does not introduce any new measures to counter this.

3. Safety and Security – The third chapter is the lengthiest and applies specifically to GPAI models that pose systemic risks – advanced foundation models whose broad capabilities could have a wide-reaching impact on society – and requires adversarial testing, cybersecurity controls, and misuse safeguards. The chapter sets out a number of commitments.

Firstly, the signatories commit to adopting a state-of-the-art Safety and Security Framework. The purpose of the Framework is to outline the systemic risk management processes and measures that signatories implement to ensure that the systemic risks stemming from their models are acceptable. The Framework adoption process involves three steps: creating the Framework, implementing the Framework, and updating the Framework. Each step is described in detail in the chapter. The signatories also commit to notifying the AI Office of their Framework.

Secondly, the signatories commit to identifying the systemic risks stemming from the model; the purpose of systemic risk identification includes facilitating systemic risk analysis. Thirdly, the signatories commit to analysing each identified systemic risk; the purpose of systemic risk analysis includes facilitating the systemic risk acceptance determination. This leads to the fourth commitment, systemic risk acceptance determination, whereby the signatories commit to specifying systemic risk acceptance criteria and determining whether the systemic risks stemming from the model are acceptable. Signatories commit to deciding whether or not to proceed with the development, the making available on the market, and/or the use of the model based on that determination.

The fifth commitment regards safety mitigations. Signatories commit to implementing appropriate safety mitigations along the entire model lifecycle to ensure the systemic risks stemming from the model are acceptable (pursuant to Commitment 4). The sixth commitment involves security, whereby Signatories commit to implementing an adequate level of cybersecurity protection for their models and their physical infrastructure along the entire model lifecycle. On an interesting note – a model is exempt from this commitment if the model’s capabilities are inferior to the capabilities of at least one model for which the parameters are publicly available for download. 

According to the seventh commitment, similarly to the transparency chapter of the Code, the signatories commit to reporting to the AI Office information about their model and their systemic risk assessment and mitigation processes and measures by creating a Safety and Security Model Report before placing a model on the market. The eighth commitment asks the Signatories to (i) define clear responsibilities for managing the systemic risks stemming from their models across all levels of the organisation, (ii) allocate appropriate resources to actors who have been assigned responsibilities for managing systemic risk, and (iii) promote a healthy risk culture. Serious incidents must be reported to the AI Office and national competent authorities according to the ninth commitment, while the tenth commitment focuses on documenting the implementation of the Safety and Security chapter. For example, the signatories are expected to publish summarised versions of their Framework and Model Reports as necessary. 

Though not currently applicable to many GPAI models, this section anticipates a future in which powerful models have an even greater influence on digital ecosystems and public trust.

Why Signing the Code Matters 

The Code remains voluntary, but the AI Act is binding. Providers who do not sign the Code must independently demonstrate compliance with the Act’s transparency, copyright, and safety requirements, opening themselves to greater scrutiny from the EU’s new AI Office. 

The enforcement timeline is tight: by August 2, 2025, new GPAI models must comply with transparency and copyright obligations. By August 2, 2026, the Commission’s enforcement powers kick in, including fines for GPAI providers of up to 3% of global annual turnover or EUR 15 million, whichever is higher. By August 2, 2027, the obligations extend to GPAI models placed on the market before August 2025.

Signing the Code may serve as a presumption of compliance with the relevant sections of the AI Act, giving signatories reduced administrative overhead and enhanced legal certainty. In practice, this means that non-signatories like Meta may face a higher regulatory burden.

Those who have signed by August 1, 2025 will also be publicly listed as compliant participants in the EU’s trustworthy AI ecosystem. However, it must also be noted that, according to the Code itself, adherence to the Code does not constitute conclusive evidence of compliance with the obligations under the AI Act.

Tech Industry Pushback – Meta has Publicly Rejected the Code 

Meta’s global affairs head Joel Kaplan stated that “Europe is heading in the wrong direction with AI.” According to Meta, the Code introduces legal uncertainty and regulatory conditions that exceed the original scope of the AI Act. This stance highlights a growing divide between US-based tech companies and European regulators. Meta’s rejection of the Code isn’t just a legal disagreement; it is essentially a symbolic line in the sand. Undoubtedly, Meta, as one of the most important players in the AI field, has leverage, and its public refusal is not something to be taken lightly. The next few months will show whether others follow suit or whether the EU succeeds in setting the boundaries of AI development on (mostly) its own terms.

Meta isn’t alone. Several European enterprises, including Mistral AI, have also expressed reservations. In an open letter to the European Commission, they called for a two-year delay in implementing the AI Act, warning that overly aggressive regulation could weaken Europe’s technological competitiveness.

On the other hand, Google, OpenAI, and Microsoft have announced their intention to sign the Code, although they have not yet done so. Anthropic has released a clearer statement: “After review, Anthropic intends to sign the European Union’s General-Purpose AI Code of Practice. We believe the Code advances the principles of transparency, safety and accountability—values that have long been championed by Anthropic for frontier AI development. If thoughtfully implemented, the EU AI Act and Code will enable Europe to harness the most significant technology of our time to power innovation and competitiveness.”

The Innovation Dilemma: Safety vs Speed 

The Code reflects a noble and necessary goal: to ensure that the future of AI aligns with European values – openness, transparency, safety, and respect for creators. But the friction it has generated reveals deeper tensions between governance and growth, protection and innovation. Supporters of the Code argue that trustworthy AI requires clear rules and accountability. European Commissioner Thierry Breton has even described the Code as “a roadmap to responsible innovation” as it offers guidance without micromanaging developers’ internal architectures. 

Critics, however, warn that Europe is overcorrecting. The requirements to document and disclose extensive model details, while crucial for public safety, could deter open-source collaboration, slow down iteration cycles, raise confidentiality issues, and discourage smaller players from entering the market.

A particular issue is the expectation that AI developers track and justify the licensing of training data – a process that can be immensely costly and legally ambiguous, especially for large datasets scraped from the web. Another notable point of criticism is that the liability burden is again placed on model developers, while the liability of other parties in the model’s lifecycle, such as downstream providers or even end-users, remains unclear.

What Lies Ahead 

The Code will undergo regular reviews at least every two years, allowing it to evolve alongside the AI landscape. But whether it becomes a blueprint for the tech world or a cautionary tale of regulatory overreach remains to be seen. 
