AI Act’s Code of Practice – second draft: Transparency & Copyright
January 28, 2025
The European AI Office has released the second draft of the General-Purpose AI Code of Practice, an essential milestone in implementing the EU’s Artificial Intelligence Act. This draft builds on feedback from over 1,000 stakeholders, including civil society groups like the Michael Culture Association (MCA) and Culture Action Europe (CAE). The document aims to refine the obligations of AI providers concerning transparency, copyright, and ethical AI practices. Here is a short explanation of the new draft and of MCA’s replies to the consultation process.
The Road So Far: AI Office’s Role and the First Draft
The EU AI Office was established to oversee the implementation of the AI Act, ensuring that AI development aligns with European values and regulations – CAE discussed its role in this article. One of its major tasks has been drafting the General-Purpose AI Codes of Practice, which will guide AI providers in complying with the Act. In April 2024, MCA and CAE launched an Action Group on Digital & AI to advocate specifically on digital and AI policies at EU level, and the two organizations participate together in the AI Office consultations.
The first draft of the Code of Practice was crafted by expert Chairs and Vice-Chairs within four thematic working groups: Transparency and Copyright (WG1), Risk Assessment for Systemic Risks (WG2), Technical Risk Mitigation (WG3), and Governance and Oversight (WG4). It should be kept in mind that the Codes of Practice will not be legally binding; rather, they represent a roadmap for good conduct by the signatories (AI providers and developers).
The transparency and copyright section—of particular importance to the MCA community—was initially broad, setting general obligations for AI model providers without providing clear enforcement mechanisms. Stakeholders raised concerns over vague guidelines on transparency and weak commitments to copyright compliance – read our considerations paper on the issue here.
The Second Draft: Key Developments in Copyright and Transparency

AI Office official banner © EU Commission
The second draft of the Code of Practice includes several refinements aimed at addressing prior criticisms, and is characterized by much greater detail on obligations and copyright enforcement. The five main changes relative to the first draft are:
- Greater Detail on Copyright Compliance: The second draft provides clearer obligations for AI providers to document and verify that their training data respects copyright laws – the first draft broadly acknowledged copyright obligations but lacked concrete mechanisms for ensuring compliance.
- Stronger Transparency Requirements: the new draft expands the list of required disclosures, including provenance of training data, environmental impact reporting, and human labor conditions in data annotations.
- Introduction of Upstream and Downstream Copyright Safeguards: the second draft introduces upstream copyright verification (ensuring datasets are legally obtained before training) and downstream safeguards (preventing AI from generating copyrighted content). The first draft mainly focused on general copyright policies without defining these two layers of protection.
- Enhanced KPIs for Monitoring Compliance: the second draft includes specific Key Performance Indicators (KPIs) to track AI providers’ adherence to copyright rules, while exempting small and medium-sized enterprises (SMEs) to support their growth – a carve-out that civil society has contested, since it could create a regulatory loophole.
- Explicit Measures on Text and Data Mining (TDM) Rights: the second draft clarifies that AI models must respect opt-out rights for TDM, using robots.txt as the enforcement mechanism – in the previous draft, the specifics of the mechanism were completely absent. Civil society organizations, including MCA and CAE, find that limiting the opt-out to a single enforcement mechanism remains weak and exclusionary.
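To illustrate how the robots.txt mechanism works in practice, here is a minimal sketch of how a data-collection pipeline might honor a TDM opt-out before fetching a page, using Python’s standard library. The user-agent names and URLs are hypothetical examples, not part of the draft Code:

```python
from urllib import robotparser

# Hypothetical robots.txt published by a rightsholder: it opts out of
# crawling by "AITrainingBot" while leaving the site open to other agents.
ROBOTS_TXT = """\
User-agent: AITrainingBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A compliant provider would check before adding a page to a training corpus.
url = "https://example.org/article.html"
print(parser.can_fetch("AITrainingBot", url))  # False: opt-out respected
print(parser.can_fetch("OtherCrawler", url))   # True: no opt-out for this agent
```

The sketch also shows the weakness civil society points to: the opt-out only binds crawlers that identify themselves with a known user-agent string and choose to consult robots.txt at all, and it offers no recourse for works already mirrored elsewhere.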
Stakeholder Reactions: Civil Society Calls for Stronger Commitments
The revised draft received mixed reactions from stakeholders. While AI providers generally welcomed the increased clarity on compliance measures, civil society organizations—including MCA and CAE—highlighted several ongoing issues.
MCA’s Main Concerns:
- Transparency for everyone: the current draft asks AI providers and developers to provide the “transparency documentation” only to the AI Office and national competent authorities upon request. It merely encourages them “to consider whether the documented information can be disclosed, in whole or in part, to the public to promote public transparency”. The draft should go further: instead of only encouraging disclosure, it should place public authorities and society at large on the same level of transparency. The elements described in the Code are of public interest, as they can (and should) influence consumer choices.
- Clarification on Copyright Violations: The draft lacks a precise definition of “manifestly unfounded” copyright complaints. Without transparency in training data, AI providers could easily dismiss legitimate concerns. A clear framework for handling overfitting and copyright-infringing outputs is also necessary.
- Fair Compensation Mechanisms: The draft continues to neglect a concrete framework for compensating creators whose intellectual property is used in AI training, whether legally acquired or not. MCA (together with CAE) advocates for a structured licensing system for rightholders and an automated compensation mechanism for infringements, bypassing lengthy legal disputes. Immediate monetary restitution should be mandated once unauthorized use is verified.
- Environmental Impact Reporting: The Code should address the environmental impact of both AI model training and result generation, including energy consumption (already present in the draft) and water consumption (surprisingly not present). It is essential to consider location-based factors affecting cooling needs and to link these estimates to the overall energy footprint of AI-generated outputs.
- Human Labor Transparency: the role of human labor in AI training and data validation remains unclear and should be specified in more detail by the signatories. MCA demands detailed reporting on labor conditions: AI providers should disclose working conditions, including the square meters allocated per worker in annotation facilities, to prevent exploitative labor practices.

Second Draft vs our propositions
What’s Next for the AI Code of Practice?
The second draft remains a work in progress, with final negotiations and revisions set to take place in the coming months. The AI Office will incorporate stakeholder feedback into a third and potentially final draft, expected by May 2025. As MCA and CAE continue advocating for stronger protections and transparency, the coming months will determine whether the AI Act’s implementation truly aligns with European values—or whether it tilts in favor of industry interests over cultural rights.