AI In The Creative Industries: Misuse, Controversy, And The Push For Use-Focused Regulation
In Brief
AI misuse is sparking high-profile controversies, prompting regulators worldwide to pursue use-focused transparency, consent, and accountability measures while debates continue over whether current frameworks can keep pace with quickly evolving technology.
AI is quickly reshaping creative practice, but its misuse is proliferating just as fast. Undisclosed AI-assisted writing, voice and likeness cloning, and AI-generated imagery have repeatedly come to light only after works were published or even awarded, sparking high-profile controversies and eroding trust in cultural institutions.
Regulators and platforms are scrambling to respond with a mix of disclosure requirements, content-labeling proposals, provenance and watermarking standards, and targeted enforcement. Yet the current framework remains patchy, slow, and often unclear. How can lawmakers protect creators and consumers without stifling innovation? Are existing rules even capable of keeping pace with the fast-evolving AI landscape? These questions lie at the heart of one of the most urgent debates in technology and creativity today.
Among the most notable AI controversies of the past few years is Rie Qudan’s Sympathy Tower Tokyo, winner of the 2024 Akutagawa Prize. The author disclosed that roughly 5% of the novel, primarily the responses of an in-story chatbot, was generated using ChatGPT. The revelation ignited debate about authorship and transparency in literature. Critics were divided: some praised the work as an innovative use of AI to explore language and technology, while others saw it as a challenge to traditional norms of original authorship and literary integrity. Coverage in major outlets emphasized the book’s themes of justice, empathy, and the social effects of AI, along with the procedural questions raised by incorporating generative models into prize-winning work. The case prompted calls for clearer disclosure standards and reconsideration of award criteria, and it has become a touchstone in broader conversations about creative agency, copyright, and the ethical limits of AI assistance in the arts.
Another high-profile incident involved Lena McDonald’s Darkhollow Academy: Year Two, in which readers discovered an AI prompt and editing note embedded in chapter three. The accidental disclosure revealed that the author had used an AI tool to mimic another writer’s style, sparking immediate backlash and widespread coverage. The incident exposed the limits of current publishing workflows and the need for clear norms around AI-assisted writing. It intensified calls for transparency, prompted discussions about editorial oversight and quality control, and fueled broader debates over attribution, stylistic mimicry, and intellectual-property risks in commercial fiction.
In visual arts, German photographer Boris Eldagsen sparked controversy in 2023 when an image he submitted to the Sony World Photography Awards was revealed to be entirely AI-generated. The work initially won the Creative Open category, prompting debates about the boundaries between AI-generated content and traditional photography. The photographer ultimately declined the prize, while critics and industry figures questioned how competitions should treat AI-assisted or AI-generated entries.
The music industry has faced similar challenges. The British EDM track “I Run” by Haven became the subject of a high-profile AI controversy in 2025 after it was revealed that the song’s lead vocals had been generated using synthetic-voice technology resembling a real artist. Major streaming platforms removed the track for violating impersonation and copyright rules, provoking widespread condemnation and renewed calls for explicit consent and attribution when AI mimics living performers. The episode also accelerated policy and legal debates over how streaming services, rights holders, and regulators should manage AI-assisted music to protect artists, enforce copyright, and preserve trust in creative attribution.
Regulators Grapple With AI Harms: EU, US, UK, And Italy Roll Out Risk-Based Frameworks
The problem of harms from AI use—including cases where creatives pass off AI-generated work as human-made—has become a pressing issue, and emerging regulatory frameworks are beginning to address it.
The European Union’s AI Act establishes a risk-based legal framework that entered into force in 2024, with phased obligations running through 2026–2027. The law requires transparency for generative systems, including labelling of AI-generated content in certain contexts, mandates risk assessments and governance for high-risk applications, and empowers both the EU AI Office and national regulators to enforce compliance. These provisions directly target challenges such as undisclosed AI-generated media and opaque model training.
National legislators are also moving quickly in some areas. Italy, for example, advanced a comprehensive national AI law in 2025, imposing stricter penalties for harmful uses such as deepfake crimes, and codifying transparency and human oversight requirements—demonstrating how local lawmaking can supplement EU-level rules. The EU Commission is simultaneously developing non-binding instruments and industry codes of practice, particularly for General Purpose AI, though rollout has faced delays and industry pushback, reflecting the difficulty of producing timely, practical rules for rapidly evolving technologies.
The UK has adopted a “pro-innovation” regulatory approach, combining government white papers, sector-specific guidance from regulators such as Ofcom and the ICO, and principles-based oversight emphasizing safety, transparency, fairness, and accountability. Rather than imposing a single EU-style code, UK authorities are focusing on guidance and gradually building oversight capacity.
In the United States, policymakers have pursued a sectoral, agency-led strategy, initially anchored by Executive Order 14110 of October 2023, which coordinated federal action on safe, secure, and trustworthy AI. This approach emphasizes risk management, safety testing, and targeted rulemaking, with interagency documents such as America’s AI Action Plan providing guidance, standards development, and procurement rules rather than a single comprehensive statute.
Martin Casado Advocates Use-Focused AI Regulation To Protect Creatives Without Stifling Innovation
For creatives and platforms, the practical implications are clear. Regulators are pushing for stronger disclosure requirements, including clear labelling of AI-generated content, consent rules for voice and likeness cloning, provenance and watermarking standards for generated media, and tighter copyright and derivative-use regulations. These measures aim to prevent impersonation, protect performers and authors, and improve accountability for platforms hosting potentially misleading content, essentially implementing the “use-focused” regulatory approach recommended by Andreessen Horowitz general partner Martin Casado on the a16z podcast.
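To make the idea of machine-readable disclosure concrete, the sketch below shows one way a platform might attach a signed “AI-generated” provenance label to a piece of media and later verify that it has not been tampered with. It is a minimal illustration using only Python’s standard library, not an implementation of any specific standard such as C2PA; the field names and signing key are hypothetical.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical platform signing key; a real system would use proper key management
# and certificate-based signatures rather than a shared secret.
SIGNING_KEY = b"platform-secret-key"

@dataclass
class ProvenanceLabel:
    """Machine-readable disclosure attached to a media file (illustrative fields)."""
    asset_sha256: str       # hash of the media file the label refers to
    ai_generated: bool      # the core disclosure regulators are asking for
    generator: str          # name of the model or tool used
    consent_obtained: bool  # relevant for voice and likeness cloning
    issued_at: str          # ISO 8601 timestamp

def sign_label(label: ProvenanceLabel) -> dict:
    """Serialize the label and attach an HMAC so tampering is detectable."""
    payload = json.dumps(asdict(label), sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"label": asdict(label), "signature": signature}

def verify_label(signed: dict) -> bool:
    """Recompute the HMAC over the label and compare in constant time."""
    payload = json.dumps(signed["label"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

if __name__ == "__main__":
    media_bytes = b"...generated image bytes..."
    label = ProvenanceLabel(
        asset_sha256=hashlib.sha256(media_bytes).hexdigest(),
        ai_generated=True,
        generator="example-image-model",
        consent_obtained=False,
        issued_at=datetime.now(timezone.utc).isoformat(),
    )
    signed = sign_label(label)
    print("label valid:", verify_label(signed))        # True
    signed["label"]["ai_generated"] = False            # simulate tampering
    print("after tampering:", verify_label(signed))    # False
```

Real provenance standards embed such manifests directly in file metadata and sign them with certificates tied to the issuing tool or platform, but the underlying principle is the same: a disclosure that software can check automatically, rather than a notice a reader might never see.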
He argues that policy should prioritize how AI is deployed and the concrete harms it can cause, rather than attempting to police AI model development itself, which is fast-moving, difficult to define, and easy to evade. The venture capitalist warns that overbroad, development-focused rules could chill open research and weaken innovation.
Martin Casado emphasizes that illegal or harmful activities carried out using AI should remain prosecutable under existing law, and that regulation should first ensure that criminal, consumer-protection, civil-rights, and antitrust statutes are enforced effectively. Where gaps remain, he advocates for new legislation grounded in empirical evidence and narrowly targeted at specific risks, rather than broad, speculative mandates that could stifle technological progress.
He also stresses the importance of maintaining openness in AI development, including support for open-source models, to preserve long-term innovation and competitiveness, while ensuring that regulatory measures remain precise, practical, and focused on real-world harms.
About The Author
Alisa, a dedicated journalist at the MPost, specializes in cryptocurrency, zero-knowledge proofs, investments, and the expansive realm of Web3. With a keen eye for emerging trends and technologies, she delivers comprehensive coverage to inform and engage readers in the ever-evolving landscape of digital finance.