Who writes the rules for AI, and who should?

Daniel de Vos
3 min read · Sep 15, 2023

As the development of artificial intelligence (AI) accelerates, a critical question emerges: who’s writing its governing rules? New initiatives like the Frontier Model Forum reveal that Big Tech has a dominant voice in shaping the rulebook. In this article, I explore the current regulatory landscape and the contrasting approaches in the European Union and the United States.

The debate over AI regulation has set off a global race to draft policies, with the European Union (EU) leading the way with its AI Act. Meanwhile, the United States is shaping its own AI policy frameworks. Yet the two approaches differ significantly, putting companies in a regulatory Catch-22: compliance with one might mean violating the other.

Exposure to trade secrets

One of the EU AI Act’s most debated rules requires AI companies to disclose how their systems are designed and what data has been used in training. OpenAI’s Sam Altman has said the company ‘will try to comply’ with the EU’s regulations, but warned that failure to do so would force it to pull out of the European market. Such disclosures could not only reveal sensitive business intelligence but also expose the company to numerous legal challenges.

Copyright tension

Tension around training data has already sparked legal battles against platforms like Midjourney and Stable Diffusion. Major news organisations such as the Associated Press and Getty Images have also raised concerns publicly through an open letter, calling for ‘transparency into training datasets and consent of rights holders before using data for training’.

Google, in a counter-proposal, has pushed for a significant change in copyright law. In a submission to the Australian government’s AI review, the company asks for an ‘opt-out’ system for publishers, reversing the current ‘opt-in’ nature of copyright. Such a change would place the onus on content creators, not AI companies, to prevent copyrighted material from being used.

Big Tech dominance

Dr. Manwaring, a senior lecturer at UNSW Law and Justice, warns that the copyright system could break down if these disputes are not resolved, hurting smaller content creators, many of whom are already at a considerable disadvantage.

Just last month, OpenAI, Meta, Microsoft, Google, Anthropic, and Amazon signed an agreement promising to invest in responsible AI. Soon after, the companies formed the Frontier Model Forum, a coalition intended to ‘promote the safe and responsible use of frontier AI systems’. Triveni Gandhi, responsible AI lead at Dataiku, notes that smaller firms and entities are practically ‘not at the table’ during these crucial discussions.

This points to the risk of ‘regulatory capture’: Big Tech writing rules that protect itself, leaving smaller competitors sidelined, and perhaps users as well.

Global governance

So, who should decide the rules for AI? Tech giants, global governments, or should smaller players also have a say? Perhaps what’s needed is a new form of international ‘digital diplomacy’: a multi-stakeholder collaboration that brings together tech companies, governments, and civil society, paired with pilot programs to test and adjust regulations.

In the end, governance for AI is too big a challenge for any single entity or country to tackle alone. It will require a global village of regulators, innovators, ethicists, and everyday users to steer the future. After all, shouldn’t its governance be as groundbreaking as the technology itself?
