The dangers of jumping the gun on AI regulation

Artificial intelligence is the topic on everyone’s lips. Safety risks are mounting as we approach truly transformative “strong” AI systems: models that could outperform humans in nearly every domain. These models could do tremendous good, like helping develop new cancer drugs or cracking nuclear fusion, but they could also enable catastrophic harm if misdirected. As we advance this technology, understanding how to control it and align it with our goals is vital to the world’s safety.

Sensible regulation is important, but an excessive focus on competition would put safety at risk. Fundamentally, competition and safety pull in opposite directions: prioritising safety often means compromising on competition. For example, car manufacturers are allowed to share research on emissions and safety even when doing so undermines the competitive integrity of the market.

That’s where the CMA (the Competition and Markets Authority, the UK’s competition regulator and, in practice, its tech regulator) comes into play. After recently blocking Microsoft’s acquisition of Activision Blizzard, the CMA has turned its attention to the AI industry. It has announced an initial review of the market for AI foundation models (the large, general-purpose models that underpin technologies like ChatGPT), focusing on potential competition and consumer protection risks. While this is well-intentioned, it is also short-sighted. By addressing a complex, cross-cutting, and transformative technology like AI through the narrow lens of competition, we risk compromising a far more important aspect of AI regulation: safety.

The goal is usually to strike a healthy balance between competition and safety. In the AI industry, however, this balance already tilts towards competition: the rapid pace of the AI race forces companies to cut corners or risk falling behind rivals. Figuring out how to control advanced AI remains an open problem in the field, and worse yet, it is getting harder the more capable our models become.

That’s why it’s critical that we get things right the first time with this technology. Some leading AI labs have already committed to “assist clauses”. OpenAI’s, for instance, dictates that if a value-aligned rival lab came close to achieving strong AI first, OpenAI would halt its own work and instead assist that rival, sacrificing competition to ensure safety. A regulatory approach overly concerned with competition could end up blocking companies like OpenAI from triggering their assist clauses, on the grounds that doing so would encourage anti-competitive behaviour.

Recent history suggests Britain’s competition regulator has also become too pre-emptive, attempting to head off problems before they emerge. The decision to block the Microsoft-Activision deal rested on concerns about Microsoft dominating the cloud gaming market, yet in 2022 cloud gaming accounted for only 0.7% of the worldwide gaming market. Forecasts predict explosive growth for cloud gaming, but those remain just that: predictions.

The CMA is also set to receive a broad new range of powers from the Digital Markets, Competition and Consumers Bill, which proposes creating a new Digital Markets Unit (DMU) within the CMA. The DMU would hold expansive powers, allowing it to write bespoke regulations for tech companies deemed to hold “strategic market status”. The Government should reconsider equipping a regulator with such power.

Clearly, we need an AI regulator that respects the cross-cutting nature of AI and can manage the conflicting regulatory challenges it presents. In its recently published AI regulation white paper, the Government committed to creating a Central Risk Function (CRF) to do exactly that, but current plans give it neither the legal authority nor the resources to carry out its mission. Furthermore, the CRF is only slated for implementation by March 2024 at the earliest. While we wait for the Government to deliver it, we should temper the ambitions and reach of the CMA.

To be clear, I’m not calling for the AI review to be abandoned. We should welcome fact-finding missions and efforts to better understand this dynamic, rapidly evolving field, especially in the absence of similar reviews from a not-yet-established CRF. But an overzealous, overpowered regulator operating without the guidance the CRF is supposed to provide leaves me apprehensive about the future of AI safety.

The Government should expedite the delivery of the CRF and the other central support functions outlined in the AI regulation white paper. In the meantime, we urgently need to clarify the CMA’s statutory duties and limit its regulatory powers before it oversteps its boundaries. We need to clip the wings of a rogue regulator before it flies too close to the sun and we all get burned.


Alex Petropoulos is a policy commentator with Young Voices. He has an MEng in Discrete Mathematics from the University of Warwick and is pursuing a career in AI Governance and Policy. You can follow him on Twitter @AlexTPet.