University of Cambridge report says policies should focus on hardware to ensure AI safety.
The hunger for computing power is no secret in the artificial intelligence (AI) industry, with OpenAI seeking billions for AI chip infrastructure and NVIDIA set to ship 1.5 million AI server units per year by 2027. Meanwhile, calls for further AI regulation, from both inside and outside the industry, are growing louder all the time.
Wouldn’t it be nice if we could solve both problems at once?
According to a recent report from the University of Cambridge, we might be able to do just that. With more than a dozen authors, “Computing Power and the Governance of Artificial Intelligence” includes an assessment of the current state of AI governance and argues that compute (i.e., computing power) is the best target for future policymaking.
One such policy would involve creating a global registry for tracking the flow of chips for AI supercomputers, perhaps along the lines of the Nuclear Suppliers Group (NSG) or the Missile Technology Control Regime (MTCR). More extreme measures considered in the report include built-in compute caps that would limit the number of chips each AI chip can connect with, as well as a digital veto held by multiple stakeholders to prevent risky AI training runs. The latter borrows from nuclear weapons controls, where multiple parties must sign off before a launch can proceed.
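To make the veto idea concrete, here is a minimal sketch, assuming a simple reading in which every designated keyholder must sign off before a large training run and any one of them can block it. The keyholder names and the unanimity rule are invented for illustration; the report proposes the concept, not a protocol:

```python
# Illustrative sketch of a multi-stakeholder "digital veto" on risky
# training runs. Keyholder names and the unanimity rule are assumptions.

KEYHOLDERS = {"regulator", "independent_auditor", "international_body"}

def may_start_training(approvals: set[str]) -> bool:
    """Allow the run only if every keyholder has approved; any one can veto."""
    return KEYHOLDERS <= approvals

print(may_start_training({"regulator", "independent_auditor"}))  # False: vetoed
print(may_start_training(KEYHOLDERS))                            # True: unanimous
```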
Regulating AI with Compute
What makes compute a particularly appealing lever, according to the researchers, is that—unlike the other key inputs to AI: data and algorithms—compute is detectable, excludable and quantifiable. Most importantly, it’s produced through an extremely concentrated supply chain.
Here’s a quick breakdown of why these four traits make compute governance feasible:
- Detectable: Large-scale AI training and deployment is highly resource-intensive.
- Excludable: The physicality of hardware makes it possible to restrict access to AI chips.
- Quantifiable: Computing power is easily measured, reported and verified (see the sketch after this list).
- Concentrated: The supply chain is complex, inelastic and dominated by a few players.
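On the "quantifiable" point, a common back-of-the-envelope rule (an outside rule of thumb, not taken from the report) estimates training compute as roughly six floating-point operations per model parameter per training token. The model size and token count below are hypothetical:

```python
# Rough training-compute estimate using the common ~6 FLOPs per
# parameter per token rule of thumb (an assumption, not from the report).

params = 70e9   # hypothetical 70-billion-parameter model
tokens = 2e12   # hypothetical 2 trillion training tokens

train_flops = 6 * params * tokens
print(f"{train_flops:.2e} FLOPs")  # ~8.40e+23
```

Because figures like this can be cross-checked against chip counts and training time, regulators could in principle verify reported compute usage against hardware holdings.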
“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, co-lead author of the report, in a press release.
Belfield and his colleagues note that AI compute has grown exponentially since the start of the “deep learning era” in 2010, with the computing power used to train the largest machine learning models doubling roughly every six months. As a result, today’s largest models use approximately 350 million times more compute than the largest models of 2010.
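That headline figure is roughly what the stated doubling rate implies: a six-month doubling time means two doublings per year, so the growth over the roughly 14 years since 2010 works out to a few hundred million:

```python
# Sanity check: does "doubling every six months since 2010" yield the
# reported ~350-million-fold growth? (14 years is an approximation.)

doublings_per_year = 2
years = 14

growth = 2 ** (doublings_per_year * years)
print(f"{growth:,}")  # 268,435,456

# The reported ~350 million corresponds to log2(3.5e8) ≈ 28.4 doublings,
# i.e. just over 14 years of six-month doublings.
```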
“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI,” said Belfield.
AI Governance Policies
The report puts policy ideas into three buckets: increasing the global visibility of AI compute, allocating compute resources based on societal benefit and enforcing restrictions on AI compute.
Each type of policy requires buy-in from across the supply chain as well as from regulators. For example, a regularly audited international AI chip registry would require chip producers, sellers and resellers to report all transfers, giving regulators precise information on the amount of compute held by nations and corporations at any given time.
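As a sketch of what such a registry might track, consider a minimal ledger that updates each party's holdings as transfers are reported. Everything here (class names, fields, parties) is hypothetical; the report proposes the registry, not a data model:

```python
# Minimal, illustrative chip-registry ledger. All names are hypothetical.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ChipTransfer:
    chip_model: str   # the accelerator SKU being transferred
    quantity: int
    seller: str       # producer, seller, or reseller reporting the transfer
    buyer: str        # receiving nation or corporation

class ChipRegistry:
    def __init__(self) -> None:
        # (party, chip_model) -> number of chips currently held
        self.holdings: dict[tuple[str, str], int] = defaultdict(int)

    def record_transfer(self, t: ChipTransfer) -> None:
        """Apply a reported transfer to both parties' running tallies."""
        self.holdings[(t.seller, t.chip_model)] -= t.quantity
        self.holdings[(t.buyer, t.chip_model)] += t.quantity

    def held_by(self, party: str) -> int:
        """The auditor's question: how many chips does a party hold?"""
        return sum(n for (p, _), n in self.holdings.items() if p == party)

# Usage: each reported sale keeps the tally of who holds what current.
registry = ChipRegistry()
registry.record_transfer(ChipTransfer("accelerator-x", 10_000, "FabCo", "CloudCorp"))
print(registry.held_by("CloudCorp"))  # 10000
```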
“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”