The U.S. Shouldn’t Go the Way of Europe on AI

May 8, 2024
COMMENTARY BY
Jake Denton

Research Associate, Tech Policy Center

Jake is a Research Associate in the Tech Policy Center at The Heritage Foundation.

Key Takeaways

Governments across the West seem more interested in policing the content and conduct of AI than in advancing its development.

Even Canada has been quietly courting every AI startup and researcher it can find, hoping to attract the best and brightest from Europe’s beleaguered tech scene.

We can either cling to the illusion of control through burdensome regulations or unleash AI’s full potential by letting it breathe in the open air of transparency.

Last month, the U.S. and UK AI Safety Institutes announced a partnership to research AI safety and develop policy recommendations. This venture appears prudent on the surface, but behind the benign facade of “safety” lies a more troubling agenda: a thinly veiled attempt to exert control over what AI systems can say and do. Under the guise of protecting us from potential risks, governments across the West seem more interested in policing the content and conduct of AI than in advancing its development.

The U.S. and UK collaboration is merely the latest manifestation of a paternalistic “AI safety” movement. Citing valid concerns about national security and the risks posed by the new technology, governments around the world are launching similar initiatives as a smokescreen for greater control.

Take the European Union, for example. In March, it passed the AI Act, a sweeping set of regulations that aims to govern the development and deployment of artificial intelligence across the continent. This may seem like a victory to those who fear AI’s potential, but in reality it has sealed Europe’s fate as a digital vassal. The legislation takes a risk-based approach that calibrates the regulatory burden according to potential harm and purports to give developers clear requirements for the use of AI. While that concept may sound reasonable, the EU’s expansive definition of “high-risk” could have far-reaching consequences.

Under the AI Act, a general-purpose AI model can be classified as posing “systemic risk” based on vague criteria such as “high impact capabilities” or even a unilateral decision by the Commission. This ambiguous language gives the Commission latitude to regulate many of the foundational models that will serve as the building blocks for tomorrow’s AI systems, even those developed as transparent, open-source projects. While demanding greater transparency from certain models developed by private companies is reasonable, subjecting open-source projects to the same scrutiny is a mistake. Open-source development embodies the very values the AI Act purports to uphold—collaboration, transparency, and innovation—and it should be celebrated, not stifled.

By failing to differentiate between closed-source and open-source AI, the act threatens to ensnare transparent projects in the same web of regulations as their closed-source competition. This lack of clarity leaves developers in perpetual uncertainty, never sure whether their good-faith efforts will be met with praise or punishment. The result is a chilling effect on open-source innovation, as the brightest minds in AI navigate an ever-shifting labyrinth of compliance requirements. For example, France’s Mistral, an AI company that offers an open-source foundational model, warned that the “AI Act could kill our company.” Leading European companies—like Siemens, Carrefour, Renault, and Airbus—have echoed that message. And Germany-based Aleph Alpha has reportedly been approached by foreign nations interested in luring the company away to more welcoming shores.

Indeed, rival nations are circling like sharks, ready to sink their teeth into the continent’s most promising startups. The UAE, sensing an opportunity to establish itself as the new capital of the AI world, has wasted no time in rolling out the red carpet for Europe’s brightest minds, tempting them with golden visas and access to its own cutting-edge language model, Falcon. Saudi Arabia, not content to play second fiddle, has thrown down the gauntlet with a staggering $40 billion AI investment fund, a sum that makes the paltry offerings of EU member states look like pocket change. Even Canada has been quietly courting every AI startup and researcher it can find, hoping to attract the best and brightest from Europe’s beleaguered tech scene.

What the United States will do remains an open question at this critical juncture. In 2023, AI-related lobbying surged by a staggering 185%, largely driven by the self-serving interests of a handful of tech giants seeking to advance anticompetitive, licensing-based approaches. And with the AI market rapidly consolidating, the future of the United States could look a lot like Europe’s, where the transformative potential of AI is surrendered to bureaucratic overreach.

It doesn’t have to be this way. Unlike closed-source systems that claim to be safer but remain unaccountable and hidden from public scrutiny, open-source AI invites the world’s brightest minds to stress-test, scrutinize, and improve its models, accelerating the development of safe and responsible technologies. This decentralized approach democratizes access to cutting-edge tools, and that access has historically unleashed a torrent of ideas and breakthroughs. The best way to protect our national security is not to hide this transformative technology behind closed doors, but to ensure its advancement bears the hallmarks of our distinct values. Locking AI away in walled gardens and proprietary black boxes would be a death sentence for our competitiveness on the global stage, stifling innovation and ceding leadership to nations that embrace the power of open collaboration.

If Congress recognizes this moment as a fragile and fleeting opportunity to chart a different course, one that promotes innovation and protects open-source AI development, it could secure America’s place at the forefront of the AI revolution. The stakes could not be higher—the nation that wins the AI race will shape the 21st century. We can either cling to the illusion of control through burdensome regulations or unleash AI’s full potential by letting it breathe in the open air of transparency.

This piece originally appeared in The European Conservative.
