California’s Gift to Big Tech

Jun 12, 2024
COMMENTARY BY
Jake Denton

Research Associate, Tech Policy Center, The Heritage Foundation

Key Takeaways

While presented as an attempt at responsible AI governance, SB 1047, if enacted, would help powerful firms stifle competition. 

Instead of broadly suppressing powerful models, an alternative, more targeted regulatory approach might focus instead on how the model or tool is actually used.

If the advocates of corporate capture prevail, we face a future in which AI becomes an instrument for entrenching the power of established tech elites.

On May 21, the California State Senate passed SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” the first major piece of legislation aiming to regulate AI in the state that hosts much of the tech industry. The bill is now barreling toward a vote in the State Assembly, which is expected to take place sometime this summer. 

The bill was introduced by State Senator Scott Wiener, a San Francisco Democrat known for his progressive views and controversial legislative record. Wiener has touted the bill as a targeted measure aimed at reining in Big Tech, claiming that smaller startups would remain “free to innovate without any new burdens.” However, the bill’s impact extends far beyond its purported scope and threatens to undermine the very progress it claims to protect. While presented as an attempt at responsible AI governance, SB 1047, if enacted, would help powerful firms stifle competition. 

In its original version, the proposed legislation set an arbitrary technical threshold of 10^26 floating-point operations (FLOP); any company whose AI models were trained with less compute than this wouldn’t be subject to oversight. Total FLOP, a measure of the computation consumed in training a model, has become the go-to metric for legislators seeking to determine which models warrant regulatory oversight. This level of computational power is currently within reach of only a handful of the most advanced AI companies and their flagship systems, such as OpenAI’s GPT-4.
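
For a sense of scale, total training compute is often approximated with the rule of thumb that dense-transformer training takes roughly 6 x parameters x training tokens floating-point operations. The Python sketch below applies that heuristic to hypothetical model configurations; none of the parameter or token counts are disclosed figures for any real system:

    # Back-of-envelope check against the bill's original 10^26 FLOP threshold,
    # using the common "6 * parameters * training tokens" heuristic for the
    # total compute of dense transformer training. All figures are hypothetical.

    THRESHOLD_FLOP = 1e26

    def training_flop(params: float, tokens: float) -> float:
        """Approximate total training compute in floating-point operations."""
        return 6 * params * tokens

    hypothetical_models = {
        "small startup model":  (7e9, 2e12),     # 7B params, 2T tokens
        "mid-size lab model":   (70e9, 5e12),    # 70B params, 5T tokens
        "frontier-scale model": (1.8e12, 1e13),  # 1.8T params, 10T tokens
    }

    for name, (params, tokens) in hypothetical_models.items():
        flop = training_flop(params, tokens)
        status = "covered" if flop >= THRESHOLD_FLOP else "not covered"
        print(f"{name}: ~{flop:.1e} FLOP -> {status}")

On these assumptions, only the frontier-scale configuration clears 10^26 FLOP, which is the sense in which the original threshold tracked only the largest labs.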


In theory, this meant the bill was tailored to affect only well-resourced billion-dollar companies. However, the legislation’s inclusion of a vague and expansive “similar-performance” standard made it likely that SB 1047’s reach would extend beyond these cutting-edge systems. In practice, the bill creates insurmountable barriers to entry, pushing potential challengers out of the market and entrenching the dominance of a few tech giants. 

Recent amendments to SB 1047, aimed at appeasing the bill’s critics, have done little to alleviate the concerns of developers and startups in the industry. While the amendments now define a “covered model” as one whose estimated training cost exceeds $100 million, and remove the nebulous “similar-performance” metric, these changes are ultimately insufficient to address the legislation’s core problems. The bill still imposes reporting requirements that place an undue burden on developers with meager resources. To enforce these standards, SB 1047 also conjures into existence a new bureaucratic Leviathan: the “Frontier Model Division,” which is granted sweeping authority to demand compliance from companies regardless of their size or resources. The end result would be to make it nearly impossible for smaller companies to compete.
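
The $100 million figure can be sanity-checked with equally rough arithmetic. The sketch below converts total training compute into an estimated rental cost; the accelerator throughput, utilization rate, and hourly price are illustrative assumptions, not numbers taken from the bill:

    # Rough training-cost estimate against the amended $100M threshold.
    # All hardware figures are assumptions in the ballpark of
    # current-generation accelerators, not quotes from any vendor.

    COST_THRESHOLD_USD = 100e6

    PEAK_FLOP_PER_SEC = 1e15   # ~1 petaFLOP/s per accelerator (assumed)
    UTILIZATION = 0.4          # fraction of peak sustained in practice (assumed)
    USD_PER_GPU_HOUR = 2.50    # assumed cloud rental rate

    def training_cost_usd(total_flop: float) -> float:
        """Estimate the rental cost of performing total_flop of training."""
        gpu_hours = total_flop / (PEAK_FLOP_PER_SEC * UTILIZATION) / 3600
        return gpu_hours * USD_PER_GPU_HOUR

    for total_flop in (1e24, 1e25, 1e26):
        cost = training_cost_usd(total_flop)
        status = "covered" if cost >= COST_THRESHOLD_USD else "not covered"
        print(f"{total_flop:.0e} FLOP -> ~${cost / 1e6:.0f}M -> {status}")

Under these assumptions, a 10^26 FLOP training run costs on the order of $170 million, so the amended cost test and the original compute test end up singling out roughly the same handful of firms.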

Moreover, SB 1047 is built on the faulty premise that the size or power of an AI model makes it inherently dangerous. An alternative, more targeted regulatory approach might focus instead on how the model or tool is actually used. Instead of broadly suppressing powerful models, such an approach would focus on mitigating specific risks, such as privacy violations. 

The consolidation of AI threatened by the California bill will have profound consequences not just for individual firms, but for the trajectory of the technology itself. In recent years, the most groundbreaking AI advances have emerged not from the proprietary models of tech giants, but from the dynamic ecosystem of open-source innovation. OpenAI’s own trajectory illustrates this. After the company initially open-sourced elements of GPT-2, developers built a range of creative applications on the model, from practical tools like a patent-claim generator to recreational games like AI Dungeon. When OpenAI later pivoted to a closed model and partnered with Microsoft, the result was a concentration of power and control, transforming OpenAI from a boundary-pushing startup into what is effectively a subsidiary conducting proprietary research for Microsoft’s gain. In contrast, open foundation models allow nimble startups and entrepreneurs to wield these tools. Newer projects like Meta’s LLaMA have showcased the potential of open collaboration, often rivaling the performance of closed-source models at a fraction of the computational cost.


If the fervor with which tech giants and their legislative allies are moving to regulate open-source AI is any indication, this collaborative, transparent model of development poses a threat to the corporate status quo. Open-source platforms and libraries form the bedrock upon which myriad startups, entrepreneurs, and independent developers construct their ventures. They enable small teams to vie with established players by harnessing shared knowledge and resources. 

The Golden State isn’t the only place where industry incumbents and their allies are attempting to crush upstart competitors. Across the nation, a tidal wave of AI legislation is cresting, with more than 400 bills under consideration in more than 40 states. Some of these bills are good-faith efforts to promote responsible AI development, but the sheer scale of the legislative onslaught and the complexity of the subject matter make it nearly impossible for lawmakers to make informed judgments. Moreover, closer examination reveals the hidden hand of Big Tech behind many of these initiatives, as industry giants and their proxies (from trade associations to freshly minted, lavishly funded nonprofits) work to shape the regulatory landscape to their advantage.

If the advocates of corporate capture prevail, we face a future in which AI becomes an instrument for entrenching the power of established tech elites, allowing them to consolidate their market dominance and shape the future as they see fit. For the United States to maintain its position at the forefront of the AI revolution, we should champion a regulatory framework that favors serendipitous discoveries and unexpected breakthroughs. To this end, we must reject the ploys of tech-industry incumbents to weaponize regulation against open-source challengers, and instead foster an ecosystem in which innovative startups and independent developers can thrive.

This piece originally appeared in Compact Magazine.
