Interviews | Society | East Asia

Why AI Companies Are ‘Modern-Day Empires’

Insights from Karen Hao.

Trans-Pacific View author Mercy Kuo regularly engages subject-matter experts, policy practitioners, and strategic thinkers across the globe for their diverse insights into U.S. Asia policy. This conversation with Karen Hao – award-winning journalist covering the impacts of artificial intelligence on society and author of the newly published “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI” (Penguin Press 2025) – is the 465th in “The Trans-Pacific View Insight Series.” This conversation has been edited and summarized for length and clarity. 

What was the impetus behind your decision to write “Empire of AI”? 

I’ve been covering AI since 2018 and OpenAI since 2019. When everyone had their ChatGPT moment, there was a sudden reset in the public conversation around the technology and the company. It was as if AI were being introduced for the first time, and the predominant narratives about it were coming mainly from OpenAI. There was a lack of context regarding where AI came from and from whom it came, so I wanted to give that history and provide context of AI’s evolution. 

The primary message of my book is that the particular direction of AI today is deeply concerning. This direction is not inevitable. It’s the product of human decisions. We as a society can shape the direction of AI development in the future. 

“OpenAI is now leading our acceleration toward this modern-day colonial world order.” Please explain this statement. 

I call the book “Empire of AI” because these AI companies, such as OpenAI, should be thought of as modern-day empires. These companies check off all the features of empires. First, they lay claim to resources that are not their own and act as if they are. Companies will take the data of billions of users who never consented to having their personal data used in AI, then treat this information as fair game and impose their own rules on data acquisition. 

The second feature of empires is the exploitation of an enormous amount of labor. AI companies contract workers at extremely poor pay to moderate content. OpenAI also defines artificial general intelligence (AGI) as “highly autonomous systems that outperform humans at most economically valuable work,” so their intent is to build a labor-replacing technology. 

The third feature is control of knowledge production. Over the last decade more and more AI researchers have become affiliated with AI companies rather than with universities. Most AI technology research is completely co-opted by AI companies themselves. The foundations of our understanding of AI are filtered through the lens of the empire. 

The fourth feature is a narrative of fierce competition. Past empires aggressively competed with each other to exploit resources and labor based on the idea that they were the good empire and needed to beat the evil empire. Empires of AI operate in a similar way. Not only do they compete with each other under the idea that they are the superior empire, they often deliberately evoke China as the evil empire to justify their continued consolidation of labor, talent, and intellectual property. 

The fifth feature is that empires act under a civilizing mission. They propagate a rhetoric and mantra of belief that they have the moral and scientific clarity to bring modernity to everyone, in essence to bring everyone to heaven rather than to hell. The AI world has quasi-religious movements that embrace what I call the AGI religion, which has two factions – the boomers and doomers. Boomers believe AGI will bring us to utopia. Doomers believe AGI will devastate humanity. Both conclude therefore that AGI must be controlled by their own adherents. I call these movements quasi-religious because there is no evidentiary basis for either of their claims. This ideological zeal is driving a lot of AI acceleration today. 

Describe the Frontier Model Forum and examine Washington’s concerns over the consequences of what China could do with frontier models. 

The Frontier Model Forum (FMF) was founded in 2023 as a consortium of companies – OpenAI, Microsoft, Google, and Anthropic; Meta joined in 2024 – to advance relevant AI research and influence the policy agenda on AI safety risks. The group argued that highly advanced frontier models could become dangerous, such as by being used to develop biological, chemical, and nuclear threats, or by going rogue. 

This rhetoric was a really effective way to shift Washington’s attention away from regulating current models and their impacts on copyright, labor, and the environment, toward regulating future models and completely theoretical risks, such as models “extricating” themselves from servers or subverting human control. The rhetoric played particularly well because Washington was and still is hyper-sensitive to the idea that China could get access to technologies being characterized in this way. 

Analyze the trajectory of China’s AI evolution with ResNet, DeepSeek, and emerging innovations. 

China is in a really interesting moment in its AI trajectory now. The country has developed quite a robust AI ecosystem. The U.S. government amplified anti-China rhetoric, particularly with the former “China Initiative.” This initiative led many Chinese researchers, including those in the AI field, who were living in the U.S. and excited to contribute to the U.S. ecosystem, to move back to China. The COVID-19 pandemic also led much of the AI research talent in China that would otherwise have aspired to go to the U.S. to study and work to stay in China instead. Both these factors resulted in the upskilling of China’s workforce in AI expertise. 

Even though the U.S. government has tried to restrict Chinese access to U.S. AI technology, this Chinese talent has been able to generate workarounds for AI technologies. Chinese companies can still figure out ways to move forward despite U.S. export controls and research restrictions. 

Assess the role and reach of OpenAI in AI competition and cooperation between the U.S. and China. 

Like many companies in Silicon Valley, OpenAI has realized that the best way to remove any political and regulatory obstacles is to evoke the China card. Whenever AI companies face scrutiny, Sam Altman will wave the China card. In his 2024 Washington Post op-ed, “Who will control the future of AI?” published after the OpenAI board crisis, Altman framed the future of AI in terms of democratic AI versus authoritarian AI. 

Recently, OpenAI also spoke out about the need to build “democratic AI rails” around the world, and argued that it should be the one to do so. This notion is flawed. Silicon Valley companies are techno-authoritarian in nature. Their decision-making is not democratic but rather based on their self-interest. Framing AI’s future in terms of democracy versus autocracy is essentially a way to remove any obstacle to their own techno-authoritarian interests.