Interviews | Security

Elsa B. Kania on Artificial Intelligence and Great Power Competition

On AI’s potential, military uses, and the fallacy of an AI “arms race.”

The Diplomat’s Franz-Stefan Gady talks to Elsa B. Kania about the potential implications of artificial intelligence (AI) for the military and how the world’s leading military powers — the United States, China, and Russia — are planning to develop and deploy AI-enabled technologies in future warfighting.

Kania is an Adjunct Senior Fellow with the Technology and National Security Program at the Center for a New American Security (CNAS). Her research focuses on Chinese military innovation in emerging technologies. She is also a Research Fellow with the Center for Security and Emerging Technology at Georgetown University and a non-resident fellow with the Australian Strategic Policy Institute (ASPI). Currently, she is a Ph.D. student in Harvard University’s Department of Government.

Kania is the author of numerous articles and reports, including Battlefield Singularity: Artificial Intelligence, Military Revolution, and China’s Future Military Power and A New Sino-Russian High-Tech Partnership. Her most recent report is Securing Our 5G Future, and she also recently co-authored the policy brief AI Safety, Security, and Stability Among Great Powers. She can be followed @EBKania.

What is artificial intelligence (AI) and what’s your preferred definition of it? 

As a political scientist who tends to be a realist with contrarian and constructivist inclinations, I suppose I could attempt to answer: AI is what states make of it? Moreover, the utility of any definition of AI depends upon its intended purpose. No definition can fully capture its complexity and continued progression. 

The very idea of “artificial intelligence” can convey as much salience as its technical realities. Of course, the meaning of AI has evolved throughout its history. I like to look back to the objective of the Dartmouth Conference of 1956, which started from “the conjecture that every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.”

Today’s conversations on AI in military affairs concentrate on variants of “narrow” artificial intelligence, rather than artificial general intelligence or superintelligence, which remain perhaps distant but nonetheless consequential possibilities.

Current discussions of AI primarily concentrate on machine learning, which is the process of using algorithms to learn from data. In particular, much of the most exciting progress in recent years has leveraged deep learning, a technique that involves the use of layers of artificial neural networks, which are inspired by the structure of the human brain.
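
To make that idea concrete, here is a minimal illustrative sketch, my own and not drawn from the interview, of a small layered network learning from data with plain gradient descent. The XOR dataset, the four-unit hidden layer, and all hyperparameters are arbitrary choices for the illustration.

```python
# A tiny two-layer neural network learning XOR with gradient descent in NumPy.
# Illustrative only: the architecture and hyperparameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic problem a single linear layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights: input -> hidden (4 units) -> output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for step in range(10000):
    # Forward pass through the layered network.
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    p = sigmoid(h @ W2 + b2)          # predicted probabilities

    # Backward pass: gradients of the squared error w.r.t. each parameter.
    d_p = (p - y) * p * (1 - p)
    d_W2 = h.T @ d_p
    d_b2 = d_p.sum(axis=0)
    d_h = (d_p @ W2.T) * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # Gradient descent update: the "learning from data" step.
    W1 -= lr * d_W1; b1 -= lr * d_b1
    W2 -= lr * d_W2; b2 -= lr * d_b2

# After training, predictions should approach [0, 1, 1, 0].
print(np.round(p, 2))
```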

At a basic level, AI involves software that leverages data for learning but also requires hardware to harness the power of significant computing capabilities to enable that process. As such, AI can be expected to diffuse rapidly but also depends upon that constraint of ‘compute.’ This dynamic elevates the strategic importance of advances in semiconductors that are tailored to enable AI applications, from Graphics Processing Units (GPUs) to new advances in neuromorphic computing that can enable novel architectures for chips. 

It is inherently challenging to define what AI is or can achieve when the field is so dynamic and evolving so rapidly. For the time being, AI/ML techniques are often limited by the availability of data, but that too may change with advances in the use of synthetic data and techniques that leverage reinforcement learning, such as AlphaGo Zero’s capability to learn from self-play alone. AI also suffers at present from issues of reliability and potential vulnerabilities, which the field of AI safety is looking to mitigate. 
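
As a toy illustration of what learning from self-play alone can mean, the sketch below, my own and not AlphaGo Zero’s actual approach (which combines deep neural networks with Monte Carlo tree search), trains a single table of move values for a simple take-away game purely by having one learner play both sides; the game, the reward scheme, and all parameters are arbitrary assumptions.

```python
# Self-play on a take-away game: remove 1-3 objects from a pile of 10;
# whoever takes the last object wins. One value table plays both sides
# and is updated from game outcomes alone.
import random

random.seed(0)
ACTIONS = [1, 2, 3]          # legal removals
Q = {}                       # (pile, action) -> estimated value for the mover

def q(s, a):
    return Q.get((s, a), 0.0)

def choose(pile, eps):
    # Epsilon-greedy choice over legal moves.
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: q(pile, a))

alpha, eps = 0.5, 0.2
for episode in range(20000):
    pile, history = 10, []
    while pile > 0:
        a = choose(pile, eps)
        history.append((pile, a))
        pile -= a
    # The player who took the last object wins (+1); the other side loses (-1).
    reward = 1.0
    for s, a in reversed(history):
        Q[(s, a)] = q(s, a) + alpha * (reward - q(s, a))
        reward = -reward     # flip perspective for the previous mover

# The learned greedy policy should generally leave a multiple of 4 behind.
print({pile: choose(pile, eps=0.0) for pile in range(1, 11)})
```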

In this past year alone, there have been a number of intriguing and concerning breakthroughs in AI that illustrate its potential and limitations. For instance, advances in the use of generative adversarial networks (GANs) have enabled the creation of ‘deep fakes,’ which can range from the uncanny and amusing (e.g., photos of cats that don’t exist) to the greatly concerning (e.g., alterations of videos that could be used to promulgate disinformation). Notably, OpenAI has released a model known as Generative Pre-Training (GPT), which can generate synthetic text and could enable the automated creation of fake content.

I tend to be most excited by the interdisciplinary applications of AI in scientific research. Neural networks have been used to generate fast approximate solutions to the three-body problem. In astronomy, AI has facilitated efforts to find new galaxy clusters. To date, AI has also been applied to drug discovery and to investigations of the human genome. The future possibilities may be limited only by human imagination. For all the fear and concern about AI, we must not neglect to consider that unique potential—and the wonder of it all.

The U.S. Department of Defense (DoD) will spend almost $1 billion on research and development (R&D) of AI and machine learning technologies in the current fiscal year. The Chinese and Russian defense ministries are also allocating significant funds for comparable R&D efforts. What makes AI so significant for militaries?

Chinese, Russian, and American leaders have recognized that AI is a strategic technology that could prove incredibly consequential for national competitiveness. It is hardly surprising that multiple militaries worldwide are exploring the potential of AI in military affairs. Increasingly, military strategists and researchers anticipate that AI could change the character of conflict, and some leaders have even postulated that AI could change the nature of warfare. 

The future trajectory of these emerging capabilities remains to be seen, yet AI has become a new direction of military competition in the pursuit of operational advantage. The first military to leverage the full potential of AI could achieve an edge over its competitors, potentially disrupting the military balance.

The U.S. military has looked to maintain technological superiority, while the Chinese military has pursued the opportunity to overtake it and render American advantages obsolete, and the Russian military has actively experimented throughout its current engagements. There are some commonalities in the capabilities that these competitors—and major militaries worldwide—are currently pursuing.

The future trajectory of AI in military affairs is difficult to anticipate. In the near term, the most promising applications may be the most basic and the most practical. Some of the initial applications include intelligence, surveillance, and reconnaissance, from improving the efficiency of imagery analysis to the deployment of ‘AI satellites.’ Already, predictive maintenance for major weapons systems has proven beneficial, and advances in logistics could be quite consequential. 

AI may first transform warfare in virtual domains. In particular, AI will be important to advances in cyber defense, enabling greater speed and automation in cyber operations. The new techniques in cognitive electronic warfare promise improved capabilities in contesting dominance of the spectrum. The application of AI/ML to psychological operations through improved profiling and precision in targeting is also concerning. In an era where data can be decisive, AI could transform the contestation of decision superiority, from intelligence to future command and control. 

At the tip of the spear, increased autonomy in weapons systems is expected to enable operations in denied and heavily contested environments and could provide an advantage on the battlefield. There are intense concerns about the legal and ethical complexities and the potential for risks or accidents that could arise with future Lethal Autonomous Weapons Systems (LAWS), yet debates on LAWS and even definitions remain heatedly contested in international diplomatic engagements. The Chinese military is developing what it has characterized as AI weapons or ‘intelligentized’ weapons systems that are more capable of selecting and engaging targets. For instance, advances in swarming raise the possibility of saturation attacks that could overwhelm defenders.

Who is the world’s leading military power in the field of AI at the moment, in your opinion? Then-U.S. Secretary of Defense James Mattis warned in 2018 that the United States was falling behind its so-called peer competitors, principally China, when it comes to AI. Do you agree with that statement?

In some of my current research, I am concentrating on creating better metrics to answer and evaluate just that question. It is difficult to determine what leadership in AI will involve or resemble, since we are talking about very disparate capabilities and applications. So perhaps the core question is not who is leading, but which aspects of leadership and which specific competencies or comparative advantages will prove most consequential.

There may be no single answer. For instance, potentially, the U.S. military may become more capable in leveraging AI in cyber operations, but the Chinese military could achieve greater advances in hypersonic weapons systems that can operate autonomously, and the Russian military may possess more experience in integrating unmanned systems in urban warfare. That is, different militaries may lead or possess particular proficiency in different applications, perhaps building upon existing strengths or priorities in investment. 

The impact of AI on the military balance may be dynamic and difficult to ascertain. Unlike more traditional capabilities, advances in AI cannot be readily counted or measured. It is likely that certain capabilities will be revealed and signaled. For instance, the Chinese military has demonstrated advances in swarming and announced intentions to deploy autonomous submarines. Certain Chinese drones and unmanned vessels are described as autonomous. 

However, the features of autonomy cannot be readily verified or evaluated without greater understanding of the underlying technical characteristics and limitations of weapons systems. I’d argue this very ambiguity could increase the risks of accidents or miscalculation, including misestimation of the military balance. 

Of course, for any military, talent and training will be critical determinants of the capacity to leverage AI. The new demands for technical proficiency will require new initiatives to recruit and retain talent and to provide opportunities for continual learning and professional advancement. Inherently, militaries tend to be hierarchical organizations, and the adjustment to advance new priorities may be challenging, requiring not only support from leaders at the top but also a willingness to empower new specialists.

On this front, I am cautiously optimistic to see new initiatives in the United States like the Armed Forces Digital Advantage Act, recently incorporated into the new National Defense Authorization Act, which is intended to establish digital engineering as a core competency and create new pathways for training and promotion. At the same time, the Chinese military is concentrating on creating new military research institutes to pursue advances in AI and recruiting more civilian technical specialists, while massively expanding initiatives in education and training. 

For the time being, the U.S. military appears to remain the leader and the standard relative to which the Chinese military measures its own advancement. However, American leadership in AI is hardly assured and shouldn’t be assumed, considering Chinese ambitions, investments, and advancements. The PLA is determined to leapfrog ahead of the U.S. military and to surpass it in the course of this revolution in military affairs (AI-RMA?), including through developing capabilities designed to exploit American weaknesses and vulnerabilities. 

Ultimately, I think it is far too soon to call this. At the moment, these trends remain nascent. The long-term trajectory of this competition will evolve and depend on factors that include tolerance of risk, progress in testing and assurance, the vulnerability of AI-enabled capabilities to adversary exploitation, and the willingness of various militaries to embrace these technological transformations, among others. So, ask me again in a couple of years perhaps?    

Can you briefly outline some of the major differences between the United States’ approach to developing and deploying AI-enabled technologies versus Chinese efforts? 

The significant differences between American and Chinese approaches will emerge as a result of their distinct strategic cultures, historical experiences, and organizational characteristics. To start, the U.S. and China have very different political economies that will shape their approaches to AI research and development. Chinese leaders are seeking to establish an innovation ecosystem that combines strong state support and direction with a market orientation in which commercial enterprises compete. The U.S. government has been slow to support and invest in science and research to a degree commensurate with the importance of these technologies.

The U.S. military has extensive experience in low intensity conflict in its recent history, and the turn to leveraging data has arguably emerged out of these experiences. The U.S. military’s initial initiatives in AI concentrated on tactical utility, including in Project Maven, which concentrated on the use of AI/ML for full-motion video analysis. By contrast, the Chinese military, the People’s Liberation Army (PLA), is fighting to innovate and confronting the particular challenges of peacetime innovation, lacking operational experience but actively seeking to learn from the study of foreign militaries and new initiatives in war-gaming. 

The Chinese military has concentrated on the U.S. as a powerful adversary since the 1990s, whereas the U.S. has more recently reoriented towards great power competition. The PLA appears to be prioritizing applications of AI that reflect its priority domains and missions, including in space systems, undersea warfare, and hypersonics. Not unlike the U.S. military, the PLA is concerned with ‘defense big data’ and encountering the bureaucratic challenges of managing and leveraging it, while seeking to expand its acquisition of new sources of data with relevance to military applications.

The U.S. and Chinese militaries will likewise encounter different bureaucratic obstacles and challenges. The PLA is in the midst of highly disruptive reforms that remain ongoing in its effort to promote greater jointness in operations. Xi Jinping has overseen the implementation of these reforms and elevated the importance of innovation. Since 2014, the PLA has been concerned about the new Revolution in Military Affairs (RMA), concentrating on a range of emerging technologies. The U.S. military is fighting to redirect its attention to strategic competition and to reallocate resources in accordance with evolving priorities.

For the PLA, human capital has been and continues to be an acute concern, so it is ramping up recruitment of technical specialists and new training and educational programming. Meanwhile, the absolute requirement of Party control over the military, from the role of political commissars to expectations of ideological conformity through political work, may also influence the PLA’s approach to these emerging technologies. 

The PLA’s approach to ethics and to law-of-armed-conflict considerations remains unclear. The PLA is concerned with and considering issues that involve human control, including the distinction between humans in, on, and out of the loop. The Chinese military is exploring and engaging with these legal issues, but it lacks experience with them and has not operationalized or fully institutionalized their application to date, though there is talk of addressing these issues more in future reforms.

In that context, why is it not helpful to talk about an AI arms race that is supposedly taking place between the United States and China? 

Honestly, I have been absolutely exhausted by talk of an “AI arms race.” There have been multiple attempts by many of us in the field to counter that discourse and reframe the complex dynamics of cooperation and competition in AI today. Namely, “AI” is not a weapons system. To talk of arms racing misrepresents the complexity of this competition and fails to capture the multifaceted implications of today’s advances in AI. 

In fact, quite extensive research collaboration persists between American and Chinese researchers. There is a curious juxtaposition between the talk of competition in AI and the concurrent collaboration in academia and industry. At the same time, the reality of rivalry is nonetheless apparent when it comes to concern with military competition.

I’m also concerned that talk of arms racing distracts attention from the importance of long-term economic trajectories between the United States and the People’s Republic of China. Indeed, the economic dimension of this rivalry is the foundation of long-term competitiveness. Chinese leaders are pursuing a national strategy for innovation-driven development, concentrating on science and technology as critical enablers of national power. It is imperative for American strategy to prioritize investing in innovation. 

What advantages does the United States have over China and Russia in the development of AI-enabled military technologies? What are some of Washington’s disadvantages vis-à-vis those countries? 

Perhaps the greatest advantage that the United States currently possesses is the dynamism and robustness of its innovation ecosystem. American universities and enterprises have been at the center of recent advances in AI. The United States has continued to attract top talent and remains a favorable environment for entrepreneurship. Despite current challenges, I also believe in the strength of our institutions. Today’s great power rivalry is a systemic competition, in which the vitality and success of our democracy should be a critical metric for success. 

I fear our greatest disadvantage may lie in our politics today, particularly hostility towards scientific expertise and immigration, but I hope these current conditions can be overcome in the future. The ethos that inspired the U.S. focus on science as the ‘endless frontier,’ catalyzing American investment in scientific research, must be rekindled. Throughout our history, immigration has been critical to the American economy and entrepreneurship, and current policies risk squandering this critical advantage. At the same time, the U.S. education system badly needs investment. For the United States, strategic competition must start at home.

Do you see AI as an evolutionary or revolutionary technology in future warfare? Do you think that there is a risk that the impact of AI-enabled technology on the modern battlefield is being overhyped? 

I suppose I’d argue that AI will be evolutionary in the near term and potentially revolutionary in the more distant future. The AI hype cycle is not conducive to nuanced discussion of the real potential and serious difficulties that may come into play with the operationalization of AI. I worry that this tendency towards exuberant enthusiasm about the potential of AI-enabled capabilities can also distract from careful consideration of the technical complexities, limitations, and vulnerabilities that arise with the development and deployment of AI systems.

In particular, there are serious reasons for concern about AI safety, security, and reliability. In some cases, AI systems have contributed to unexpected and unintended externalities. Such complex systems can display emergent behavior in real-world settings. Already, we’ve seen serious accidents occur as a result of the poor performance of AI in real-world conditions. In one fatal accident, a self-driving car struck a pedestrian after failing to identify her as she was jaywalking. So too, serious issues have arisen with bias in AI systems that reflect patterns of prejudice or distortion in the data that trains them, including a recruitment tool that manifested bias against women. The dynamics of algorithmic discrimination have also involved disparities in accuracy based on race and gender.

Perhaps, we have more reasons for concern about artificial ignorance and artificial incompetence for the time being? 

What do you see as the biggest AI-related danger with regard to U.S.-China and U.S.-Russia great power rivalry? Do you think it can negatively impact strategic stability between these countries?

AI could disrupt the current military balance, while exacerbating threats to strategic stability. There are reasons for concern that advances in AI could exacerbate the vulnerability of second-strike capabilities, which may be particularly concerning to China, given its more limited nuclear arsenal and a posture characterized as assured retaliation. At the same time, the PLA has been expanding and modernizing its nuclear forces. There are initial indications that Chinese research might incorporate AI/ML into nuclear-relevant systems, including to improve early warning, targeting, and command and control.

Current debates on the notion of a new era of counterforce have highlighted how the convergence of interrelated technological developments, from sensors, to computing, data processing, and now artificial intelligence, could erode the foundations of nuclear deterrence. These issues will merit further analytic and academic attention as these technologies are operationalized. At the same time, the incentives for a first strike, particularly in space and the cyber domain, and the imperative of speed that could create momentum for escalation, could heighten the risks.

Not only emerging capabilities but also the risks of accidents that may arise with such nascent, untested technologies should be cause for concern. In retrospect, it is truly remarkable that humanity survived the Cold War, considering the false alarms and nuclear accidents that nearly caused catastrophe. The increased complexity that AI will introduce into military affairs can increase the risks of an accident or unintended engagement. Given these risks, it is imperative for great powers to be proactive in pursuing pragmatic engagement on issues of AI safety, security, and strategic stability in order to explore options for risk mitigation, as our recent policy brief highlights.

What is the one myth or often repeated inaccuracy that you are trying to dispel with your research when it comes to AI and the ongoing military competition between the United States, China, and Russia? 

Initially, I started my research on Chinese military innovation because I was concerned that the U.S. national security community was consistently underestimating Chinese ambitions and advances, failing to recognize the full extent of this competitive challenge. At times, I have also been concerned about a tendency towards fatalism or overestimation of Chinese capabilities that neglects to consider China’s weaknesses and difficulties, from talent shortfalls to limited operational experience and the influence of CCP ideology that may impede innovation. American strategy must be informed by careful assessments and a sophisticated understanding of our competitors. This is the core intellectual challenge of strategic competition.
