A Blueprint for a Functional China-US Working Group on AI

Provided the United States and China can focus on practical considerations, the promised working group could be a significant source of stability.

Late last year, U.S. President Joe Biden and China’s leader Xi Jinping met in San Francisco in an attempt to restabilize the relationship after a troubled year. The meeting ended without a concrete agreement on AI, despite rumors of one, but both sides committed to forming a working group on AI.

Since then, little progress has been made in defining this working group. Even so, it carries the seeds of potential success. Meaningful talks between the two military, economic, and technology superpowers could even have a galvanizing effect on other international efforts on AI that are currently deadlocked, such as the United Nations’ expert group. What is most needed between the two powers is regularized, structured contact on critical AI safety and stability issues. Provided the United States and China can focus on practical considerations, this working group could be a significant source of stability.

Creating regular, repeated, and reliable contact will be the most difficult task, but there are steps the working group members can take to ensure it. Both sides should agree to keep the agenda as focused as possible rather than using the group as a forum for grandstanding. There are political incentives to air grievances: China will want to complain about U.S. export controls, and the United States will want to raise the use of AI to enable human rights abuses in China. It will be impossible to avoid these topics entirely, but since the working group will not resolve them, they should not consume the whole conversation. Instead, the group should move quickly from those grievances to issues where there is a congruence of interests specific to the bilateral relationship.

The Bletchley Declaration, a product of the U.K. AI Safety Summit that both the United States and China signed, includes proposals this working group could build on. In the context of the China-U.S. relationship, sharing AI testing and evaluation standards and procedures would both advance AI safety and trust and be politically achievable. The issue is largely a civilian one: testing, evaluation, safety, risk, and transparency standards affect far more civilian users than military ones, given the rarity of military AI systems. Removing security implications from the conversation would also create room for more candid negotiation.

The working group itself could develop these standards if the delegations include enough members with technical AI expertise, but it is likelier, and easier, to make it a venue for developing the means of sharing and agreeing to standards that already exist. A key consideration is how to ensure that each country can share information about safety incidents and practices in a way the other will believe and act on.

There are also military recommendations within the Bletchley Declaration that the working group could explore, such as codes of conduct. A good start would be an agreement on procedures for when drones become uncommunicative while both navies are operating in the same area. This is particularly important to work out in advance: the two navies frequently come into contact in the Western Pacific and are likely to have more and more autonomous systems accompanying their fleets in the future.

An agreement solidifying human control of nuclear launch decisions is a more fraught topic, but one the working group could still explore. Ostensibly, both countries have already signaled support: the United States stated clearly in its Nuclear Posture Review that humans will remain in control of nuclear weapons, and China has expressed the need for human control at the United Nations. Given these congruent interests, China and the United States should lay out to each other how they will operationalize these principles, reinforcing nuclear stability and forestalling one of the most dangerous uses of AI.

Given the recent trajectory of China-U.S. relations, hope of a major breakthrough should always be tempered, but if the working group is given the chance to blossom through a practical focus, there is plenty of opportunity for serious improvements in AI safety. Testing standards and transparency can build trust and create safer AI systems. Codes of conduct can prevent dangerous accidents in areas of military operation. Agreements about human control of nuclear weapons could begin a norm that helps prevent the destruction of all life.

This working group can have a significant calming effect on the nuts and bolts of the China-U.S. relationship if it can avoid falling prey to the same lack of commitment that has undermined military-to-military contacts.