
The United States and China will discuss artificial intelligence safeguards, including creating a protocol to keep powerful artificial intelligence models out of the hands of non-state actors, Treasury Secretary Scott Bessent said Thursday.
Mr. Bessent, who spoke from Beijing in an interview with CNBC, did not provide further details, including when those talks would take place. However, Chinese leader Xi Jinping and President Trump were expected to discuss AI at their summit in the Chinese capital.
If the talks take place, it would be the first time the two countries have formally addressed the issue during Mr. Trump’s second term. The capabilities and uses of artificial intelligence have grown rapidly, raising fears that the technology could be misused by hackers and terrorists, or that it could escape human control.
“The two AI superpowers are going to start talking,” Mr. Bessent said. “We’re going to create a protocol in terms of how to follow best practices for artificial intelligence to ensure that non-state actors don’t get hold of these models.”
Still, Mr. Bessent made clear that the fierce competition between the United States and China for supremacy in AI — which has been a major obstacle to security cooperation — remained front and center for American policymakers. Officials and experts in both countries have argued that they cannot slow down technological development and risk losing out to their rivals.
Mr. Bessent said the United States is willing to work with China on AI security because “the Chinese are substantially behind us” when it comes to developing the technology.
“I don’t think we’d be having the same discussions if they were that far ahead of us. So we’re going to put US best practices, US values into it and then release it into the world,” Mr. Bessent said.
Experts suggest that China’s AI models may be several months behind the leading US models.
Another obstacle to US-China cooperation on AI security is that the two countries generally focus on different potential threats.
American experts generally point to existential risks, such as the possibility of artificial general intelligence or superintelligence that surpasses human intelligence. Chinese scholars and officials have increasingly highlighted risks related to social stability and information control, such as the potential for chatbots to produce content that challenges Chinese leadership and politics.
Still, researchers in both countries highlighted some shared risks, such as the possibility of using artificial intelligence to develop new biological weapons.