
President Trump, who has championed a hands-off approach to artificial intelligence and given Silicon Valley a free hand to roll out the technology, is considering imposing government oversight of new artificial intelligence models, according to US officials and people briefed on the talks.
The administration is discussing an executive order to create an artificial intelligence task force that would bring together technology executives and government officials to examine potential surveillance practices, according to U.S. officials who declined to be identified to discuss sensitive policies. Among the potential plans is a formal government review process for new AI models.
In meetings last week, White House officials told executives from Anthropic, Google and OpenAI about some of those plans, people briefed on the talks said.
The task force is likely to consider a range of oversight approaches, officials said. But the review process could be similar to the one being developed in Britain, which has tasked several government bodies with making sure AI models meet certain safety standards, people in the tech industry and the administration said.
The discussions signal a sharp turnaround in the Trump administration’s approach to AI. Since returning to office last year, Mr. Trump has been a major booster of a technology he says is vital to winning the geopolitical battle against China. Among other things, he quickly rescinded the Biden administration’s regulatory process that required AI developers to conduct security assessments and report on AI models with potential military applications.
“We’re going to make this industry absolutely top because there’s a beautiful baby born right now,” Mr. Trump said of AI at an event in July. “We have to grow that child and let that child thrive. We can’t stop it. We can’t stop it with politics. We can’t stop it with foolish rules, or even stupid rules.”
Mr. Trump left room for some rules, but added that they “must be more brilliant than the technology itself.”
But Mr. Trump finds himself increasingly isolated on the issue of artificial intelligence, as public concerns about the technology’s threats to jobs, energy prices, education, privacy and mental health have coalesced. A Pew Research Center poll last year found that 50 percent of Republicans and 51 percent of Democrats said they were more worried than excited about the increased use of artificial intelligence in everyday life.
The hands-off policy began to change last month after the start-up Anthropic announced a new AI model called Mythos. The model is so powerful at identifying security vulnerabilities in software that it could lead to a “cybersecurity showdown,” said Anthropic, which has declined to release it.
The White House wants to avoid any political fallout if there were to be a devastating cyberattack involving artificial intelligence, people in the tech industry and the administration said. The administration is also evaluating whether new AI models could yield cyber capabilities that could be useful to the Pentagon and US intelligence agencies, they said. To get ahead of models like Mythos, some officials are pushing for a rating system that would give the government first access to AI models but not prevent their release, people briefed on the talks said.
The shift in AI policy has sown confusion. As conversations between the White House and tech companies continue, some executives have argued that too much government oversight will slow U.S. innovation against China, people briefed on the discussions said. But the companies also disagree on how the United States should proceed with possible regulation.
“The technology is developing extremely quickly and there are few formal procedures, but they also don’t want excessive regulation,” said Dean Ball, who was a senior adviser on AI in the Trump administration before leaving for the Foundation for American Innovation last year. “It’s a tricky balance.”
A White House official said discussions of any potential executive order were “speculation” and that Mr. Trump would make any policy announcements himself.
The shifting AI policy coincides with a change in leadership in the White House. In March, David Sacks, the White House AI czar who spearheaded the administration’s deregulation efforts, said he was leaving his role. Susie Wiles, the White House chief of staff, and Treasury Secretary Scott Bessent stepped into Mr. Sacks’s shoes, some of the people said. Ms. Wiles and Mr. Bessent have told people outside the administration that they plan to have a greater say in shaping AI policy.
But Ms. Wiles and Mr. Bessent’s plans were complicated by a bitter dispute between the Pentagon and Anthropic. This year, the startup and the Pentagon have been embroiled in a battle over a $200 million contract and how the military should use AI in war. When the two sides failed to agree on terms, the Pentagon suspended the government’s use of Anthropic technology in March. Anthropic has since sued the government.
The conflict has made things difficult for some government agencies that have come to rely on Anthropic technology, according to military, intelligence and other US officials. Anthropic’s AI is still used by the military in a system known as Maven, which helps analyze intelligence and suggest targets for airstrikes in the Iran war.
The National Security Agency also recently used Anthropic’s Mythos model to assess vulnerabilities in U.S. government software, people familiar with the work said.
Last month, Ms. Wiles and Mr. Bessent met at the White House with Dario Amodei, Anthropic’s chief executive, with a focus on getting the government to use the company’s technology again. Both sides later described the meeting as “productive.”
Officials said that if the administration moves forward with vetting AI models, the task force will help identify agencies to assist with the effort. Since no single federal agency is responsible for all of the government’s cybersecurity work, some officials said the best way to proceed is to let the NSA, the White House Office of the National Cyber Director and the director of national intelligence oversee the model reviews.
The task force could also look at whether there is a role for the Center for AI Standards and Innovation, an agency the Biden administration set up to review artificial intelligence models that are voluntarily shared with the government. Under Mr. Trump, the organization has been sidelined, industry people said, even though the White House said in an AI policy document that the group should play a role in evaluating the “performance and reliability of AI systems.”
Any of these moves would take the administration far from the regulatory philosophy that Vice President JD Vance outlined in a speech at the International AI Conference in Paris last year. At the time, he warned industry and government officials that “over-regulation of the AI sector could kill a transformative industry just as it’s developing.”
“The future of AI is not going to be won by hand-wringing about safety,” he said. “It will be won by building.”
Cade Metz, Kate Conger and Tyler Pager contributed reporting.





