
OpenAI plans to hold a grand opening Wednesday for its first lobbying office in Washington, called the Workshop. The artificial intelligence startup said it created the space — part lab, part showroom — just blocks from the White House to better work with lawmakers.
The office is part of OpenAI’s increasingly aggressive efforts to influence AI policy. The company lobbied for the expansion of data centers needed to power the technology and pushed for free use of copyrighted material. It spent $1 million on federal lobbying in the first quarter, double the amount a year earlier, according to congressional disclosures.
Also downtown, the AI rival Anthropic opened its first Washington office in April as it battles with the Pentagon over the use of its technology. It has hired six lobbying firms in recent months and increased its Washington lobbying spending tenfold, to $3 million last year, according to public disclosures.
Activity by AI companies has reached a fever pitch in the nation’s capital as they open offices, hire lobbyists and hold major conferences to present policy ideas and promote their technologies.
A quarter of the 13,000 federal lobbyists in Washington are involved in AI issues, up from 11 percent in 2023, according to an analysis of congressional disclosures by Public Citizen, a nonprofit watchdog group. Meta, Nvidia and Alphabet, the parent company of Google, spent a combined $47.8 million on federal lobbying last year, a 22 percent increase from the year before, according to Senate disclosures. Meta and Alphabet spent the most.
“We’re seeing an unprecedented flood of lobbying money for AI companies to protect their profits and their image at a time when Americans are very concerned about the technology,” said Isabel Sunderland, technology policy director at Issue One, a nonprofit government accountability group.
The push to win over federal lawmakers has gained urgency as states this year introduced dozens of bills to put guardrails around AI. The Trump administration — which once said U.S. companies should mostly have a free hand to develop the technology — is also considering imposing government oversight of new AI models.
The future of AI development is at stake. OpenAI, Meta and Google have pushed for little or no regulation, saying restrictions would hurt their chances in the AI race with China. Anthropic and others have supported the new laws and warned of the technology’s potential dangers.
AI also faces public skepticism ahead of the November elections. Voters have expressed concerns about the energy-intensive data centers that power the technology and about rising electricity costs, along with worries that it could upend the economy.
Parents’ groups have also sounded the alarm about children’s interactions with chatbots, which have been linked to the suicides of some teenagers. In a recent NBC News poll, 57 percent of registered voters said the risks of AI outweigh its benefits, compared with 34 percent who said the opposite.
Most AI companies said they are open to legislation that encourages innovation and development of the technology.
“This is a general-purpose technology on the scale of the wheel, the printing press, the internal combustion engine, electricity,” said Chris Lehane, OpenAI’s chief global affairs officer. “We at OpenAI have felt for some time that the conversation about policy solutions and policy needs has to be as transformative and as big as the underlying technology itself.”
(The New York Times has sued OpenAI and Microsoft, alleging copyright infringement of news content related to AI systems. Both companies have denied the suit’s claims.)
“We’re pushing for policymakers to come together on federal legislation that supports America’s leadership in AI,” said Julie McAlister, a Google spokeswoman.
In addition to AI companies, communications firms, trade groups and think tanks have proliferated on both sides of the AI issue in Washington.
Last year, the Facebook co-founder Dustin Moskovitz’s philanthropy, Coefficient Giving, funded a new communications and lobbying group pushing for AI regulation. The group, the Alliance for Secure AI, wants strict rules for chatbots to protect young people. It also aims for greater safety oversight of AI models and has opposed efforts by President Trump and some federal lawmakers to prevent states from creating AI laws.
The group’s executive director, Brendan Steinhauser, a former Tea Party leader, has lobbied Congress and met with Texas lawmakers, including Sen. Angela Paxton, to push for child safety and other measures. He has also appeared on podcasts and in other media.
“I’ll go on ‘Bannon’s War Room,’ NPR or The New York Times — anywhere and everywhere to get out the message that politicians need to act quickly to protect citizens,” Steinhauser said.
OpenAI and Anthropic have been the most active. In September, Anthropic made its official Washington lobbying debut with a daylong event at the city’s Union Station. Anthropic’s co-founders Dario Amodei and Jack Clark welcomed hundreds of politicians and Trump administration officials to showcase the company’s technology.
“We have always advocated for basic model transparency requirements,” Mr. Amodei said at the event. “Many of the risks we fear most are coming at us. They are on the horizon.”
Anthropic tripled its number of employees last year and plans to triple it again this year. In January, it named its first lobbying chief, Anthony Cimino.
In February, the company became embroiled in a dispute with the Pentagon over the use of artificial intelligence in warfare and was labeled a “national security supply chain risk.” In March, it hired the Trump-connected lobbying firm Ballard Partners to bolster its case with the White House.
Anthropic then opened its Washington office last month with a large event space to showcase its technology to regulators and to discuss the effects of artificial intelligence on national security, the economy and safety.
Last month, Anthropic also released a new AI model, Mythos, which it said is so powerful at identifying security vulnerabilities in software that it could lead to a cybersecurity “showdown.” That helped spur discussions at the White House about government oversight of AI models.
Anthropic and OpenAI have been in regular discussions with the White House about a potential executive order on model testing, the companies said.
“Our focus on the safe development of artificial intelligence and ensuring that America leads in artificial intelligence requires a close partnership between industry and government,” said Sarah Heck, chief policy officer at Anthropic.
Once OpenAI opens its office in Washington — in the Gallup Building, a former Masonic temple — it plans to hold a series of inaugural events there.
Sessions include training local high school students and older adults on how to use AI. The company will then begin hosting policy discussions with lawmakers and Trump administration officials in the space.
“What’s that line from ‘Hamilton’?” OpenAI’s Mr. Lehane said. “This will be the room where it happens.”





