
For more than two years, technology leaders at the forefront of artificial intelligence development have made an unusual request of lawmakers: they wanted Washington to regulate them.
Tech executives warned lawmakers that generative AI, which can produce text and images that mimic human creations, had the potential to disrupt national security and elections, and could eventually eliminate millions of jobs.
AI could go "pretty bad," Sam Altman, OpenAI's chief executive, testified before Congress in May 2023. "We want to work with the government to prevent that."
But since the election of President Trump, the tech leaders and their companies have changed their tune, and in some cases reversed course, with bold requests for the government to stay out of their way, in what has become their strongest push yet to advance their products.
In recent weeks, Meta, Google, OpenAI and others have increasingly asked the Trump administration to block state AI laws and to declare that it is legal for them to use copyrighted material to train their AI models. They are also lobbying to use federal data to develop the technology, as well as for easier access to energy sources for their computing demands. And they have asked for tax breaks, grants and other incentives.
The shift has been enabled by Mr. Trump, who has declared that AI is the nation's most valuable weapon to outpace China in advanced technologies.
On his first day in office, Mr. Trump signed an executive order rolling back safety-testing rules for AI used by the government. Two days later, he signed another order, soliciting industry proposals to create a policy to "sustain and enhance America's global AI dominance."
Tech companies "are really emboldened by the Trump administration, and even issues such as safety and responsible AI have completely disappeared from their concerns," said Laura Caroli, a senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies, a nonprofit think tank. "The only thing that counts is establishing U.S. leadership in AI."
Many AI policy experts worry that such unbridled growth could be accompanied by problems including the rapid spread of political and health disinformation; discrimination in automated financial, job and housing decisions; and cyberattacks.
The tech leaders' about-face is sharp. In September 2023, more than a dozen of them endorsed AI regulation at a summit on Capitol Hill organized by Senator Chuck Schumer, Democrat of New York and the majority leader at the time. At the meeting, Elon Musk warned of the "civilizational risks" posed by AI.
In the aftermath, the Biden administration started working with the biggest AI companies to voluntarily test their systems for safety and security weaknesses, and mandated safety standards for the government. States like California introduced legislation to regulate the technology with safety standards. And publishers, authors and actors sued tech companies over their use of copyrighted materials to train their AI models.
(The New York Times has sued OpenAI and its partner Microsoft, accusing them of copyright infringement of news content related to AI systems.)
After Mr. Trump won the election in November, tech companies and their leaders immediately ramped up their lobbying. Google, Meta and Microsoft each donated $1 million to Mr. Trump's inauguration, as did Mr. Altman and Apple's Tim Cook. Meta's Mark Zuckerberg threw an inauguration party and has met with Mr. Trump numerous times. Mr. Musk, who has his own AI company, xAI, has spent nearly every day at the president's side.
In turn, Mr. Trump has hailed AI announcements, including a plan by OpenAI, Oracle and SoftBank to invest $100 billion in AI data centers, huge buildings full of servers that provide computing power.
"We have to lean into the AI future with optimism and hope," Vice President JD Vance told government officials and tech leaders last week.
At an AI summit in Paris last month, Mr. Vance also called for "pro-growth" AI policies and warned world leaders against "excessive regulation" that could "kill a transformative industry just as it's taking off."
Now tech companies and others affected by AI are submitting responses to the president's second AI executive order, "Removing Barriers to American Leadership in Artificial Intelligence," which mandated the development of an AI growth plan within 180 days. Hundreds of them have filed comments with the National Science Foundation and the Office of Science and Technology Policy to influence that policy.
OpenAI submitted 15 pages of comments, asking the federal government to pre-empt states from creating AI laws. The San Francisco-based company also invoked DeepSeek, the Chinese chatbot created for a small fraction of the cost of U.S.-developed chatbots, describing it as an important marker of the state of this competition with China.
If Chinese developers "have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over," OpenAI said, urging the U.S. government to free up data to feed into its systems.
Many tech companies also argued that their use of copyrighted works to train AI models was legal and that the administration should take their side. OpenAI, Google and Meta said they believed they had lawful access to copyrighted works such as books, films and art for training.
Meta, which has its own AI model, called Llama, pressed the White House to issue an executive order or other action to "clarify that the use of publicly available data to train models is unequivocally fair use."
Google, Meta, OpenAI and Microsoft said their use of copyrighted data was legal because the information was transformed in the process of training their models and was not being used to replicate the intellectual property of rights holders. Actors, authors, musicians and publishers have argued that the tech companies should compensate them for obtaining and using their works.
Some tech companies also lobbied the Trump administration to support "open source" AI, which essentially makes computer code freely available to copy, edit and reuse.
Meta, which owns Facebook, Instagram and WhatsApp, pushed hardest with policy recommendations in favor of open source, which other AI companies, such as Anthropic, have characterized as increasing vulnerability to security risks. Meta said open-source technology speeds up AI development and can help start-ups catch up with more established companies.
Andreessen Horowitz, a Silicon Valley venture capital firm with stakes in dozens of AI start-ups, also called for support of open-source models, which many of its companies rely on to create AI products.
And Andreessen Horowitz made the bluntest arguments against new regulations for AI: existing laws on safety, consumer protection and civil rights are sufficient, the firm said.
"Prohibit the harm and punish the bad actors, but do not require developers to jump through onerous regulatory hoops based on speculative fear," Andreessen Horowitz said in its comments.
Others continued to warn that AI needed to be regulated. Civil rights groups called for audits of systems to ensure they do not discriminate against vulnerable populations in housing and employment decisions.
Artists and publishers said AI companies needed to disclose their use of copyrighted material, and asked the White House to reject the tech industry's arguments that its unauthorized use of intellectual property to train models was within the bounds of copyright law. The Center for AI Policy, a think tank and lobbying group, called for third-party audits of systems for national security vulnerabilities.
"In any other industry, if a product harms or negatively affects consumers, that product is defective, and the same standards should apply to AI," said KJ Bagchi, vice president of the Center for Civil Rights and Technology, which submitted one of the requests.