
Google updated its artificial intelligence (AI) principles on Tuesday, the document that outlines the company's vision for the technology. The Mountain View-based tech giant had previously listed four application areas for which it would not design or deploy AI. These included weapons, surveillance, technologies that cause overall harm, and those that violate human rights. However, the newer version of its AI principles removes this entire section, suggesting that the tech giant may enter these previously off-limits areas in the future.
Google updates its AI principles
The company first released its AI principles in 2018, a time when the technology was not yet a mainstream phenomenon. Since then, the company has updated the document regularly, but for years it retained a section listing harmful areas where it would not build AI-powered technology. On Tuesday, however, that section was found to have been completely removed from the page.
An archived version of the webpage on the Wayback Machine, captured last week, still shows the section titled "Applications we will not pursue". Under this heading, Google listed four areas. The first was technologies that "cause or are likely to cause overall harm", and the second was weapons or similar technologies that directly facilitate injury to people.
In addition, the tech giant had promised not to use AI for surveillance technologies that violate internationally accepted norms, or for technologies that contravene international law and human rights. The removal of these restrictions has raised concerns that Google may be considering entering these areas.
In a separate blog post, Demis Hassabis, co-founder and CEO of Google DeepMind, and James Manyika, the company's senior vice president of technology and society, explained the reasons for the change.
The executives cited the rapid growth of the AI sector, increasing competition, and a "complex geopolitical landscape" as some of the reasons Google updated its AI principles.
"We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights. And we believe that companies, governments and organizations sharing these values should work together to create AI that protects people, promotes global growth and supports national security," the executives wrote.