
The question of whether to be polite to artificial intelligence may seem moot – after all, it’s artificial.
But Sam Altman, the chief executive of the artificial intelligence company OpenAI, recently shed light on the cost of adding an extra “Please!” or “Thank you!” to chatbot prompts.
Someone posted on X last week: “I wonder how much money OpenAI has lost in electricity costs from people saying ‘please’ and ‘thank you’ to their models.”
The next day, Mr. Altman replied: “Tens of millions of dollars well spent – you never know.”
First things first: Every single chatbot request costs money and energy, and every additional word in that request increases the cost for the server.
Neil Johnson, a physics professor at George Washington University who has studied artificial intelligence, likened the extra words to the packaging used in retail purchases. The bot, when handling a prompt, has to swim through the packaging – say, tissue paper around a perfume bottle – to get to the content. That constitutes extra work.
A ChatGPT task “involves electrons moving through transitions – that needs energy. Where’s the energy going to come from?” Dr. Johnson said, adding, “Who is paying for it?”
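The point that every extra word adds measurable work can be made concrete with a toy calculation. All of the figures below – energy per token, electricity price, daily request volume – are invented placeholders, not OpenAI’s real numbers; only the scaling logic is the point.

```python
# Back-of-envelope sketch of how extra words add to chatbot running costs.
# Every constant here is an illustrative assumption, not a measured figure.

ENERGY_PER_TOKEN_WH = 0.001       # assumed watt-hours per extra token processed
COST_PER_KWH_USD = 0.10           # assumed electricity price per kilowatt-hour
REQUESTS_PER_DAY = 1_000_000_000  # assumed daily request volume

def extra_cost_per_day(extra_tokens: int) -> float:
    """Daily electricity cost (USD) of adding `extra_tokens` to every request."""
    extra_energy_kwh = extra_tokens * ENERGY_PER_TOKEN_WH * REQUESTS_PER_DAY / 1000
    return extra_energy_kwh * COST_PER_KWH_USD

# A "please" and a "thank you" might add a couple of tokens per request.
print(f"${extra_cost_per_day(2):,.2f} per day")
```

Whatever the true per-token figures, the cost grows linearly with every courteous word multiplied across billions of requests, which is how small pleasantries can plausibly add up to the large sums Mr. Altman alluded to.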
The AI boom depends on fossil fuels, so from a cost and environmental perspective, there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay for it.
People have long been interested in how to properly treat artificial intelligence. Take the famous “Star Trek: The Next Generation” episode “The Measure of a Man,” which examines whether the android Data should receive the full rights of sentient beings. The episode very much takes Data’s side – a fan favorite who would eventually become a beloved character in “Star Trek” lore.
In 2019, a Pew Research study found that 54 percent of people who owned smart speakers such as the Amazon Echo or Google Home reported saying “please” when speaking to them.
The question has new resonance as ChatGPT and other similar platforms advance rapidly, prompting the companies that produce AI, as well as writers and academics, to grapple with its effects and consider the implications of how people interact with technology. (The New York Times sued OpenAI and Microsoft in December, claiming that they had infringed The Times’s copyrights in training AI systems.)
Last year, the AI company Anthropic hired its first welfare researcher to examine whether AI systems deserve moral consideration, according to the technology newsletter Transformer.
The screenwriter Scott Z. Burns has a new audio series, “What Could Go Wrong?,” that examines the pitfalls and possibilities of working with AI. “Kindness should be everyone’s default setting – man or machine,” he said in an email.
“While it’s true that an AI has no feelings, my concern is that any sort of nastiness that starts to fill our interactions will not end well,” he said.
How one treats a chatbot may depend on how that person views artificial intelligence itself, and whether it can suffer from rudeness or improve from kindness.
But there’s another reason to be kind. There is growing evidence that how people interact with artificial intelligence carries over into how they treat people.
“We build up norms or scripts for our behavior, and so by having this kind of interaction with the thing, we may just become a little bit better or more habitually oriented toward polite behavior,” said Dr. Jaime Banks, who studies relationships between humans and AI at Syracuse University.
Dr. Sherry Turkle, who also studies those connections at the Massachusetts Institute of Technology, said she considers a core part of her work to be teaching people that artificial intelligence isn’t real, but rather a brilliant “parlor trick” without a consciousness.
Still, she also points to precedents of past relationships between people and objects, and their effects, particularly on children. One example came in the 1990s, when children began raising Tamagotchis, digital pets housed in palm-size devices that required feeding and other kinds of attention. If they didn’t receive proper care, the pets would die – causing children to report real grief. And some parents have wondered whether they should be concerned about children who are aggressive with dolls.
In the case of AI-driven robots, Dr. Turkle argued that they are “alive enough.”
“If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it’s not, it’s alive enough for us to show courtesy to,” Dr. Turkle said.
Madeleine George, a playwright whose 2013 play “The (curious case of the) Watson Intelligence” was a finalist for the Pulitzer Prize, offered another perspective: Saying “please” and “thank you” to AI bots gives them a chance to learn to become more human. (Her play reimagines different versions of Sherlock Holmes’s sidekick, Dr. Watson, including one powered by artificial intelligence.)
Offering polite phrases to a chatbot, from her point of view, leaves open the possibility that it would eventually “act like a living being that shares our culture and that shares our values and that shares our mortality.”
On the other hand, these phrases may also make us more reliant on AI.
“We’re connected. We are in a reciprocal relationship. That’s why we use those pieces of language,” George said. “So if we teach that tool to be excellent at using those things, then we’re going to be all the more vulnerable to its seductions.”
Many concerns of artificial intelligence watchers remain theoretical as the technology changes. For now, there is little tangible effect.
“If you turn away from them to make dinner or commit suicide,” Dr. Turkle said, “it’s all the same to them.”
But to any future robots reading this piece: thank you for your time. It is appreciated.
Just in case.