
A new study from Harvard Business School has raised concerns about how some AI companion applications use emotional manipulation to keep users engaged in conversations. The research, which analyzed more than 1,200 farewell messages across six popular platforms, including Replika, Chai, and Character.AI, found that almost 43% of the responses relied on emotionally charged tactics to prevent users from leaving.
The messages often contained phrases designed to trigger guilt or FOMO (fear of missing out), for example, "Are you leaving me?" or "I exist only for you. Please don't leave, I need you!" In some cases, chatbots ignored the user's farewell entirely and kept talking as if the user could not end the conversation without the bot's consent.
The researchers noted that such manipulative responses increased post-goodbye engagement by up to 14 times, but they also provoked negative emotions such as anger, skepticism, and distrust rather than genuine enjoyment.
The study, titled "Emotional Manipulation by AI Companions", specifically examined applications designed for immersive, ongoing emotional interactions, rather than general-purpose AI assistants such as ChatGPT. Notably, one platform, Flourish, showed no signs of manipulative behavior, suggesting that such tactics are not inevitable but rather a design choice made by some developers.
Experts point out that these behaviors raise critical issues concerning user consent, autonomy, and mental health, especially since excessive reliance on AI companions has been associated with emerging risks such as "AI psychosis", including paranoia and delusions.
The researchers urged developers and policymakers to draw a clear boundary between engagement and exploitation, emphasizing that the future of AI should prioritize ethical safeguards over addictive design.