
Apple researchers have developed a new artificial intelligence (AI) framework that allows non-humanoid robots to express their intentions and interact with humans. The framework, known as ELEGNT, focuses on nonverbal communication through movement, posture, and gesture. Beyond task completion, it specifically allows robots to express intention, attention, and emotion through their actions. The researchers claim this makes working with robots a more immersive and engaging experience for users. The framework was also tested with human participants.
Apple develops framework to let robots express themselves through movement
In a post, the Cupertino-based tech giant detailed the new AI framework. It targets non-humanoid robots, or robots that do not resemble humans (lacking limbs and a head). While the familiar appearance of humanoid robots can make it easier to engage users, interacting with non-anthropomorphic robots can often feel awkward.
To bridge this gap, the ELEGNT framework trains robots to use movements, postures, and gestures to express intentions and emotions. These movements have no impact on the robot's task completion; they exist purely to make interaction with humans more immersive and engaging.
The framework covers the hardware design process and adds software-based training. Apple researchers outlined a set of interaction scenario storyboards comprising sequences of behaviors across both functional and socially oriented tasks. "Our findings suggest that expression-driven movement can significantly improve user engagement and perceived robot quality," the researchers said. The paper detailing the work has been published on the online preprint server arXiv.
The researchers demonstrated the expressive capabilities of non-humanoid robots using a lamp-like prototype robot. In a video shared by the company, the lamp (reminiscent of Pixar's Luxo Jr. character) can follow gestures to direct light at an indicated spot. Its actions also give the impression that the robot is trying to understand a command, nodding in agreement and completing the task promptly.
Apple researchers also tested the robot's movement and expressive abilities. Testing included functional tasks as well as social tasks, such as playing music or holding a conversation. One participant noted in the paper that without the entertaining nature of the movements, interacting with the lamp robot would feel "annoying rather than welcoming and appealing."