
Meta is researching and developing new AI models that may have potential uses in Web3 applications. The Facebook parent company has released an AI model called Meta Motivo that controls the body movements of digital avatars and is expected to improve the overall Metaverse experience. The newly unveiled model is designed to make the interaction between body motion and avatars in the Metaverse ecosystem more lifelike.
The company describes it as a "first-of-its-kind behavioral foundation model." The AI model can enable virtual human avatars to complete a variety of complex whole-body tasks while making virtual physics in the Metaverse feel more seamless.
Through unsupervised reinforcement learning, Meta enables Motivo to perform a range of tasks in complex environments. In a blog post, the company said it deployed a novel algorithm to train the model on an unlabeled dataset of motions, helping it adopt human-like behaviors while retaining zero-shot inference capabilities.
Announcing the launch of Motivo on X, Meta shared a short video demonstration showing how the model integrates with virtual avatars. The clip shows humanoid avatars performing dance moves and kicking a football as whole-body tasks. Meta says it is using unsupervised reinforcement learning to elicit these "human-like behaviors" in virtual avatars as part of its attempt to make them look more realistic.
New from Meta FAIR – Meta Motivo is a first-of-its-kind behavioral foundation model for controlling virtual physics-based humanoid agents to complete a variety of complex whole-body tasks.
This model is able to express human-like behaviors and achieve performance… pic.twitter.com/yguu5jzglw
– AI at Meta (@AIatMeta) December 13, 2024
The company said Motivo can solve a range of whole-body control tasks, including motion tracking, goal pose reaching, and reward optimization, without any additional training.
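To make that "no additional training" claim concrete, here is a minimal Python sketch of the general idea: a single frozen policy is "prompted" with different task specifications (a tracked motion, a goal pose, or a reward signal) and acts zero-shot. It is purely illustrative; the class and method names (BehavioralFoundationModel, embed_task, act) are assumptions for this example and do not reflect Meta Motivo's actual API.

```python
# Illustrative sketch only. Names and structure are hypothetical and are NOT
# Meta Motivo's real interface; they only mirror the reported idea of one
# pretrained policy handling several task types without fine-tuning.
import numpy as np

class BehavioralFoundationModel:
    """Toy stand-in for a single pretrained whole-body control policy."""

    def __init__(self, obs_dim=10, act_dim=6, embed_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        # "Pretrained" weights stay frozen for every task below.
        self.W_obs = rng.normal(size=(act_dim, obs_dim))
        self.W_task = rng.normal(size=(act_dim, embed_dim))
        self.embed_dim = embed_dim

    def embed_task(self, kind, payload):
        # Map different task specifications into one shared embedding space.
        flat = np.asarray(payload, dtype=float).ravel()
        z = np.zeros(self.embed_dim)
        z[: min(self.embed_dim, flat.size)] = flat[: self.embed_dim]
        offsets = {"motion_tracking": 0.1, "goal_pose": 0.2, "reward": 0.3}
        return z + offsets[kind]

    def act(self, observation, task_embedding):
        # Same frozen weights for every task; only the prompt changes.
        return np.tanh(self.W_obs @ observation + self.W_task @ task_embedding)


model = BehavioralFoundationModel()
obs = np.zeros(10)

# Three different task prompts, with no retraining between them.
for kind, payload in [("motion_tracking", np.ones(16)),
                      ("goal_pose", np.arange(16)),
                      ("reward", [1.0])]:
    z = model.embed_task(kind, payload)
    action = model.act(obs, z)
    print(kind, action[:3])
```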
Reality Labs is the internal Meta unit carrying out the company's Metaverse plans, and it has recorded continuous losses since 2022. Despite this pattern, Zuckerberg continues to bet on the Metaverse, testing updated technology to fine-tune the virtual experience.
Earlier this year, Meta showed off a Hyperscape demo that turns smartphone cameras into a portal to realistic Metaverse environments. The tool enables smartphones to scan a physical space and convert it into a lifelike Metaverse backdrop.
In June, Meta split its Reality Labs team into two divisions, one tasked with working on the Metaverse-focused Quest headsets and the other on wearable hardware that Meta might launch in the future. The move is intended to streamline how the Reality Labs team spends its time developing updated AI and Web3 technologies.