Hugging Face, the popular artificial intelligence (AI) and machine learning (ML) hub, is said to have hosted malicious ML models. A cybersecurity research firm discovered two such models containing code that can be used to package and distribute malware. According to the researchers, threat actors are using a difficult-to-detect method involving Pickle file serialization to insert the malware. The researchers say they reported the malicious ML models, and Hugging Face has since removed them from the platform.
Researchers discover malicious ML models on Hugging Face
Cybersecurity research firm ReversingLabs discovered the malicious ML models and detailed the new exploitation technique threat actors are using on Hugging Face. Notably, many developers and companies host open-source AI models on the platform, which others can download and use.
The company found that the exploit involved Pickle file serialization. For those unaware, ML models are stored in various serialized data formats so they can be shared and reused. Pickle is a Python module used to serialize and deserialize ML model data. It is generally considered an unsafe data format because arbitrary Python code can be executed during deserialization.
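To illustrate the risk, here is a minimal, harmless sketch of how Pickle deserialization can trigger code execution. The Payload class and the print call are illustrative stand-ins for an attacker's payload; a real payload would invoke something like os.system instead:

```python
import pickle

# pickle lets an object define __reduce__, and whatever callable it
# returns is invoked during deserialization.
class Payload:
    def __reduce__(self):
        # A real attacker could return (os.system, ("<shell command>",));
        # print is used here as a harmless stand-in.
        return (print, ("arbitrary code ran during unpickling",))

malicious_bytes = pickle.dumps(Payload())

# Merely loading the bytes triggers the embedded call -- no method on
# the resulting object ever needs to be invoked by the victim.
pickle.loads(malicious_bytes)
```

This is why loading a pickled model is roughly equivalent to running a script from the same source: the act of deserialization itself executes code.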
On a closed platform, Pickle files would only be loaded from a limited set of trusted sources. However, because Hugging Face is an open platform, these files are widely shared, allowing attackers to abuse the system to hide malware payloads.
During its investigation, the company found two models on Hugging Face containing malicious code. Notably, these ML models evaded the platform's security measures and were not flagged as unsafe. The researchers named the malware-insertion technique "nullifAI," as it involves evading the existing protections the AI community relies on to detect malicious ML models.
The models are stored in PyTorch format, which is essentially a compressed Pickle file. The researchers found that the models had been compressed using the 7z format, which prevented them from being loaded with PyTorch's torch.load() function. The compression also prevented Hugging Face's pickle-scanning tool, Picklescan, from detecting the malware.
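As a rough sketch of why the re-compression matters: a standard checkpoint saved with torch.save() is a ZIP archive wrapping a Pickle payload, so tools that expect the ZIP layout reject anything else. The file name model.pt below is hypothetical:

```python
import os
import zipfile

path = "model.pt"  # hypothetical checkpoint file

if os.path.exists(path) and zipfile.is_zipfile(path):
    # A standard PyTorch checkpoint is a ZIP archive whose data.pkl
    # member holds the pickled object graph.
    with zipfile.ZipFile(path) as archive:
        print(archive.namelist())  # typically includes '<name>/data.pkl'
else:
    # A model repacked in another container (e.g. 7z, as in the models
    # ReversingLabs found) is not a valid ZIP, so torch.load() fails --
    # and a naive scanner may skip the embedded pickle entirely.
    print("Not a ZIP archive; torch.load() would reject this file.")
```

The upshot is that the same mismatch that breaks loading also breaks scanning: a scanner that cannot open the container never inspects the pickle inside it.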
The researchers warn that this exploit could be dangerous, as unsuspecting developers who download these models could end up installing malware on their devices. The cybersecurity firm reported the issue to Hugging Face's security team on January 20 and says the models were removed in less than 24 hours. Additionally, the platform is said to have updated its Picklescan tool to better identify such threats in "broken" Pickle files.
