
A group of researchers has developed an artificial intelligence (AI) system that protects users from unwanted facial scans by bad actors. The AI model, called Chameleon, uses a special masking technique to generate masks that are embedded in the image without degrading the visual quality of the protected photo. The researchers also claim that the model is resource-efficient and can run even on limited processing power. Chameleon has not yet been released to the public, but the researchers say they intend to publish its code soon.
Researchers reveal Chameleon AI model
Georgia Tech researchers detailed the AI model in a research paper published on the online preprint server arXiv. The tool adds an invisible mask to the face in an image so that facial recognition tools can no longer identify it. This way, users can protect their identities from facial data scanning attempts by bad actors and AI data-scraping bots.
“Privacy-preserving data sharing and analytics like Chameleon will help to advance governance and responsible adoption of AI technology and stimulate responsible science and innovation,” said Ling Liu, a professor of data and intelligence-driven computing at the Georgia Tech School of Computer Science.
Chameleon uses a special masking technique called the Personalized Privacy Protection (P-3) Mask. Once the mask is applied, the image can no longer be matched by facial recognition tools, as the scan will identify the face as belonging to someone else.
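To make the intended effect concrete, here is a minimal sketch (not the authors' code) of the check a protected photo should pass: a face matcher that recognizes the original photo should fail to link the masked version to the same identity. The `embedder` model, the cosine-similarity test, and the 0.6 threshold are illustrative assumptions, not details from the paper.

```python
import torch.nn.functional as F

def is_same_person(embedder, photo_a, photo_b, threshold=0.6):
    """Return True if a face matcher would treat the two photos as one identity."""
    # embedder maps a batched image tensor (1, C, H, W) to a face embedding (1, D)
    emb_a = F.normalize(embedder(photo_a), dim=-1)
    emb_b = F.normalize(embedder(photo_b), dim=-1)
    return (emb_a * emb_b).sum(dim=-1).item() > threshold

# Expected behavior for a protected photo (hypothetical variables):
#   is_same_person(embedder, reference, original)            -> True
#   is_same_person(embedder, reference, original + p3_mask)  -> False
```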
Although facial masking tools already exist, the Chameleon AI model innovates in two areas: resource optimization and image quality preservation. For the former, the researchers highlighted that instead of generating a separate mask for each photo, the tool creates a single mask per user from a handful of that user's facial photos. As a result, only limited processing power is needed to produce an invisible mask.
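The sketch below illustrates the one-mask-per-user idea described above: a single additive perturbation is optimized across several photos of the same person so that a face embedder no longer matches their true identity. The embedder, optimizer, step count, and perturbation budget are assumptions for illustration and do not reproduce the paper's actual recipe.

```python
import torch
import torch.nn.functional as F

def make_user_mask(embedder, user_photos, steps=200, lr=0.01, budget=8 / 255):
    """Optimize ONE perturbation reusable across all of this user's photos."""
    photos = torch.stack(user_photos)                     # (N, C, H, W), values in [0, 1]
    with torch.no_grad():
        true_emb = F.normalize(embedder(photos), dim=-1)  # identity to hide
    mask = torch.zeros_like(photos[0], requires_grad=True)
    opt = torch.optim.Adam([mask], lr=lr)

    for _ in range(steps):
        masked = (photos + mask).clamp(0, 1)
        emb = F.normalize(embedder(masked), dim=-1)
        # push the masked embeddings away from the user's true identity
        loss = F.cosine_similarity(emb, true_emb, dim=-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # keep the perturbation small so it stays visually unobtrusive
        with torch.no_grad():
            mask.clamp_(-budget, budget)
    return mask.detach()
```

Because the optimization runs once per user rather than once per photo, the cost of protecting a large photo collection stays low, which matches the resource-efficiency claim above.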
The second challenge, preserving the image quality of protected photos, is even trickier. To solve it, the researchers used a perceptibility optimization technique in Chameleon. It renders the mask automatically, without any manual intervention or parameter tuning, so the AI does not degrade the overall image quality.
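As a rough stand-in for that perceptibility optimization, the objective can include a penalty on how visible the mask is. The sketch below uses a simple mean-squared pixel difference with a fixed weight `lam`; both choices are assumptions for illustration, whereas the paper tunes visual quality automatically rather than by hand.

```python
import torch.nn.functional as F

def chameleon_style_loss(embedder, photos, true_emb, mask, lam=10.0):
    """Identity-evasion term plus a penalty that keeps the mask hard to see."""
    masked = (photos + mask).clamp(0, 1)
    emb = F.normalize(embedder(masked), dim=-1)
    # lower similarity to the true identity = better evasion
    evasion = F.cosine_similarity(emb, true_emb, dim=-1).mean()
    # perceptibility proxy: mean squared pixel change introduced by the mask
    visibility = F.mse_loss(masked, photos)
    return evasion + lam * visibility
```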
The researchers say the AI model is an important step toward privacy protection, and they plan to release Chameleon's code publicly on GitHub soon. Developers will then be able to use the open-source AI model to build applications.