Tuesday, February 11, 2025

Malicious Machine Learning Models Discovered on Hugging Face: Report


Hugging Face, the artificial intelligence (AI) and machine learning (ML) hub, is said to contain malicious ML models. A cybersecurity research firm discovered two such models containing code that can be used to package and distribute malware to those who download these files. As per the researchers, threat actors are using a hard-to-detect method involving Pickle file serialisation to insert malicious software. The researchers claimed to have reported the malicious ML models, and Hugging Face has removed them from the platform.

Researchers Discover Malicious ML Models on Hugging Face

ReversingLabs, a cybersecurity research firm, found the malicious ML models and detailed the new exploit being used by threat actors on Hugging Face. Notably, numerous developers and companies host open-source AI models on the platform that can be downloaded and used by others.

The firm discovered that the modus operandi of the exploit involves Pickle file serialisation. For the unaware, ML models are saved in a variety of data serialisation formats so that they can be shared and reused. Pickle is a Python module used for serialising and deserialising ML model data. It is generally considered an unsafe data format, as Python code can be executed during the deserialisation process.
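To see why Pickle is considered unsafe, consider the following minimal sketch (not the actual payload found by the researchers): a class can define `__reduce__` to tell Pickle to call an arbitrary function during deserialisation, so code runs the moment the file is loaded.

```python
import pickle

# A class whose __reduce__ tells Pickle to invoke an arbitrary callable
# during deserialisation. Real payloads would use something like
# os.system; eval on a harmless expression stands in here to show the
# mechanism.
class Payload:
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())

# Merely loading the bytes runs eval("6 * 7") -- no method call needed.
result = pickle.loads(blob)
print(result)  # 42
```

No attribute access or function call on the loaded object is required; deserialisation itself triggers the embedded callable, which is what makes malicious Pickle files hard to use safely.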

On closed platforms, Pickle files have access to limited data that comes from trusted sources. However, since Hugging Face is an open-source platform, these files are widely shared, allowing attackers to abuse the system to hide malware payloads.

During the investigation, the firm found two models on Hugging Face that contained malicious code. However, these ML models were said to evade the platform's security measures and were not flagged as unsafe. The researchers named the malware-insertion technique "nullifAI" as "it involves evading existing protections in the AI community for an ML model."

These models were stored in PyTorch format, which is essentially a compressed Pickle file. The researchers found that the models were compressed using the 7z format, which prevented them from being loaded using PyTorch's "torch.load()" function. This compression also prevented Hugging Face's Picklescan tool from detecting the malware.
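A standard PyTorch checkpoint is a ZIP archive wrapping a pickled payload (a `data.pkl` entry), which is the layout `torch.load()` and scanners expect. The sketch below, a simplified assumption about that layout (the file name `model.pt` is hypothetical), shows why a model re-archived with 7z looks like neither a valid checkpoint nor a scannable ZIP:

```python
import zipfile

def looks_like_torch_zip(path: str) -> bool:
    """Rough check for the ZIP-of-Pickle layout of a PyTorch checkpoint.

    A 7z archive is not a ZIP file, so it fails the first test -- much as
    the re-compressed models could not be opened by torch.load() and, per
    the report, slipped past Picklescan.
    """
    if not zipfile.is_zipfile(path):
        return False  # e.g. a 7z archive: not a ZIP container at all
    with zipfile.ZipFile(path) as zf:
        # Checkpoints carry their pickled tensors in a data.pkl entry.
        return any(name.endswith("data.pkl") for name in zf.namelist())
```

This is only an illustration of the container mismatch, not a reconstruction of Picklescan's actual logic.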

The researchers claimed that this exploit can be dangerous, as unsuspecting developers who download these models will unknowingly end up installing the malware on their devices. The cybersecurity firm reported the issue to the Hugging Face security team on January 20 and claimed that the models were removed in less than 24 hours. Additionally, the platform is said to have made changes to the Picklescan tool to better identify such threats in "broken" Pickle files.
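The defensive idea behind tools like Picklescan can be sketched with Python's own documented mechanism for restricting Pickle: subclass `pickle.Unpickler` and override `find_class` so that any attempt to resolve a global (the hook payloads use to smuggle in callables like `eval` or `os.system`) is rejected instead of executed. This is a minimal stdlib sketch, not Picklescan's actual implementation:

```python
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    """Refuse to resolve any global during unpickling.

    Plain data (dicts, lists, strings, numbers) loads fine; anything that
    tries to import a module-level callable is blocked before it can run.
    """
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"blocked global during unpickling: {module}.{name}"
        )

def safe_loads(blob: bytes):
    # Hypothetical helper name; wraps the restricted unpickler.
    return NoGlobalsUnpickler(io.BytesIO(blob)).load()
```

In practice, developers are also advised to prefer formats that cannot execute code at load time rather than to rely on scanning alone.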


