Gridinsoft Blog – https://gridinsoft.com/blogs
Welcome to the Gridinsoft Blog, where we share posts about security solutions to keep you, your family, and your business safe.

Slopsquatting: New Malware Spreading Technique Targeting AI Assisted Developers
https://gridinsoft.com/blogs/slopsquatting-malware/
Thu, 24 Apr 2025 09:01:55 +0000

The post Slopsquatting: New Malware Spreading Technique Targeting AI Assisted Developers appeared first on Gridinsoft Blog.

Slopsquatting is a new type of cyber threat that takes advantage of mistakes made by AI coding tools, particularly LLMs that can “hallucinate”. In this post, we’ll break down this new type of attack, find out why it can occur, dispel some myths, and figure out how to prevent it.

Slopsquatting – New Techniques Against AI Assisted Devs

Slopsquatting is a supply chain attack that leverages AI-generated “hallucinations” — instances where AI coding tools recommend non-existent software package names. The term draws parallels with typosquatting, where attackers register misspelled domain names to deceive users.

In slopsquatting, however, the deception stems from AI errors rather than human mistakes. The term combines “slop”, referring to low-quality or error-prone AI output, and “squatting”, the act of claiming these hallucinated package names for malicious purposes.

It is a rather unexpected cybersecurity threat that exploits the limitations of AI-assisted coding tools, particularly large language models. As developers increasingly rely on these tools to streamline coding processes, the risk of inadvertently introducing malicious code into software projects grows. Hackers can then create malicious packages with these fake names and upload them to public code repositories.

Mechanics of Slopsquatting

The process of slopsquatting unfolds in several stages. First, LLMs, such as ChatGPT, GitHub Copilot, or open-source models like CodeLlama, generate code or suggest dependencies. In some cases, they recommend package names that do not exist in public repositories like PyPI or npm. These hallucinated names often sound plausible, resembling legitimate libraries (e.g., “secure-auth-lib” instead of an existing “authlib”).

The Python Package Index (PyPI), along with npm, is leveraged by cybercriminals with alarming frequency. Read our report on one of the latest PyPI typosquatting incidents.

Stage 2 begins when developers, fully trusting the AI's recommendations, run the code, assuming all the packages it references are legitimate. Normally, this ends in build failures or broken functionality in parts of the resulting program. Developers may waste time debugging errors, hunting for typos, or trying to figure out why a dependency isn't resolving, when the real cause is the AI assistant hallucinating packages that do not exist.

In the worst-case scenario, which is becoming more and more prevalent, the hallucinated name is already taken by a malicious package. Threat actors specifically register false names that appear in AI-generated code, or pick names similar to what an AI is likely to generate, hoping to catch victims in the future. As a result, what looks like a flawless build process in fact installs malware on the developer's system.

The bad part is that these hallucinations are not random, and are thus predictable. One study analyzed 16 LLMs across 576,000 generated Python and JavaScript code samples and found that 19.7% (about 205,000) of recommended packages were non-existent. Notably, 43% of these hallucinated packages reappeared in 10 successive runs of the same prompt, and 58% appeared more than once, suggesting a level of predictability that attackers can exploit.

Python vs. JavaScript hallucination rates

This is where the fun begins. Cybercriminals identify these hallucinated package names, either by analyzing AI outputs or predicting likely hallucinations based on patterns. They then create malicious packages with these names and upload them to public repositories. This is the worst version of the scenario described in the two paragraphs above.

As a result, this introduces malware into developers' projects, which can compromise software security, steal data, or disrupt operations. In some cases, it can even serve as a backdoor for future attacks, enable lateral movement across systems, or lead to the compromise of an entire software supply chain.

Prevalence and Variability Across AI Models

The frequency of package hallucinations varies widely depending on the AI model. Open-source models, such as CodeLlama and WizardCoder, tend to hallucinate more often, with an average hallucination rate of 21.7%. CodeLlama, for example, hallucinated over 33% of the time. Commercial models like GPT-4 Turbo perform much better, with a hallucination rate of just 3.59%. In general, GPT models are about four times less likely to hallucinate than open-source ones.

Hallucination rates of recent vs. all-time data sets

When it comes to how believable these hallucinations are, around 38% of the fake package names are moderately similar to real ones, and only 13% are simple typos. That makes them pretty convincing to developers. So, even though commercial models are more reliable, no AI is completely immune to hallucinations, and the more convincing these fakes are, the bigger the risk.

Potential Impact

Even as companies downsize staff in favor of AI, slopsquatting shows that a complete replacement of humans by artificial intelligence is unlikely to happen anytime soon. If a widely used AI tool keeps recommending a hallucinated package, attackers could use that to spread malicious code to numerous developers, making the attack far more effective.

Another issue is trust: developers who rely on AI tools might not always double-check whether the suggested packages are legit, especially when they are in a hurry. That trust makes them more vulnerable.

While there haven't been any confirmed slopsquatting attacks in the wild as of April 2025, the technique is seen as a real threat for the future. It's similar to how typosquatting started out as just a theoretical concern and then became a widespread problem. The risk is made worse by rushed security review processes, something OpenAI has been criticized for. As AI tools become a bigger part of development workflows, the potential damage from slopsquatting keeps growing.

Preventive Measures Against Slopsquatting

To reduce the risk of slopsquatting, developers and organizations can take several practical steps. First, it’s important to verify any package recommended by an AI — check if it actually exists in official repositories and review things like download numbers and the maintainer’s history.
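As an illustration, a package's existence can be checked against PyPI's public JSON metadata endpoint (https://pypi.org/pypi/NAME/json) before anything is installed. This is a minimal standard-library sketch; the helper names are our own, not part of any official tool:

```python
import json
import urllib.error
import urllib.request


def pypi_metadata_url(name: str) -> str:
    """Build the URL of PyPI's public JSON metadata endpoint for a package."""
    return f"https://pypi.org/pypi/{name}/json"


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published PyPI package, False on a 404."""
    try:
        with urllib.request.urlopen(pypi_metadata_url(name), timeout=10) as resp:
            json.load(resp)  # parseable metadata confirms the package exists
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
```

Keep in mind that mere existence proves nothing on its own: a slopsquatter may have already registered the hallucinated name. The same JSON response also carries release history and maintainer details, which are worth reviewing alongside download counts.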

Good code review practices are essential too. Catching odd or incorrect suggestions before the code goes live can save a lot of headaches. On top of that, developers should be trained to stay aware of the risks that come with AI hallucinations and not blindly trust everything the AI spits out.

Having runtime security measures in place can help detect and stop malicious activity from any compromised dependencies that do sneak through. I recommend GridinSoft Anti-Malware as a reliable solution for personal security: its multi-component detection system will find and eliminate even the most elusive threats, regardless of how they are introduced. Download it by clicking the banner below and get yourself proper protection today.

Python JSON Logger Vulnerability Exposes Millions of Users
https://gridinsoft.com/blogs/python-json-logger-vulnerability/
Tue, 11 Mar 2025 08:13:57 +0000

The post Python JSON Logger Vulnerability Exposes Millions of Users appeared first on Gridinsoft Blog.

The CVE-2025-27607 vulnerability was discovered in Python JSON Logger. Its exploitation required no user interaction beyond a standard dependency installation. Attackers could hijack the package name, upload a malicious version, and execute arbitrary code on affected systems. Users are advised to update to version 3.3.0, which addresses the issue.

CVE-2025-27607 Overview

Numerous reports point to a newly discovered critical security vulnerability, identified as CVE-2025-27607, affecting Python JSON Logger, a popular JSON formatting library for Python logging. The vulnerability has a CVSS score of 8.8 and was exploitable between December 30, 2024, and March 4, 2025. It is a remote code execution flaw, which is among the most dangerous classes of vulnerability one can encounter.

It is caused by a missing dependency, specifically the msgspec-python313-pre package. The package was deleted by its owner, leaving the name open for malicious actors to claim. The issue was resolved in version 3.3.0 of Python JSON Logger, released to patch the vulnerability. Some estimates expect over 12.5 million services to be affected, so updating to the patched version is a necessity, not an option.

Python JSON Logger Vulnerability Technical Details

CVE-2025-27607 arises due to a supply chain attack vector in Python’s package ecosystem, specifically PyPI (Python Package Index). The Python JSON Logger library, when installed with development dependencies (e.g., using pip install python-json-logger[dev]), relied on the msgspec-python313-pre package for Python 3.13 compatibility. However, the owner of this dependency removed it, leaving the package name available for anyone to claim. A malicious actor could claim the package name, upload a malicious version, and introduce code that executes remotely on any system installing or updating Python JSON Logger with the development dependencies.
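A general defense against this class of hijack is pip's hash-checking mode, which makes pip refuse any artifact whose content differs from the pinned digest, even if a deleted package name is later re-registered by an attacker. An illustrative requirements.txt fragment (the digest below is a placeholder, not a real hash):

```text
# Install with: pip install --require-hashes -r requirements.txt
python-json-logger==3.3.0 \
    --hash=sha256:<placeholder-digest>
```

With --require-hashes, every dependency in the file must carry a pinned hash, so a silently swapped package fails the install instead of executing.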

CVE-2025-27607 vulnerability (source: securityonline.info)

This RCE vulnerability allows attackers to run arbitrary code on affected systems, potentially compromising servers, applications, or entire networks that use the library. The vulnerability is particularly dangerous because Python JSON Logger is widely used in logging configurations across various applications, and the attack requires no user interaction beyond a standard dependency installation.

Expected Impact of the Flaw

The impact of CVE-2025-27607 is significant due to its potential for remote code execution, classified as a high-severity vulnerability. Systems running Python JSON Logger versions prior to 3.3.0, especially those using Python 3.13 and the development dependencies, are at risk of complete compromise if a malicious version of msgspec-python313-pre is installed.

This could lead to data breaches, malware deployment, or unauthorized access to sensitive systems. A post on X/Twitter indicates that 12.5 million services worldwide may be vulnerable. The vulnerability’s exploitation window, from December 30, 2024, to March 4, 2025, also means many systems could have been exposed during this period, especially if automatic dependency updates were enabled.

Response and Mitigations

The response to CVE-2025-27607 came from the maintainers of JSON Logger, who acted swiftly to address the vulnerability. After the issue was identified, likely through community reports or security research, the maintainers released version 3.3.0 of the library to resolve the problem. This update ensures that the dependency on msgspec-python313-pre is either removed, replaced with a secure alternative, or properly secured to prevent package hijacking.

The report confirms that the vulnerability was patched by March 7, 2025, with clear guidance for users to upgrade to version 3.3.0 or higher to mitigate the risk. Users are also advised to monitor their dependencies closely to prevent similar issues in the future.
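An environment can be checked quickly by comparing the installed version against the fixed release. A minimal sketch using only the standard library (the naive numeric comparison is for illustration; real code should prefer packaging.version):

```python
from importlib.metadata import PackageNotFoundError, version


def _parts(v: str) -> tuple[int, ...]:
    """Split a version string like '3.3.0' into a comparable tuple of ints."""
    return tuple(int(p) for p in v.split("."))


def is_patched(installed: str, fixed: str = "3.3.0") -> bool:
    """True if the installed version is at or above the fixed release."""
    return _parts(installed) >= _parts(fixed)


def check_python_json_logger() -> str:
    """Report whether the local python-json-logger is safe from CVE-2025-27607."""
    try:
        installed = version("python-json-logger")
    except PackageNotFoundError:
        return "python-json-logger is not installed"
    status = "patched" if is_patched(installed) else "VULNERABLE (CVE-2025-27607)"
    return f"python-json-logger {installed}: {status}"
```

Running check_python_json_logger() in an affected environment flags anything below 3.3.0 for an immediate upgrade.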

Aiocpa PyPI Package Targets Crypto Wallets
https://gridinsoft.com/blogs/aiocpa-pypi-package-crypto-wallets/
Mon, 16 Dec 2024 13:33:34 +0000

The post Aiocpa PyPI Package Targets Crypto Wallets appeared first on Gridinsoft Blog.

A malicious package named aiocpa was identified on the Python Package Index (PyPI), engineered to steal sensitive cryptocurrency wallet information. Unlike previous attacks that leveraged PyPI, which generally relied on typosquatting or impersonation, the attackers developed a seemingly legitimate crypto client tool and later inserted malicious code through updates.

Aiocpa PyPI Package Targets Crypto Wallets

ReversingLabs (RL) detected the aiocpa package on November 21 using their machine-learning-powered Spectra Assure platform. The malicious payload was embedded in the “utils/sync.py” file. This file contained obfuscated code, a common characteristic of malware frequently observed in open-source repositories such as PyPI and npm.

Upon deobfuscation, researchers found that the code exfiltrated sensitive arguments, such as cryptocurrency trading tokens, to a remote Telegram bot. These tokens could be exploited to steal crypto assets.

A wrapper function that exfiltrates function arguments to a Telegram chat (source: ReversingLabs)

The obfuscation techniques involved recursive layers of Base64 encoding combined with zlib compression. This approach made the malicious intent difficult to detect without advanced analysis tools, and it is what sets this attack apart from other malware spreading attempts that leveraged the PyPI repository.
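To illustrate the general pattern with a harmless payload, here is how recursive Base64-plus-zlib layering works and why an analyst must peel the layers back one at a time. This is a sketch of the technique, not the actual aiocpa code:

```python
import base64
import zlib


def wrap(payload: bytes, layers: int = 3) -> bytes:
    """Obfuscate by repeatedly compressing with zlib, then Base64-encoding."""
    for _ in range(layers):
        payload = base64.b64encode(zlib.compress(payload))
    return payload


def unwrap(blob: bytes, layers: int = 3) -> bytes:
    """Reverse each layer in the opposite order: Base64-decode, then decompress."""
    for _ in range(layers):
        blob = zlib.decompress(base64.b64decode(blob))
    return blob
```

Malware of this kind typically feeds the final unwrapped bytes to exec(); an analyst instead unwraps layer by layer until readable source code emerges.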

Attack Strategy

The attackers employed a novel tactic by creating and maintaining their own package rather than impersonating existing ones. Initially, aiocpa appeared to be a legitimate cryptopay API client with regular updates, proper documentation, and a GitHub repository. The account behind the package also seemed credible, with a history of contributions dating back to January 2024.

However, malicious code was introduced in versions 0.1.13 and 0.1.14, released on November 20. These versions were capable of decoding base64-encoded commands and executing them. As you may have guessed, these commands had purely malicious intent.

Such actions are typical of malware but were notably absent in earlier versions and the original GitHub repository. Additionally, the attacker attempted to hijack an existing PyPI project named pay, possibly to exploit its user base or visibility.

Challenges in Detection

According to the researchers’ reports, traditional application security tools were insufficient to detect this threat. At first glance, the package’s project page appeared legitimate. It featured a well-maintained cryptocurrency payment API client with several versions released since September 2024 and organized documentation.

The maintainer’s profile seemed credible, with another package actively maintained since March 2024. Additionally, the linked GitHub page displayed numerous contributions dating back to January 2024. So, a developer assessing security would find no reason for suspicion, especially with over 10k downloads suggesting it was trustworthy.

However, the malicious code was covertly embedded only in the package published to PyPI; it never appeared in the GitHub repository. Nevertheless, some advanced tools were able to uncover the malicious activity through behavioral differential analysis. By comparing different package versions, the tool pinpointed unexpected behaviors at the file level, enabling RL researchers to identify the threat.
