{"id":30802,"date":"2025-04-24T09:01:55","date_gmt":"2025-04-24T09:01:55","guid":{"rendered":"https:\/\/gridinsoft.com\/blogs\/?p=30802"},"modified":"2025-04-24T20:36:05","modified_gmt":"2025-04-24T20:36:05","slug":"slopsquatting-malware","status":"publish","type":"post","link":"https:\/\/gridinsoft.com\/blogs\/slopsquatting-malware\/","title":{"rendered":"Slopsquatting: New Malware Spreading Technique Targeting AI Assisted Developers"},"content":{"rendered":"<p><strong>Slopsquatting is a new type of cyber threat that takes advantage of mistakes made by AI coding tools<\/strong>, particularly LLMs that can &#8220;hallucinate&#8221;. In this post, we&#8217;ll break down this new type of attack, find out why it can occur, dispel some myths, and figure out how to prevent it.<\/p>\n<h2>Slopsquatting &#8211; New Techniques Against AI Assisted Devs<\/h2>\n<p>Slopsquatting is a supply chain attack that leverages AI-generated &#8220;hallucinations&#8221; \u2014 instances where AI coding tools recommend non-existent software package names. The term draws parallels <a href=\"https:\/\/gridinsoft.com\/blogs\/what-is-typosquatting-how-does-it-work-in-2022\/\">with typosquatting<\/a>, where attackers register misspelled domain names to deceive users.<\/p>\n<p>In slopsquatting, however, <strong>the deception stems from AI errors rather than human mistakes<\/strong>. The term combines &#8220;slop&#8221;, referring to low-quality or error-prone AI output, and &#8220;squatting&#8221;, the act of claiming these hallucinated package names for malicious purposes.<\/p>\n<p>It is a rather unexpected cybersecurity threat that exploits the limitations of AI-assisted coding tools, particularly large language models. As developers increasingly rely on these tools to streamline coding processes, the risk of inadvertently introducing malicious code into software projects grows. 
<strong>Hackers can then create malicious packages with these fake names<\/strong> and upload them to public code repositories.<\/p>\n<h2>Mechanics of Slopsquatting<\/h2>\n<p>The process of slopsquatting unfolds in several stages. First, LLMs, such as ChatGPT, GitHub Copilot, or open-source models like CodeLlama, generate code or suggest dependencies. In some cases, <strong>they recommend package names that do not exist in public repositories<\/strong> like PyPI or npm. These hallucinated names often sound plausible, resembling legitimate libraries (e.g., &#8220;secure-auth-lib&#8221; instead of an existing &#8220;authlib&#8221;).<\/p>\n<div class=\"box\">The Python Package Index (PyPI), along with npm, is leveraged by cybercriminals with threatening frequency. Read our report on one of the latest <a href=\"https:\/\/gridinsoft.com\/blogs\/pypi-malware-outbreak\/\">PyPI typosquatting incidents<\/a>.<\/div>\n<p>Stage 2 begins when developers, fully trusting <a href=\"https:\/\/socket.dev\/blog\/gmail-for-exfiltration-malicious-npm-packages-target-solana-private-keys-and-drain-victim-s\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">the AI\u2019s recommendations<\/a>, run the code, assuming all the packages it refers to are legitimate. Usually, this ends in build failures or broken functionality in parts of the resulting program. Developers might waste time debugging errors, searching for typos, or trying to figure out why a dependency isn\u2019t resolving, when in reality the AI assistant simply hallucinated a package that does not exist.<\/p>\n<p>In the worst-case scenario, which is becoming more and more prevalent, the hallucinated name is already taken by a malicious repository. Threat actors specifically register fake names that appear in AI-generated code, or <strong>pick names similar to what an AI is likely to generate<\/strong>, hoping to snare victims in the future. 
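These near-miss names are convincing precisely because they sit so close to real ones. As a minimal, illustrative sketch (the known-package set here is a tiny hypothetical subset; in practice it would come from a lockfile or a registry snapshot), Python's standard difflib can score how closely an AI-suggested name resembles a known library:

```python
import difflib

# Hypothetical allowlist of legitimate packages; a real check would use
# a lockfile or a snapshot of the registry, not a hard-coded set.
KNOWN_PACKAGES = {"authlib", "requests", "cryptography", "pyjwt"}

def near_miss_score(candidate: str) -> tuple[str, float]:
    """Return the closest known package name and its similarity ratio (0..1)."""
    best_name, best_ratio = "", 0.0
    for known in KNOWN_PACKAGES:
        ratio = difflib.SequenceMatcher(None, candidate.lower(), known).ratio()
        if ratio > best_ratio:
            best_name, best_ratio = known, ratio
    return best_name, best_ratio

# "secure-auth-lib" is the hypothetical hallucinated name from the text.
closest, score = near_miss_score("secure-auth-lib")
print(closest, round(score, 2))
```

A low-effort similarity check like this can flag a suggestion such as "secure-auth-lib" as a suspicious look-alike of "authlib" before anyone runs an install command.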
As a result, what may look like a flawless build process in fact installs <a href=\"https:\/\/gridinsoft.com\/malware\">malware<\/a> on the developer\u2019s system.<\/p>\n<p>The troubling part is that these hallucinations are not random, and are thus predictable. One study analyzed 16 LLMs, generating 576,000 Python and JavaScript code samples. The researchers found that <strong>19.7% (205,000) of recommended packages were non-existent<\/strong>. Notably, 43% of these hallucinated packages reappeared in 10 successive runs of the same prompt, and 58% appeared more than once, suggesting a level of predictability that attackers can exploit.<\/p>\n<figure id=\"attachment_30811\" aria-describedby=\"caption-attachment-30811\" style=\"width: 1000px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/python-vs-javascript-hallucination-rates.webp\" alt=\"Python vs JavaScript hallucination rates graph\" width=\"1000\" height=\"800\" class=\"size-full wp-image-30811\" title=\"\" srcset=\"https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/python-vs-javascript-hallucination-rates.webp 1000w, https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/python-vs-javascript-hallucination-rates-300x240.webp 300w, https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/python-vs-javascript-hallucination-rates-768x614.webp 768w, https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/python-vs-javascript-hallucination-rates-860x688.webp 860w\" sizes=\"auto, (max-width: 1000px) 100vw, 1000px\" \/><figcaption id=\"caption-attachment-30811\" class=\"wp-caption-text\">Python vs JavaScript hallucination rates<\/figcaption><\/figure>\n<p>This is where the fun begins. Cybercriminals identify these hallucinated package names, either by analyzing AI outputs or by predicting likely hallucinations based on patterns. 
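The recurrence measurement behind figures like the 43% reappearance rate is straightforward to reproduce on your own data: collect the non-existent names suggested across repeated runs of the same prompt and count how many come back. A toy sketch with entirely hypothetical data:

```python
from collections import Counter

# Hypothetical hallucinated package names collected from repeated runs
# of the same prompt (toy data for illustration only).
runs = [
    ["secure-auth-lib", "fastjson-utils"],
    ["secure-auth-lib"],
    ["secure-auth-lib", "easy-crypto-kit"],
    ["fastjson-utils"],
    ["secure-auth-lib"],
]

# Count how often each hallucinated name appears across all runs.
counts = Counter(name for run in runs for name in run)
total = len(counts)
repeated = sum(1 for c in counts.values() if c > 1)
print(f"{repeated}/{total} hallucinated names reappeared across runs")
```

Names that recur run after run are exactly the ones an attacker would rush to register, which is why this kind of measurement matters for defenders too.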
They then create malicious packages with these names and upload them to public repositories. This is the worst version of the scenario described in the two paragraphs above.<\/p>\n<p>As a result, this introduces malware into developers&#8217; projects, which can compromise software security, steal data, or disrupt operations. In some cases, it can even serve as <a href=\"https:\/\/gridinsoft.com\/backdoor\">a backdoor<\/a> for future attacks, allow lateral movement across systems, or lead to the compromise of an entire software supply chain.<\/p>\n<h2>Prevalence and Variability Across AI Models<\/h2>\n<p>The frequency of package hallucinations varies considerably depending on the AI model. Open-source models, such as CodeLlama and WizardCoder, tend to hallucinate more often, with an average hallucination rate of 21.7%. For example, CodeLlama hallucinated <strong>over 33%<\/strong> of the time. On the other hand, commercial models like GPT-4 Turbo perform much better, with a hallucination rate of <strong>just 3.59%<\/strong>. 
In general, GPT models are about four times less likely to hallucinate than open-source ones.<\/p>\n<figure id=\"attachment_30809\" aria-describedby=\"caption-attachment-30809\" style=\"width: 1979px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/graph.webp\" alt=\"Hallucination rates slopsquatting\" width=\"1979\" height=\"1580\" class=\"size-full wp-image-30809\" title=\"\" srcset=\"https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/graph.webp 1979w, https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/graph-300x240.webp 300w, https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/graph-1024x818.webp 1024w, https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/graph-768x613.webp 768w, https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/graph-1536x1226.webp 1536w, https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/graph-860x687.webp 860w\" sizes=\"auto, (max-width: 1979px) 100vw, 1979px\" \/><figcaption id=\"caption-attachment-30809\" class=\"wp-caption-text\">Hallucination rates of recent vs. all-time data sets<\/figcaption><\/figure>\n<p>When it comes to how believable these hallucinations are, around 38% of the fake package names are moderately similar to real ones, and only 13% are simple typos. That makes them pretty convincing to developers. So, even though commercial models are more reliable, no AI is completely immune to hallucinations\u2014and the more convincing these fakes are, the bigger the risk.<\/p>\n<h2>Potential Impact<\/h2>\n<p>Even as companies cut human staff in favor of AI, slopsquatting shows that the complete replacement of humans by artificial intelligence is unlikely to happen anytime soon. 
If a widely-used AI tool keeps recommending a hallucinated package, attackers could use that <strong>to spread malicious code to numerous developers<\/strong>, making the attack much more effective.<\/p>\n<p>Another issue is trust \u2014 developers who rely on AI tools might not always double-check whether the suggested packages are legitimate, especially when they\u2019re in a hurry. That trust makes them more vulnerable.<\/p>\n<p>While there haven\u2019t been any confirmed slopsquatting attacks in the wild as of April 2025, the technique is seen as a real threat for the future. It\u2019s similar to how typosquatting started out as a purely theoretical concern and then became a widespread problem. The risk is made worse by things like rushed security checks\u2014something OpenAI has been criticized for. As AI tools become a bigger part of development workflows, the potential damage from slopsquatting keeps growing.<\/p>\n<h2>Preventive Measures Against Slopsquatting<\/h2>\n<p>To reduce the risk of slopsquatting, developers and organizations can take several practical steps. First, it\u2019s important <strong>to verify any package recommended by an AI<\/strong> \u2014 check whether it actually exists in official repositories and review things like download numbers and the maintainer&#8217;s history.<\/p>\n<p>Beyond package verification, good code review practices are essential too. Catching odd or incorrect suggestions before the code goes live can save a lot of headaches. On top of that, developers should be trained to stay aware of the risks that come with AI hallucinations and <strong>not blindly trust everything the AI spits out<\/strong>.<\/p>\n<p>Having runtime security measures in place can help detect and stop malicious activity from any compromised dependencies that do sneak through. 
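The existence check described above can be partially automated. A minimal sketch using Python's standard library and PyPI's public JSON endpoint (the suggested package names below are hypothetical; a real pipeline would also inspect download counts and maintainer history, which this sketch does not):

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def pypi_status(name: str) -> int:
    """Return the HTTP status of the package's PyPI JSON endpoint."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urlopen(Request(url)) as resp:
            return resp.status
    except HTTPError as err:
        # PyPI answers 404 for packages that do not exist.
        return err.code

def package_exists(name: str, status_fn=pypi_status) -> bool:
    """True if the registry knows the package (HTTP 200).

    status_fn is injectable so the check can be tested without a network.
    """
    return status_fn(name) == 200

# Vet every dependency an AI assistant suggested before installing.
# The second name is hypothetical; uncomment to run a live check.
suggested = ["requests", "secure-auth-lib"]
# unknown = [p for p in suggested if not package_exists(p)]  # needs network
```

Running such a check in CI, before `pip install` ever sees the dependency list, turns a silent supply-chain risk into a visible build failure.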
I recommend GridinSoft Anti-Malware as a reliable solution for personal security: its multi-component detection system will find and eliminate even the most elusive threats, regardless of how they are introduced. Download it by clicking the banner below and get yourself proper protection today.<\/p>\n<p style=\"padding-top:15px;padding-bottom:15px;\"><a href=\"\/download\/antimalware\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"\/blogs\/wp-content\/uploads\/2022\/07\/env01.webp\" alt=\"Slopsquatting: New Malware Spreading Technique Targeting AI Assisted Developers\" width=\"798\" height=\"336\" class=\"aligncenter size-full\" title=\"\"><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Slopsquatting is a new type of cyber threat that takes advantage of mistakes made by AI coding tools, particularly LLMs that can &#8220;hallucinate&#8221;. In this post, we&#8217;ll break down this new type of attack, find out why it can occur, dispel some myths, and figure out how to prevent it. 
Slopsquatting &#8211; New Techniques Against [&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":30830,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"content-type":"","_sitemap_exclude":false,"_sitemap_priority":"","_sitemap_frequency":"","footnotes":""},"categories":[17,4],"tags":[444,619,1547],"class_list":{"0":"post-30802","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-labs","8":"category-tips-tricks","9":"tag-ai","10":"tag-cybersecurity","11":"tag-pypi"},"featured_image_src":"https:\/\/gridinsoft.com\/blogs\/wp-content\/uploads\/2025\/04\/GS_Blog_Slopsquatting-The-New-AI-Powered-Supply-Chain-Threat_1280x674.webp","author_info":{"display_name":"Stephanie Adlam","author_link":"https:\/\/gridinsoft.com\/blogs\/author\/adlam\/"},"_links":{"self":[{"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/posts\/30802","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/comments?post=30802"}],"version-history":[{"count":13,"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/posts\/30802\/revisions"}],"predecessor-version":[{"id":30828,"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/posts\/30802\/revisions\/30828"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/media\/30830"}],"wp:attachment":[{"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/media?parent=30802"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/categories?post=30802"},{"taxonomy":"post_tag","embeddable":true,
"href":"https:\/\/gridinsoft.com\/blogs\/wp-json\/wp\/v2\/tags?post=30802"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}