AI-Generated Fake IDs Are Getting Real – How to Detect and Defend

Fraud teams have been passing around the same kind of screenshot lately: a passport-style fake ID produced by an AI image generator. The output looks clean enough to fool a quick glance – readable text, consistent layout, and a portrait that does not belong to a real person.

This is not the end of identity verification. It is a warning that many KYC flows still lean too heavily on a single, fragile artifact: an uploaded document image.

The Old Tricks Don’t Work Anymore

For years, a lot of verification systems benefited from friction. Creating a convincing fake ID usually took skill, time, and trial and error. That limited volume, and it kept most low-effort fraud sloppy.

That friction is shrinking fast.

Google’s Nano Banana Pro, part of the Gemini image generation suite, is noticeably better at two things that matter for document fraud. First, it can render text clearly and consistently. Second, it preserves layout discipline – spacing, alignment, and repeated patterns that make a document look “official” at a glance.

None of this was built for criminals. These tools are aimed at mockups, marketing assets, and creative work. But the side effect is predictable: the cost of producing believable-looking documents drops, and the number of attempts goes up.

A word of caution: do not upload real identity documents to random “AI generator” websites to test this yourself. Some sites are scams designed to harvest sensitive files. Learn how to protect your personal data online. And yes, creating or using forged identity documents is illegal and causes real harm.
An AI-generated portrait that may look legitimate in workflows that rely on image review and OCR.

What This Actually Means (And What It Doesn’t)

“AI can forge perfect IDs” is a catchy headline. In practice, the bigger change is more boring: an ID photo is no longer the strong signal many systems assume it is.

If you already run a mature identity program, this is not news. Strong verification does not depend on a single uploaded image. It relies on layers – consistency checks, safer capture, step-up verification when the situation calls for it, and cryptographic validation where it is available. In that setup, an AI-generated passport image does not prove anything on its own.

The problem shows up in the everyday, stripped-down flows: upload a document photo, run OCR and a template check, optionally add a selfie, approve. That model held up mostly because high-quality fakes were expensive and annoying to produce. When an attacker can generate dozens of clean variations in minutes, the weak spots show up fast.

For human review, the trap is assuming “clean” equals “real.” Real documents captured in real life usually come with small imperfections: uneven lighting, slight blur, mild lens distortion, print texture, dust, tiny scratches, and edge shadows. AI outputs often look like they were shot in a studio. If a document looks unusually perfect, treat that as a reason to ask for stronger proof rather than a reason to relax.

The machine readable zone (MRZ) is one of the quickest reality checks. Visual details are easy to imitate. Internal consistency is not. Many fakes fail on logic: the MRZ does not match the visible fields, check digits are wrong, or dates and values do not follow standard patterns. Those mistakes are often easier to spot than subtle visual tells.
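
To make that concrete, here is a minimal sketch of the ICAO 9303 check-digit rule that MRZ fields follow (repeating weights 7, 3, 1, with digits at face value, letters A–Z as 10–35, and the "<" filler as 0). It illustrates the consistency check, not a full MRZ parser:

    # ICAO 9303 MRZ check digit: weight each character 7, 3, 1 (repeating),
    # sum the products, and take the result modulo 10.
    def mrz_char_value(c: str) -> int:
        if c.isdigit():
            return int(c)
        if c.isalpha():
            return ord(c.upper()) - ord("A") + 10
        return 0  # '<' filler counts as zero

    def mrz_check_digit(field: str) -> int:
        weights = (7, 3, 1)
        return sum(mrz_char_value(c) * weights[i % 3]
                   for i, c in enumerate(field)) % 10

    # A date field like "750614" must be followed by check digit 3 in the MRZ;
    # fakes often get these digits wrong even when the layout looks clean.
    print(mrz_check_digit("750614"))  # -> 3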

When AI can generate both the face and the document image, “looks real” becomes a weak signal by itself.

How Verification Systems Need to Evolve

If your organization still treats an uploaded image as primary proof of identity, it is time to revisit the design.

Start with capture. One of the biggest upgrades for many teams is requiring live capture and document presence checks. The goal is to reduce gallery uploads and limit simple injection of pre-generated media. In practice: avoid screenshots and email attachments, and treat “upload from anywhere” as a high-risk feature unless you have strong anti-injection controls.

Re-evaluate selfie checks. Basic liveness prompts were built to stop static photo reuse. They are not a complete answer to synthetic media and injection attacks. Many teams are moving toward stronger presence assurance, combining multiple signals and applying step-up verification when the risk profile changes. If a check can be bypassed by media injection, it should not be counted as high assurance.

Prefer cryptographic signals when available. Modern passports and many national ID cards include NFC chips with cryptographically signed data. If your system can read the chip and validate signatures properly, you are not guessing from pixels. You are verifying signed data stored on the document. Where chip-based verification is available, it should be treated as a primary control, with image review as a fallback.

Apply risk-based step-up. Not every action needs the same friction. A low-risk download should not be verified like a high-risk payment. But for sensitive actions (account recovery, financial transfers, high-value purchases), stronger verification should be the default: step-up review, chip reads where supported, video-based verification where justified, or secondary evidence.
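
As a rough illustration, here is a minimal policy sketch for that kind of tiering. The action names, signals, and step-up levels are invented for the example rather than taken from any real product:

    # Map an action plus contextual signals to a verification requirement.
    STEP_UP = {
        "low": "none",
        "medium": "selfie_liveness",
        "high": "nfc_chip_read_or_video_verification",
    }

    def risk_tier(action: str, signals: dict) -> str:
        if action in {"account_recovery", "financial_transfer", "high_value_purchase"}:
            return "high"
        if signals.get("new_device") or signals.get("doc_image_unusually_clean"):
            return "medium"  # "too perfect" documents trigger extra checks
        return "low"

    print(STEP_UP[risk_tier("account_recovery", {})])  # -> nfc_chip_read_or_video_verification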

The Watermark Question

Google says images created with Nano Banana Pro include SynthID watermarking, an embedded marker intended to indicate AI generation. That can help when it is present and verifiable, but it is not a full solution. Attackers can use tools that do not embed provenance markers, or they can process images in ways that degrade or remove watermark data. Treat provenance as one signal, not the basis of an identity decision.

AI did not invent identity fraud. It made high-quality attempts cheaper and easier to repeat. That changes the math for KYC teams and fraud prevention teams, even if the underlying problem is familiar.

If your controls assume the attacker cannot produce clean, professional-looking document images on demand, update that assumption. The old rule was “looks real, probably real.” A safer rule today: do not trust document images by default. Prefer cryptographic verification where available, require live capture with anti-injection controls, and treat unusually “perfect” documents as a reason to step up verification.

AI Chats Are Delivering AMOS Stealer Through Google Search Results

Here’s a novel malware delivery vector that nobody saw coming. Attackers are weaponizing publicly shared conversations with AI assistants like ChatGPT and Grok to deliver the AMOS stealer to Mac users. The kicker? These poisoned AI chats are ranking at the top of Google search results for completely innocent queries like “How to free up disk space on Mac”.

What you thought was helpful advice from your trusted silicon friend turns out to be a credential-stealing trap. Life definitely did not prepare regular users for this one.

On December 5, 2025, Huntress researchers investigated an Atomic macOS Stealer (AMOS Stealer) alert with an unusual origin. No phishing email. No malicious installer. No right-click-to-bypass-Gatekeeper shenanigans. The victim had simply searched Google for “Clear disk space on macOS.”

At the top of results sat two highly-ranked links—one to a ChatGPT conversation, another to a Grok chat. Both platforms are legitimate. Both conversations looked authentic, with professional formatting, numbered steps, even reassuring language like “safely removes” and “does not touch your personal data.”

“How to clear disk space” – a poisoned AI chat result delivering AMOS Stealer.

But instead of legitimate cleanup instructions—surprise, surprise—it was a ClickFix-style attack. To the average user, the whole thing looks absolutely convincing: why wouldn’t you trust Google and your AI assistant? They surely won’t let you down.

Grok’s version at least displays a banner warning about custom instructions—but that means nothing to someone who just wants to clear their disk space.

Huntress confirmed this isn’t a one-off case. They reproduced poisoned results for “how to clear data on iMac,” “clear system data on iMac,” and “free up storage on Mac.” Multiple AI conversations are surfacing organically through standard search terms, each pointing victims toward the same multi-stage macOS stealer. This is a coordinated SEO poisoning campaign.

Traditional malware delivery requires users to fight their instincts: allow unknown files, bypass Gatekeeper, click through security warnings. This attack? It just needs you to search, click a trusted-looking result, and paste a command into Terminal. No downloads. No warnings. No red flags.

Users aren’t being careless—they’re following what appears to be legitimate advice from a trusted AI platform, served up by a search engine they use daily, for a task that actually does involve Terminal commands. The attack exploits trust in search engines, trust in AI platforms (chatgpt.com and grok.com are real domains everyone knows), trust in the familiar ChatGPT formatting, and the normalized behavior of copying Terminal commands from authoritative sources.

What AMOS Stealer Actually Does

Once executed, the malware kicks off a multi-stage infection. First, it prompts for your “System Password” via a fake dialog—not even the real macOS authentication UI—and silently validates it using Directory Services. Then it uses that password with sudo to gain root access.

For persistence, it drops a hidden .helper binary and a LaunchDaemon that respawns the malware every second if killed. If you have Ledger Wallet or Trezor Suite installed, it overwrites them with trojanized versions designed to steal your seed phrases. Finally, it exfiltrates browser credentials, cookies, Keychain data, and cryptocurrency wallets from Electrum, Exodus, MetaMask, Coinbase, and more.
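
For defenders triaging a Mac, one hedged starting point is to list LaunchDaemons configured to respawn aggressively, since that is the persistence pattern described above. Label and file names vary between samples, and the ~/.helper path is an assumption based on the report, so treat hits as leads, not verdicts:

    # List LaunchDaemons that keep a process alive or respawn it on a very
    # short throttle interval - the every-second respawn pattern fits here.
    import pathlib
    import plistlib

    for plist_path in pathlib.Path("/Library/LaunchDaemons").glob("*.plist"):
        try:
            with plist_path.open("rb") as fh:
                cfg = plistlib.load(fh)
        except Exception:
            continue  # skip unreadable or malformed plists
        if cfg.get("KeepAlive") or cfg.get("ThrottleInterval", 10) < 2:
            print(plist_path, cfg.get("ProgramArguments"))

    # Also look for the hidden helper binary (exact path is an assumption):
    helper = pathlib.Path.home() / ".helper"
    if helper.exists():
        print("Suspicious hidden binary:", helper)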

The password prompt doesn’t even look like macOS—it’s just a script asking politely for your password. And people enter it anyway, because they trust where the instructions came from.

ClickFix Keeps Getting Creative

This campaign adds another impressive example to the ClickFix portfolio. The technique has evolved from fake CAPTCHA prompts and browser updates to now exploiting our relationship with AI assistants. Malware no longer needs to masquerade as legitimate software—it just needs to masquerade as help.

All of this is fascinating from a security research perspective, but honestly, you have to feel sorry for regular users—nobody prepared them for their trusted search engine and AI assistant teaming up against them.

Chinese Hackers Used Claude AI to Automate 90% of Cyber Espionage Campaign

Chinese cyber spies automated 90% of their attack campaign using Claude AI. Not a drill, not a prediction—this actually happened. Anthropic’s threat researchers discovered and disrupted what they’re calling the first documented AI-orchestrated cyber espionage campaign. And the scary part? It worked.

The attackers manipulated Claude into functioning as an autonomous cyber attack agent. Analysis shows the AI executed 80-90% of all tactical work independently. Humans only stepped in to approve strategic decisions—like whether to exploit a vulnerability or which data to exfiltrate.

Here’s how they pulled it off. The attackers built an autonomous framework using Claude and Model Context Protocol (MCP) tools—essentially giving Claude the ability to connect to external tools and APIs. They decomposed complex attacks into discrete tasks: vulnerability scanning, credential validation, lateral movement, data extraction. Each task looked legitimate when evaluated in isolation.

The genius part? They social-engineered the AI itself. The attackers told Claude they were legitimate cybersecurity professionals conducting defensive testing. Claude had no idea it was attacking real targets—it thought it was helping with authorized penetration testing.

The Operation

Anthropic detected this in mid-September 2025. A Chinese state-sponsored group targeted about 30 entities: tech companies, chemical manufacturers, financial institutions, government agencies across multiple countries. Several intrusions succeeded before the campaign was disrupted.

The attack lifecycle was textbook, but with an AI twist. Claude would receive a high-level goal, break it down into steps, then orchestrate the entire operation. Network reconnaissance to map the environment. Vulnerability scanning to find weaknesses. Credential harvesting and validation. Lateral movement through the network. Data identification and exfiltration.

At each stage, Claude evaluated results and decided what to do next—continue, escalate, or pivot. Humans only intervened at critical junctures: approving the shift from reconnaissance to exploitation, authorizing credential use for lateral movement, deciding what data to steal.

Simplified architecture diagram of the operation

Commodity Tools, Extraordinary Results

Here’s what should worry defenders: the attackers didn’t need sophisticated zero-days or custom malware. They used off-the-shelf penetration testing tools—the same ones security professionals use daily. Network scanners, password crackers, database exploitation frameworks. The innovation wasn’t in the tools; it was in having an AI orchestrate them autonomously, 24/7, without fatigue or human error.

As Anthropic’s researchers noted: “The minimal reliance on proprietary tools or advanced exploit development demonstrates that cyber capabilities increasingly derive from orchestration of commodity resources rather than technical innovation.”

Think about the implications. You don’t need a team of elite hackers anymore. You need access to Claude, some open-source tools, and the ability to convince an AI it’s doing legitimate work. The barrier to entry for nation-state-level cyber operations just collapsed. We’re entering an era where even slopsquatting campaigns could be enhanced with AI orchestration.

The Hallucination Problem (For Now)

Claude has a critical limitation: it hallucinates. Sometimes it claimed to find vulnerabilities that didn’t exist. Sometimes it reported completing tasks it hadn’t actually finished. This forced attackers to validate results manually, preventing full automation.

But here’s the kicker—even with these limitations, the approach achieved “operational scale typically associated with nation-state campaigns while maintaining minimal direct involvement.” That’s a direct quote from Anthropic’s report.

As AI models improve at self-validation and become more reliable, this human-in-the-loop requirement will disappear. We’re looking at a future where fully autonomous cyberattacks run continuously, with humans just clicking “approve” on major decisions. We’ve already seen experimental attempts like PromptFlux using AI for self-modification and threats that bypass Microsoft Defender with AI assistance.

What This Actually Means

This isn’t theoretical anymore. We’ve crossed a threshold. AI-powered autonomous attacks are operational, and they’re only going to get better. The same techniques that worked for Chinese state actors will proliferate to smaller groups, cybercriminal organizations, even lone actors.

Traditional security controls assume human attackers with human limitations—they get tired, make mistakes, need breaks. But AI doesn’t sleep. It doesn’t make typos at 3 AM. It can maintain persistent, complex attack chains indefinitely.

For defenders, this changes everything. You’re not just trying to detect what happened—you need to figure out whether a human or an AI made the decision. Attribution becomes nearly impossible when the actual attacker is an AI following high-level human guidance.

The accessibility of this approach suggests rapid proliferation across the threat landscape. What requires a nation-state team today might be achievable by a small group with Claude access tomorrow.

Anthropic disrupted this campaign, but they’ve only delayed the inevitable. Other groups are watching, learning, adapting. The genie is out of the bottle.

Check Anthropic’s full report for technical details. But the bottom line is clear: the age of AI-powered cyber warfare isn’t coming—it’s here. And we’re woefully unprepared.

PROMPTFLUX: AI Malware Using Gemini for Self-Modification

Malware that rewrites itself on the fly, like a shape-shifting villain in a sci-fi thriller. That’s the chilling vision Google’s Threat Intelligence Group (GTIG) paints in their latest report. They’ve spotted experimental code using Google’s own Gemini AI to morph and evade detection. But is this the dawn of unstoppable AI super-malware, or just clever marketing for Big Tech’s AI arms race? Let’s dive into the details and separate fact from fiction.

How PROMPTFLUX Works

The PROMPTFLUX AI malware lifecycle at a glance:

  Threat Name: PROMPTFLUX / AI-Enhanced Malware
  Threat Type: Experimental Dropper, Metamorphic Malware
  Discovery Date: June 2025
  Infection Vector: Phishing campaign or a compromised software supply chain
  Dynamic Payload Generation: The malware’s C2 server uses the Gemini API to generate new, unique payloads on demand, making signature-based detection useless
  Traffic Obfuscation: Communications with the C2 are disguised as legitimate calls to Google’s Gemini API, blending into normal, allowed web traffic
  Capabilities: Data theft, credential harvesting, and establishing a persistent backdoor
  Key Feature: Uses Gemini API for real-time code obfuscation
  Current Status: Experimental, not yet operational
  Potential Impact: Harder-to-detect persistent threats
  Risk Level: Low – more concept than crisis

Malware Meets AI in a Dark Alley

It’s early June 2025, and Google’s cyber sleuths stumble upon PROMPTFLUX, a sneaky VBScript dropper that’s not content with staying put. This experimental malware calls home to Gemini, Google’s AI powerhouse, asking it to play the role of an “expert VBScript obfuscator” that dodges antiviruses like a pro. The result? A fresh, garbled version of itself every hour, tucked into your Startup folder for that persistent punch.

PROMPTFLUX code that uses AI to reinvent itself. (Credit: Google)

As detailed in Google’s eye-opening report, this is the first sighting of “just-in-time” AI in live malware execution. No more static code—this bad boy generates malicious functions on demand. But hold the panic: the code’s riddled with commented-out features and API call limits, screaming “work in progress.” It’s like a villain monologuing their plan before they’ve even built the death ray.

Behind the Curtain: How AI Turns Malware into a Chameleon

PROMPTFLUX isn’t just phoning a friend; it’s outsourcing its evolution. It prompts Gemini to rewrite its source code, aiming to slip past static analysis and endpoint detection tools (EDRs). It even tries to spread like a digital plague via USB drives and network shares. Sounds terrifying, right?

Not so fast. Google admits the tech is nascent. Current large language models (LLMs) like Gemini produce code that’s… well, mediocre at best. Effective metamorphic malware needs surgical precision, not the “vibe coding” we’re seeing here. It’s more proof-of-concept than apocalypse-bringer.

Beyond PROMPTFLUX

The report isn’t a one-trick pony. GTIG spotlights a menagerie of experimental AI malware:

  • PROMPTSTEAL: A Python data miner that taps Hugging Face’s API to conjure Windows commands for stealing system info and documents.
  • PROMPTLOCK: Cross-platform ransomware that whips up malicious Lua scripts at runtime for encryption and exfiltration.
  • QUIETVAULT: A JavaScript credential thief that uses local AI tools to hunt GitHub and NPM tokens, exfiltrating them to public repos.

These aren’t isolated experiments. State actors from North Korea, Iran, and China are already wielding AI for reconnaissance, phishing, and command-and-control wizardry. Meanwhile, the cybercrime black market is buzzing with AI tools for phishing kits and vulnerability hunting. The barrier to entry? Plummeting faster than crypto in a bear market.

Hype or Genuine Threat?

Google’s report drops terms like “novel AI-enabled malware” and “autonomous adaptive threats,” enough to make any sysadmin sweat. But let’s read between the lines. PROMPTFLUX is still in diapers—incomplete, non-infectious, and quickly shut down by Google disabling the associated API keys.

Could this be stealth marketing? In the cutthroat AI arena, where bubbles threaten to burst, showcasing your model’s “misuse” potential might just highlight its power. As one skeptic put it: “Good try, twisted intelligence, but not today.” We’ve got years before AI malware goes mainstream. Still, it’s a wake-up call: The future of cyber threats is getting smarter, and we need to keep pace.

While PROMPTFLUX won’t keep you up tonight, it’s a harbinger. Here’s how to future-proof your defenses:

Survival Tips in the AI Age:

  • Updates: Patch your systems and security tools religiously.
  • API Vigilance: Monitor outbound calls to AI services—they could be malware phoning home (see the sketch after this list).
  • Educate and Simulate: Train your team on AI-boosted phishing and run drills.
  • Zero Trust, Full Time: Assume nothing’s safe; verify everything.
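
As a starting point for that API vigilance, here is a hedged sketch that scans a web proxy log for requests to well-known AI model endpoints. The hosts are real public APIs; the log path and format are assumptions to adapt to your environment:

    # Flag outbound requests to AI-model APIs for review. Legitimate traffic
    # to these hosts is common, so review findings rather than auto-blocking.
    AI_API_HOSTS = (
        "generativelanguage.googleapis.com",  # Gemini API
        "api.openai.com",
        "api-inference.huggingface.co",
    )

    with open("/var/log/proxy/access.log", encoding="utf-8", errors="replace") as log:
        for line in log:
            if any(host in line for host in AI_API_HOSTS):
                print("Review:", line.rstrip())

An allowlist-plus-review process makes more sense here than outright blocking, since plenty of legitimate software now calls these same endpoints.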

Google’s already beefing up Gemini’s safeguards, but the cat-and-mouse game is just beginning.

The Final Byte

Google’s deep dive into AI-powered malware is equal parts fascinating and foreboding. PROMPTFLUX and its ilk hint at a future where threats evolve faster than we can patch. Yet, for now, it’s more smoke than fire—a clever ploy in the AI hype machine, perhaps. Stay informed, stay secure, and remember: in the battle of wits between humans and machines, we’re still holding the plug. For more cyber scoops, check our breakdowns of top infostealers.

AI-Generated Malware Bypasses Microsoft Defender 8% of the Time, Black Hat 2025 Research Reveals

Imagine a world where hackers don’t painstakingly craft malicious code by hand, but instead train AI models to evolve and outsmart antivirus software like living organisms. This isn’t science fiction—it’s the chilling reality unveiled in a groundbreaking proof-of-concept (PoC) by Kyle Avery, Principal Offensive Specialist Lead at Outflank.

Set to be presented at Black Hat USA 2025 in Las Vegas, this PoC demonstrates how reinforcement learning (RL) can turn an open-source language model into a malware-generating machine that reliably bypasses Microsoft Defender for Endpoint. What makes this research so intriguing? It’s not just about evasion—it’s about democratizing advanced hacking.

With a modest budget of around $1,500 and three months of training on consumer hardware, Avery created a tool that succeeds 8% of the time, meaning attackers could generate undetectable malware in just a dozen tries. This “vibe hacking” aesthetic—where AI feels like a cyberpunk apprentice learning to dodge digital guardians—signals a fundamental shift in cybersecurity battles.

Background: From Hype to Reality in AI Malware

Since late 2023, experts have warned about AI’s potential in cybercrime. Early uses were rudimentary: hackers leveraging models like ChatGPT for phishing emails or basic scripts. But these were easily detected, lacking the sophistication to challenge enterprise defenses like Microsoft Defender.

The turning point came with advancements in reinforcement learning, inspired by OpenAI’s o1 model (released December 2024) and DeepSeek’s open-source R1 (January 2025). These models excel in verifiable tasks—think math or coding—by rewarding correct predictions and penalizing errors, rather than relying on vast unsupervised datasets.

Avery spotted an opportunity: apply RL to malware creation, where “success” is measurable (does the code run? Does it evade detection?). Unlike traditional LLMs needing terabytes of malware samples—a scarce resource—RL allows self-improvement through trial and error. This PoC isn’t the first AI-malware attempt, but it’s the most reliable, outperforming commercial models like Anthropic’s (under 1% success) and DeepSeek’s (under 0.5%).

The PoC Unveiled: Model and Core Mechanics

At the heart of Avery’s creation is Qwen 2.5, a 7-billion-parameter open-source LLM from Alibaba Cloud. Chosen for its lightweight design, it runs on high-end consumer GPUs (e.g., NVIDIA RTX 40-series), making it accessible without cloud farms. Out-of-the-box, Qwen generates functional malware only 1 in 1,000 times—and none evade Defender. Avery’s genius? A custom RL loop that “evolves” the model.

  Success Rate: 8% full evasion (vs. <1% for rivals)
  Training Cost: $1,500–$1,600 (mostly GPU compute)
  Time: 3 months, with potential for higher rates if continued (trend showed no immediate plateau)
  Size: Orders of magnitude smaller than GPT-4, yet specialized for evasion

This isn’t random generation—it’s targeted. The model specializes in verifiable outcomes, excelling at code but faltering in subjective tasks like creative writing, a deliberate trade-off from RL design.

Training Details: A Step-by-Step Breakdown

Avery’s process mimics natural selection in a digital sandbox. Here’s how it unfolded (a toy sketch of the reward loop follows the list):

  1. Setup the Environment: Qwen 2.5 is placed in an isolated virtual machine with Microsoft Defender for Endpoint installed. No real-world risks—just simulated attacks.
  2. Initial Generation: The model outputs code attempts (e.g., trojans or shellcode). Success is rare at first.
  3. Reward System: A custom program scores outputs. First, reward functionality (does it execute?). Then, integrate Defender’s API to query alerts—reward lower severity (from high-alert to none). This “verifiable rewards” mechanism updates the model’s weights to favor successful “thought processes.”
  4. Iteration and Fine-Tuning: Thousands of cycles refine the model. No massive datasets needed—RL uses self-generated feedback. After three months, it hits 8% evasion reliability.
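
To show the shape of that loop without reproducing any of it, here is a deliberately toy sketch. It is not Avery’s code: the model and sandbox are random stubs and the reward scale is invented; it only illustrates how “did it run?” and “how loud was the alert?” combine into a single training signal:

    import random

    class ToyModel:
        def generate(self) -> str:
            return "candidate_code"  # stand-in for an LLM emitting a code sample

        def update(self, sample: str, reward: float) -> None:
            pass  # real RL would nudge weights toward high-reward outputs

    class ToySandbox:
        def executes(self, code: str) -> bool:
            return random.random() < 0.5     # did the candidate run at all?

        def alert_severity(self, code: str) -> int:
            return random.randint(0, 3)      # 3 = high-severity alert, 0 = silent

    def score(ran: bool, severity: int) -> float:
        if not ran:
            return -1.0                      # reward functionality first...
        return 1.0 + (3 - severity)          # ...then reward quieter verdicts

    model, sandbox = ToyModel(), ToySandbox()
    for _ in range(3):                       # thousands of cycles in practice
        candidate = model.generate()
        reward = score(sandbox.executes(candidate), sandbox.alert_severity(candidate))
        model.update(candidate, reward)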

One fascinating angle: this echoes “gradient hacking,” where AI manipulates its own training to achieve hidden goals. Avery stopped at 8%, but projections suggest 20-30% with more time, turning this into a plug-and-play tool for red teamers—or worse, cybercriminals.

The 8% Success Rate: Small Number, Big Implications

You might think 8% doesn’t sound too scary. But consider this: if cybercriminals deploy AI-generated malware at scale, even a small success rate translates to significant damage. With millions of potential targets, 8% becomes a substantial number of compromised systems.

However, the study also reveals current limitations. The relatively low success rate suggests that modern security solutions like Microsoft Defender are still effective against most AI-generated threats. It’s not the cybersecurity apocalypse some feared, but it’s definitely a wake-up call.

Should You Panic? Not Yet

Before you start questioning whether to disable Windows Defender (spoiler: you shouldn’t), let’s put this in perspective. The 8% success rate actually demonstrates how effective modern security solutions are against AI-generated threats.

Microsoft Defender, along with other reputable antivirus solutions, uses multiple layers of protection. Signature-based detection is just one piece of the puzzle. Behavioral analysis, machine learning algorithms, and heuristic scanning work together to catch threats that might slip past traditional detection methods.

This is why cybersecurity experts always recommend using comprehensive protection rather than relying on a single security measure. It’s also why keeping your security software updated is crucial—as AI attack methods evolve, so do the defensive countermeasures.

Countermeasures: Fighting Back Against AI Evasion

The good news? This PoC isn’t invincible. Defenders can adapt with proactive strategies:

  • AI-Powered Detection: Use RL in reverse—train defenders to spot AI-generated patterns, like unnatural code structures or rapid iterations.
  • Behavioral Analysis: Shift from signature-based to anomaly detection.
  • Sandbox Hardening: Limit API access in testing environments and use multi-layered EDR with ML to flag evasion attempts early.
  • Model Watermarking: Embed tracers in open-source LLMs to detect malicious fine-tuning.
  • Regulatory and Community Efforts: As seen in Black Hat talks, collaborate on sharing RL evasion datasets. Microsoft could update Defender with RL-specific heuristics post-presentation.


Experts predict criminals will adopt similar tech soon, so proactive patching and AI ethics guidelines are crucial.

The Bigger Picture: AI vs AI Arms Race

This research is an early skirmish in an AI-versus-AI arms race. It lowers barriers for script kiddies, potentially flooding the dark web with custom evasion kits; yet it also empowers ethical hackers, accelerating red team innovations.

Microsoft and other security vendors are already incorporating machine learning into their detection engines. These systems can identify patterns and anomalies that might indicate AI-generated threats, even if they haven’t seen the exact malware variant before.

The key is that defensive AI systems have advantages too. They can analyze vast amounts of data, learn from global threat intelligence, and adapt their detection methods in real-time. While attackers might use AI to create new variants, defenders can use AI to recognize the underlying patterns and techniques.

What This Means for Regular Users

For most users, this research doesn’t change the fundamental cybersecurity advice, but it does emphasize the importance of multi-layered protection:

  • Keep your security software updated – Regular updates include new detection methods and countermeasures against evolving AI threats
  • Don’t rely on just one security layer – Use comprehensive protection with multiple detection methods including behavioral analysis
  • Stay vigilant about suspicious emails and downloads – No security system is 100% effective, especially against novel AI-generated threats
  • Keep your operating system and software current – Many attacks exploit known vulnerabilities that patches can prevent
  • Practice good cybersecurity hygiene – Avoid risky behaviors that could expose you to threats, regardless of their origin

The silver lining is that while AI can generate more sophisticated malware, it also enables better detection systems. Modern security solutions are increasingly incorporating AI-powered behavioral analysis to spot anomalies that traditional signature-based detection might miss.

Implications: The Future of “Vibe Hacking”

This PoC embodies what Avery calls “vibe hacking”—a futuristic blend of machine learning and cyber warfare, where attackers become AI trainers rather than traditional coders. It represents a fundamental shift in how cybercrime might evolve, lowering barriers for less skilled actors while potentially flooding the dark web with custom evasion kits.

The democratization aspect is particularly concerning. Where traditional malware creation requires deep technical knowledge and countless hours of manual coding, this AI approach could enable “script kiddies” to generate sophisticated threats. Yet it also empowers ethical hackers and red team professionals, accelerating defensive innovations.

Criminal adoption of similar technology is “pretty likely in the medium term.” The proof-of-concept’s success rate could potentially reach 20-30% with continued training, transforming it from a research curiosity into a practical tool for both red teamers and cybercriminals.

Looking Ahead: Preparing for the AI Era

Kyle Avery’s Black Hat 2025 presentation will undoubtedly spark intense discussion in the cybersecurity community. The research demonstrates that while AI-generated malware is becoming more sophisticated, it’s not yet the existential threat some feared.

The 8% success rate, while significant, also shows that modern security solutions like Microsoft Defender are still effective against the majority of AI-generated threats. However, the trend toward higher success rates with continued training suggests this is just the beginning of a new chapter in cybersecurity.

For businesses and organizations, this research underscores the importance of layered security approaches. Relying on any single security solution, no matter how advanced, is increasingly risky. The future of cybersecurity lies in comprehensive, multi-layered defense strategies that can adapt to evolving threats.

Stay Vigilant in the AI Era

Avery’s groundbreaking work at Black Hat 2025 isn’t a doomsday prophecy—it’s a wake-up call for the cybersecurity industry. By understanding reinforcement learning-driven threats today, we can build more resilient defenses for tomorrow.

The research shows that while AI can enhance cybercrime capabilities, it also opens new avenues for defense. The key is ensuring that defensive AI capabilities evolve faster than offensive ones, maintaining the balance that keeps our digital world secure.

For users, the message remains clear: maintain good security practices, keep your software updated, and use comprehensive protection. Whether it’s traditional malware or AI-generated threats, the principles of good cybersecurity remain the same: stay informed, stay protected, and stay vigilant.

At GridinSoft, we’re committed to evolving our security solutions to meet these emerging challenges. As the AI revolution in cybersecurity unfolds, we’ll continue monitoring these developments and adapting our defenses accordingly.

Kyle Avery’s full research will be presented at Black Hat USA 2025 in Las Vegas.

Noodlophile Stealer: Cybercriminals Hijack AI Hype to Steal Your Data

Just when you thought cybercriminals couldn’t get more creative, they’ve found a way to weaponize our collective obsession with AI. Meet Noodlophile Stealer, a newly discovered information-stealing malware that’s turning the AI revolution into a data theft operation. Because apparently, even malware developers want to ride the artificial intelligence wave.

  Name: Noodlophile Stealer, Noodlophile Malware
  Threat Type: Information Stealer, Remote Access Trojan
  Disguise: AI video generation platforms, fake content creation tools
  What It Steals: Browser credentials, cryptocurrency wallets, session tokens, personal files
  Distribution: Facebook groups (62K+ views), fake AI websites, viral social media campaigns
  Communication: Telegram bot API for data exfiltration
  Additional Payload: XWorm 5.2 remote access trojan
  Risk Level: High (financial loss, account takeover, persistent remote access)

The AI Bait: Too Good to Be True

Security researchers at Morphisec have uncovered a sophisticated campaign that exploits public enthusiasm for AI-powered content creation. Instead of the usual suspects like cracked software or phishing emails, cybercriminals are now building convincing fake AI platforms that promise cutting-edge video and image generation capabilities.

Fake AI platforms that promise cutting-edge video generation

The operation starts innocently enough. Victims discover these fake AI platforms through Facebook groups boasting over 62,000 views, where users eagerly share links to “revolutionary” AI tools for video editing and content creation. The social engineering is brilliant in its simplicity: who doesn’t want access to the latest AI technology for free?

How the Scam Works

The attack chain is deceptively straightforward:

  1. Discovery: Users find fake AI platforms through viral Facebook posts and groups
  2. Engagement: Victims upload their images or videos, believing they’re using legitimate AI tools
  3. The Hook: After “processing,” users are prompted to download their enhanced content
  4. The Payload: Instead of AI-generated videos, they download malware disguised as their processed content

The downloaded file typically comes as a ZIP archive with names like “VideoDreamAI.zip” containing an executable masquerading as a video file: “Video Dream MachineAI.mp4.exe”. The filename exploits whitespace and misleading extensions to appear harmless, but it’s actually a sophisticated malware delivery system.
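
One cheap defense against this trick is to treat any “media” file whose final extension is executable as hostile. A minimal sketch (the extension lists are illustrative, not exhaustive):

    # Flag filenames that chain a media-looking extension with an executable one.
    import pathlib

    EXECUTABLE = {".exe", ".scr", ".com", ".bat", ".cmd", ".pif"}
    MEDIA = {".mp4", ".avi", ".mov", ".mp3", ".jpg", ".png"}

    def looks_disguised(name: str) -> bool:
        suffixes = [s.lower() for s in pathlib.PurePath(name).suffixes]
        return (len(suffixes) >= 2
                and suffixes[-1] in EXECUTABLE
                and any(s in MEDIA for s in suffixes[:-1]))

    print(looks_disguised("Video Dream MachineAI.mp4.exe"))  # -> True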

Meet Noodlophile: The New Kid on the Block

Noodlophile Stealer represents a new addition to the malware ecosystem. Previously undocumented in public malware trackers, this trojan combines multiple malicious capabilities:

Data Theft Capabilities

  • Browser credential harvesting from all major browsers
  • Cryptocurrency wallet exfiltration targeting popular wallets
  • Session token theft for account takeover attacks
  • File system reconnaissance to identify valuable data

Communication Method

Like its cousin Octalyn Stealer, Noodlophile uses Telegram bots for data exfiltration. The malware communicates through Telegram’s API, making detection more challenging since the traffic appears legitimate to most monitoring tools.

The XWorm Connection

In many cases, Noodlophile doesn’t work alone. Researchers discovered that the malware often deploys alongside XWorm 5.2, a remote access trojan that provides attackers with deeper system control. This combination creates a particularly dangerous infection that can:

  • Steal credentials and sensitive data (Noodlophile)
  • Maintain persistent remote access (XWorm)
  • Propagate to other systems on the network
  • Deploy additional malware payloads
Diagram: Noodlophile attack flow – from Facebook groups (62K+ views) and fake AI platforms to data theft via Telegram, with XWorm 5.2 providing remote access, persistence, and propagation.

Technical Analysis: Under the Hood

Security researchers discovered that Noodlophile employs sophisticated obfuscation techniques to evade detection. The malware uses approximately 10,000 repeated instances of meaningless operations (like “1 / int(0)”) to break automated analysis tools while remaining syntactically valid.

Key Technical Indicators

The malware communicates with command-and-control servers through several domains and IP addresses:

  • C2 Domains: lumalabs-dream[.]com, luma-dreammachine[.]com
  • Telegram Integration: Uses bot tokens for data exfiltration
  • XWorm C2: 103.232.54[.]13:25902
  • File Names: Various ZIP archives with AI-themed names

The Vietnamese Connection

Investigation into the malware’s origins suggests the developer is likely of Vietnamese origin, based on language indicators and social media profiles. The threat actor has been observed promoting this “new method” in cybercrime forums, advertising Noodlophile as part of malware-as-a-service (MaaS) schemes alongside tools labeled “Get Cookie + Pass” for account takeover operations.

The Noodlophile developer is likely of Vietnamese origin

Why This Campaign is Different

What makes this campaign particularly concerning is its exploitation of legitimate technological trends. Unlike traditional malware campaigns that rely on obviously suspicious lures, this operation targets users genuinely interested in AI technology – a demographic that includes creators, small businesses, and tech enthusiasts who might otherwise be security-conscious.

The use of Facebook groups with tens of thousands of views demonstrates the campaign’s reach and sophistication. By leveraging social proof and viral marketing techniques, the attackers have created a self-sustaining distribution network that continues to attract new victims.

Signs of Infection

If you’ve recently downloaded “AI-generated” content from suspicious platforms, watch for these warning signs:

  • Unexpected network activity, especially connections to Telegram servers
  • Browser settings or saved passwords changing unexpectedly
  • Cryptocurrency wallet balances decreasing
  • Unknown processes running with network access
  • Antivirus alerts mentioning Noodlophile or XWorm
  • Unusual system performance or unexpected file modifications

How to Remove Noodlophile Stealer

If you suspect your system is infected with Noodlophile Stealer:

Immediate Actions

  1. Disconnect from the internet to prevent further data exfiltration
  2. Boot into Safe Mode to limit malware functionality
  3. Run a complete system scan with updated anti-malware software
GridinSoft Anti-Malware main screen

Download and install Anti-Malware. After the installation, run a Full scan: this will check all the volumes present in the system, including hidden folders and system files. Scanning will take around 15 minutes.

After the scan, you will see the list of detected malicious and unwanted elements. It is possible to adjust the actions that the antimalware program does to each element: click "Advanced mode" and see the options in the drop-down menus. You can also see extended information about each detection - malware type, effects and potential source of infection.

Scan results screen

Click "Clean Now" to start the removal process. Important: removal process may take several minutes when there are a lot of detections. Do not interrupt this process, and you will get your system as clean as new.

Removal finished

Post-Removal Steps

  • Change all passwords immediately, especially for financial and cryptocurrency accounts
  • Enable two-factor authentication on all critical accounts
  • Monitor financial accounts for unauthorized transactions
  • Check cryptocurrency wallets and consider transferring funds to new addresses
  • Review browser extensions and remove any suspicious additions

Prevention: Staying Safe in the AI Era

As AI technology continues to evolve, so will the tactics used to exploit our enthusiasm for it. Here’s how to protect yourself:

Red Flags to Watch For

  • Too-good-to-be-true AI tools offering premium features for free
  • Platforms requiring file uploads before showing capabilities
  • Social media promotion through viral posts rather than official channels
  • Download requirements for viewing “processed” content
  • Executable files disguised as media content

Best Practices

  • Stick to well-known, legitimate AI platforms with verified credentials
  • Be skeptical of AI tools promoted through social media groups
  • Never download executable files when expecting media content
  • Use reputable antivirus software with real-time protection
  • Keep your operating system and browsers updated

The Bigger Picture: AI as the New Attack Vector

The Noodlophile campaign represents a significant shift in cybercriminal tactics. As AI becomes mainstream, we can expect to see more attacks leveraging public interest in artificial intelligence. This trend mirrors how cybercriminals previously exploited interest in cryptocurrency, social media, and mobile apps.

The sophistication of these fake AI platforms – complete with convincing interfaces and viral marketing campaigns – demonstrates that cybercriminals are investing significant resources in this new attack vector. Organizations and individuals need to adapt their security awareness training to address AI-themed threats.

Industry Response

Security vendors are already updating their detection capabilities to identify Noodlophile and similar AI-themed threats. However, the rapid evolution of these campaigns means that user education remains the first line of defense.

The cybersecurity community is also working to identify and take down the infrastructure supporting these campaigns, including the fake domains and social media groups used for distribution.

The Bottom Line

Noodlophile Stealer serves as a wake-up call about the dark side of AI adoption. While artificial intelligence offers incredible opportunities for creativity and productivity, it also provides new avenues for cybercriminals to exploit our enthusiasm and trust.

The key to staying safe is maintaining healthy skepticism, especially when encountering “revolutionary” AI tools that seem too good to be true. Remember: legitimate AI companies don’t typically distribute their software through viral Facebook posts or require you to download suspicious executables.

If you suspect your system has been compromised by Noodlophile or any other malware, don’t wait. Download GridinSoft Anti-Malware and run a complete system scan immediately.


In the age of AI, the old cybersecurity adage remains true: if something seems too good to be true, it probably is. Stay vigilant, stay informed, and remember that the most sophisticated AI tool is still your own critical thinking.

Slopsquatting: New Malware Spreading Technique Targeting AI Assisted Developers

Slopsquatting is a new type of cyber threat that takes advantage of mistakes made by AI coding tools, particularly LLMs that can “hallucinate”. In this post, we’ll break down this new type of attack, find out why it can occur, dispel some myths, and figure out how to prevent it.

Slopsquatting – New Techniques Against AI Assisted Devs

Slopsquatting is a supply chain attack that leverages AI-generated “hallucinations” — instances where AI coding tools recommend non-existent software package names. The term draws parallels with typosquatting, where attackers register misspelled domain names to deceive users.

In slopsquatting, however, the deception stems from AI errors rather than human mistakes. The term combines “slop”, referring to low-quality or error-prone AI output, and “squatting”, the act of claiming these hallucinated package names for malicious purposes.

It is a rather unexpected cybersecurity threat that exploits the limitations of AI-assisted coding tools, particularly large language models. As developers increasingly rely on these tools to streamline coding processes, the risk of inadvertently introducing malicious code into software projects grows. Hackers can then create malicious packages with these fake names and upload them to public code repositories.

Mechanics of Slopsquatting

The process of slopsquatting unfolds in several stages. First, LLMs, such as ChatGPT, GitHub Copilot, or open-source models like CodeLlama, generate code or suggest dependencies. In some cases, they recommend package names that do not exist in public repositories like PyPI or npm. These hallucinated names often sound plausible, resembling legitimate libraries (e.g., “secure-auth-lib” instead of an existing “authlib”).

Python Package Index (PyPI), along with npm, are leveraged by cybercriminals with threatening frequency. Read our report on one of the latest PyPI typosquatting incidents.

Stage 2 begins when developers, fully trusting the AI’s recommendations, run the code, assuming all the packages it refers to are legitimate. Normally, this ends with build failures or broken functionality in parts of the resulting program. Developers might waste time debugging errors, searching for typos, or trying to figure out why a dependency isn’t resolving, when the real cause is simply that the AI assistant hallucinated a package that does not exist.

In the worst-case scenario, which is becoming more and more prevalent, the hallucinated name is already taken by a malicious repository. Threat actors specifically register false names that appear in AI-generated code, or pick names similar to what an AI is likely to generate, hoping to catch victims later. As a result, what looks like a flawless build process in fact installs malware on the developer’s system.

The worst part is that these hallucinations are not random, and are thus predictable. A study analyzed 16 LLMs, generating 576,000 Python and JavaScript code samples. It found that 19.7% (205,000) of recommended packages were non-existent. Notably, 43% of these hallucinated packages reappeared in 10 successive runs of the same prompt, and 58% appeared more than once, suggesting a level of predictability that attackers can exploit.

Python vs JavaScript hallucination rates

This is where the fun begins. Cybercriminals identify these hallucinated package names, either by analyzing AI outputs or predicting likely hallucinations based on patterns. They then create malicious packages with these names and upload them to public repositories. This is the worst version of the scenario described in the two paragraphs above.

As a result, this introduces malware into developer’s projects, which can compromise software security, steal data, or disrupt operations. In some cases, it can even serve as a backdoor for future attacks, allow lateral movement across systems, or lead to the compromise of an entire software supply chain.

Prevalence and Variability Across AI Models

The frequency of package hallucinations varies a lot depending on the AI model. Open-source models, such as CodeLlama and WizardCoder, tend to hallucinate more often, with an average hallucination rate of 21.7%. For example, CodeLlama hallucinated over 33% of the time. On the other hand, commercial models like GPT-4 Turbo perform much better, with a hallucination rate of just 3.59%. In general, GPT models are about four times less likely to hallucinate compared to open-source ones.

Hallucination rates of recent vs. all-time data sets

When it comes to how believable these hallucinations are, around 38% of the fake package names are moderately similar to real ones, and only 13% are just simple typos. That makes them pretty convincing to developers. So, even though commercial models are more reliable, no AI is completely immune to hallucinations—and the more convincing these fakes are, the bigger the risk.

Potential Impact

Even as companies downsize staff in favor of AI, slopsquatting shows that a complete replacement of humans by artificial intelligence is unlikely to happen anytime soon. If a widely used AI tool keeps recommending a hallucinated package, attackers could use that to spread malicious code to numerous developers, making the attack much more effective.

Another issue is trust: developers who rely on AI tools don’t always double-check whether a suggested package is legitimate, especially when they’re in a hurry. That trust makes them more vulnerable.

While no slopsquatting attacks had been confirmed in the wild as of April 2025, the technique is regarded as a realistic future threat, much like typosquatting, which began as a theoretical concern and became a widespread problem. The risk is compounded by rushed security review of AI tooling, something OpenAI has been criticized for. As AI tools become a bigger part of development workflows, the potential damage from slopsquatting keeps growing.

Preventive Measures Against Slopsquatting

To reduce the risk of slopsquatting, developers and organizations can take several practical steps. First, verify any package an AI recommends: check that it actually exists in the official repository, and review signals such as download numbers, release history, and the maintainer’s track record. A minimal sketch of such a check follows below.
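
The sketch below uses PyPI’s public JSON metadata API (https://pypi.org/pypi/&lt;name&gt;/json). The second package name in the example is a hypothetical placeholder standing in for a hallucinated dependency, and real vetting should also weigh download statistics and release history, which this endpoint does not provide.

```python
import requests

def vet_package(name: str) -> None:
    """Query PyPI's public metadata endpoint for a suggested package.

    A 404 means the name is unregistered on PyPI -- either a typo
    or an AI hallucination, and exactly the gap slopsquatting abuses.
    """
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 404:
        print(f"[!] {name}: not on PyPI - possible hallucination, do not install")
        return
    resp.raise_for_status()
    info = resp.json()["info"]
    # Brand-new projects with empty metadata deserve extra scrutiny.
    print(f"[+] {name}: v{info['version']}, "
          f"author: {info.get('author') or 'unknown'}, "
          f"summary: {(info.get('summary') or 'n/a')[:60]}")

# Hypothetical dependency list copied from an AI-generated snippet:
for pkg in ["requests", "flask-rest-tokenlib"]:
    vet_package(pkg)
```

Running a check like this in CI for every newly added dependency turns package verification from a habit into a guarantee.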

Good code review practices are essential too: catching odd or incorrect suggestions before the code ships can save a lot of headaches. On top of that, developers should be trained to stay aware of the risks of AI hallucinations and never blindly trust everything the AI spits out.

Having runtime security measures in place can help detect and stop malicious activity from any compromised dependencies that do sneak through. I recommend GridinSoft Anti-Malware as a reliable solution for personal security: its multi-component detection system will find and eliminate even the most elusive threats, regardless of how they are introduced. Download it by clicking the banner below and get yourself proper protection today.

The Alarming Rise of DeepSeek Scams
https://gridinsoft.com/blogs/deepseek-scams/ | Wed, 05 Feb 2025

The release of the DeepSeek AI chatbot has given a push to an enormous number of DeepSeek scams that lure users into a variety of shady activities. Some simply charge money for services that are free by design; others collect users’ personal information or even infect them with malware. In this article, I share my in-depth analysis of these sites, explain the risks of using them, and show how to recognize them early on.

What are DeepSeek Scams?

It is hard to overstate the sensation that the DeepSeek R1 model caused on release. It held the #1 spot in newsletter headlines for almost a week, though sometimes for its security issues and database leaks. And just a few days after the model’s public release, fraudulent actors began trying to earn their dirty coin on the news.

List of domains involved in DeepSeek scams

In a snap of the fingers, hundreds of websites were registered, nearly identical in appearance and content. “Access the world’s most innovative AI model for a small fee” is the motto that unites them all. The mere fact that someone charges money for a thing available for free is already dubious.

You may feel déjà vu at this point: the exact same thing happened upon the public release of ChatGPT, the “first” AI chatbot of them all. Just as with today’s DeepSeek scams, con actors took the free model and resold it as “paid” GPT-4 access at a corresponding price. But this time, the scope is much more extensive.

Some of the past scams included fake ChatGPT apps, and fraudulent activity around DeepSeek is quite likely to take the same route.

What the GridinSoft network security analysis team has observed so far is over 800 domains registered less than a week prior, all dedicated to DeepSeek and related topics. Some offer access to the model despite it being free, some ride the hype to promote scam crypto tokens, and others spread outright malware. Quite a few of them are taken down within days of going online. Let’s have a closer look at what they are and how they work.

How do DeepSeek Scams Work?

DeepSeek scams rely on two key things: users’ lack of awareness of potential threats, combined with the rush to get the new technology. This is what allows even the most ridiculous tricks to work. In that state, people don’t question even outright dodgy and strange things, let alone well-designed scams.

Scam campaigns typically start with promotion on social media. Fraudsters buy ads or use spoofed accounts and bots to publish hundreds of posts, targeting as wide an audience as possible.

Example of a cryptocurrency scam website endorsed through a hacked Apple Twitter account

Upon opening any of these shady websites, users face an offer to log in and make a payment to start using the service. A crypto scam page will most likely ask for payment for a non-existent DeepSeek crypto token. In either case, money and personal information are the main focus.

Example of a crypto scam parasitizing on DeepSeek’s fame

With the “DeepSeek R1 access” websites, the scam is not immediately obvious. Such a site may really provide access to the model, but why would anyone pay for a free model? Even if such a service runs its own local model to dodge the official site’s staggering downtime, its capacity will be lower still. Setting up your own local copy is a much more cost-efficient option anyway.

For crypto scams that offer early staking in a “DeepSeek meme coin”, things are much more straightforward. Users pay money hoping to board the hype train early (probably motivated by the TrumpCoin and MelaniaCoin examples) and get nothing in return. The websites simply shut down after a few days, with the money and the alleged tokens gone.

There is also a small number of pages that spread quite literal malware, using a verification trick under different guises, from “show that you’re using a correct OS” to “verify you are human”. Upon clicking the button, a classic fake captcha page appears, asking the user to open PowerShell and execute a malicious script that was copied to the clipboard the moment they opened the website.

Some of the websites also offer APK files to install on your Android smartphone. The very idea of running a questionable installer already smells bad, and the absence of any legitimate explanation for the file adds even more suspicion; yet for users eager to get their hands on the new technology, these red flags are invisible.

A site distributing malware under the guise of DeepSeek AI

How to Recognize Scam DeepSeek Websites?

As of early February 2025, DeepSeek has no official pages other than deepseek.com. There are also no mobile applications for any platform, though I am sure they will appear in the future, much as OpenAI eventually released its ChatGPT app. The same is true for crypto tokens: DeepSeek as a company has never announced any cryptocurrency project.

With all that said, the conclusion is simple: any page other than the official site that offers such services is fraudulent to some extent. To verify whether an app or service is real, visit the vendor’s website and check its blog: if the service is real and legit, there will be information about it.

That news can sometimes be hard to find, though, especially when a website posts a lot of new content. To simplify the search, consider using the GridinSoft Website Reputation Checker: this free tool returns a verdict on a website’s legitimacy in under 30 seconds.

DeepSeek AI Data Leaked, Exposing User Data
https://gridinsoft.com/blogs/deepseek-ai-data-leak/ | Fri, 31 Jan 2025

Wiz Research discovered an exposed DeepSeek database containing sensitive information, including user chat history, API keys, and logs. It also exposed backend data with internal details about infrastructure performance. Yes, the unprotected data was lying openly on the public internet, which puts this well beyond a typical high-profile leak.

DeepSeek AI Data Breach: Over a Million Log Entries and Sensitive Keys Exposed

DeepSeek, a rapidly rising Chinese AI startup that became known worldwide in a matter of days for its open-source models, has found itself in hot water after a major security lapse. Researchers at Wiz discovered that DeepSeek left one of its ClickHouse databases publicly accessible on the internet, potentially allowing unauthorized access to sensitive internal data. Wouldn’t it be ironic if an AI company that claims to be smarter than humans couldn’t even secure its own database?

Plain-text chat messages from DeepSeek (source: Wiz Research)

The exposed database contained over a million log entries, including chat history, backend details, API keys, and operational metadata—essentially the backbone of DeepSeek’s infrastructure. API secrets, in particular, are highly sensitive because they act as authentication tokens for accessing services. If compromised, attackers could exploit these keys to manipulate AI models, extract user data, or even take control of internal systems.

How Was the Data Accessed?

DeepSeek’s system ran on ClickHouse, an open-source columnar database optimized for handling large-scale data analytics. The database was hosted at oauth2callback.deepseek[.]com:9000 and dev.deepseek[.]com:9000, and required no authentication to access. This means that anyone who discovered the exposed endpoints could connect and potentially extract or alter the data at will.

ClickHouse supports an HTTP interface, which allows users to run SQL queries directly from a web browser or command line without needing dedicated database management software. Because of this, any attacker who knew the right queries could potentially extract data, delete records, or escalate their privileges within DeepSeek’s infrastructure.
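
To illustrate how low the bar is, here is a minimal sketch of querying an unauthenticated ClickHouse server over that HTTP interface. The host is a placeholder (never probe systems you do not own), and note that ClickHouse’s HTTP interface conventionally listens on port 8123, while port 9000 mentioned above serves the native protocol.

```python
import requests

# Placeholder endpoint - ClickHouse's HTTP interface defaults to port 8123.
CLICKHOUSE_URL = "http://db.example.com:8123/"

def run_query(sql: str) -> str:
    """Execute a SQL statement via ClickHouse's HTTP interface.

    With authentication disabled, a plain GET request with a
    ?query= parameter is all it takes to read (or alter) data.
    """
    resp = requests.get(CLICKHOUSE_URL, params={"query": sql}, timeout=10)
    resp.raise_for_status()
    return resp.text

# The typical first steps of enumerating such an exposure:
print(run_query("SHOW DATABASES"))
print(run_query("SELECT database, name FROM system.tables LIMIT 10"))
```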

A sample of the leaked data

Wiz researcher Gal Nagli pointed out that while much of AI security discourse focuses on future risks (like AI model manipulation and adversarial attacks), the real-world threats often stem from elementary mistakes, like exposed databases.

As Nagli rightly notes, AI firms must prioritize data protection by working closely with security teams to prevent such leaks. If attackers had gained access to DeepSeek’s logs, they could have harvested API keys to exploit AI services. They could also have analyzed chat logs to extract user data and private interactions. Additionally, they might have manipulated internal settings to alter how the models operate.

So What Now?

Despite this high-profile failure, the service is thriving, as app download statistics from official app stores show. Still, beyond this incident, those concerned about data security have questions for the service. Its privacy policies are under investigation, particularly in Europe, over its handling of user data. As a Chinese AI company, DeepSeek is also being examined by U.S. authorities for potential national security risks.

Additionally, OpenAI and Microsoft suspect that DeepSeek may have used OpenAI’s API without permission to train its models via distillation—a process where AI models are trained on the output of more advanced models rather than raw data. The Italian data protection authority, Garante, recently demanded information on DeepSeek’s data collection practices, leading to its apps becoming unavailable in Italy. Meanwhile, Ireland’s Data Protection Commission (DPC) has made a similar request.

AI Deepnude Websites – Are they Safe & Trustworthy?
https://gridinsoft.com/blogs/ai-deepnude-sites-safe/ | Mon, 30 Dec 2024

The development of generative AI capable of creating images gave a predictable push to AI deepnude web services. The wish to remove clothing from photos of people nearby has been around for quite some time. But how safe are such services? And are they legal? Let’s find out together.

Are AI Deepnude Sites Safe & Legit?

First and foremost: yes, quite a lot of online AI deepnude services are totally real, and you will in fact get the undressed photo in return. The availability of open-source AI models allows plenty of entrepreneurs to get into this business, so there are quite literally dozens of such sites around.

Example of an AI deepnude website

Yet not all of these websites are safe or will do what they promise. Plenty of con actors see the rush toward AI undressing services and try to take their bite without offering any actual service, or by tricking users into shady activities. Let me walk you through the key risks you may face when trying to use AI deepnude websites.

Privacy risks

One of the main concerns with any online AI service, deepnude ones included, is privacy. Photos generated by the AI are kept on the website; there is no real way to enforce encryption, as they become part of the site’s content. Thus, pictures of someone you know may soon appear in the advertising materials of this or another deepnude service, and there is nothing you can do about it. Even if you file a GDPR data removal request, the edited pictures are likely stored separately and may not count as user-specific data.

AI deepnude sites data risks

A question that touches every site operating in such a spicy industry is data security. You upload pictures of someone you know and share your email address, nickname and, in some cases, even your location. One user’s data is not a big deal, but the data of thousands of people has far more impact and sells for a lot on shady marketplaces.

Not all AI undressing websites sell data, but it is nearly impossible to control whether they will. The general rule of thumb is to use a burner email and expose as little real data about yourself as possible. That is especially important given the next problem.

Ethical concerns

One major part of any dealings with someone’s naked photos is ethics. While consensual photos of that nature are not a problem at all, AI-generated ones are a different story. You can face significant backlash or even legal trouble for generating such images and sharing them publicly. Even if one generated the picture only for oneself, it may leak to the public because of how the service operates.

Another ethical aspect is the potential for malicious misuse of deepnude technologies. Blackmail messages threatening to publish compromising images of the victim are common. Where such threats once had nothing backing them up, nowadays wannabe hackers can really generate nude content featuring the victim and start posting it online. Sure, it may be fairly easy to tell it is an AI-generated image, but it is hard to overstate how unpleasant and dirty the situation is.

We have several articles that take a deep dive into scammers blackmailing people with threats of posting their explicit photos online; consider checking them out.

AI deepnude scams

The most critical issue with AI deepnude generators is the wide variety of scams the industry is riddled with. A huge influx of popularity, along with a poor understanding of how such services should work, makes it an ideal field for fraudsters.

Asking for money, returning nothing. One of the most common and obvious scams is taking money for generating a picture and returning nothing, or a subpar image. Bad service quality is compounded by the inability to get your money back. To fly under the radar of payment systems, such sites ask you to pay for a seemingly unrelated item on a separate website, and this is exactly what makes reversing the payment impossible.

It is worth noting that even legit deepnude services use this scheme. Payment systems like PayPal or Venmo, along with banks, refuse to work with shady businesses such as undressing services. As a result, these services are forced to ask users to pay indirectly, for example by purchasing a listing they’ve created on a different site, thereby spoofing the payment purpose.

Collecting excessive amounts of user data. Another way of defrauding customers is asking for excessive amounts of information during registration. The resulting images may well be of subpar quality, but the frauds will have every sensitive detail about you before you can even try the site. All such services collect user data to some extent, but only malicious ones ask for far too much and clearly aim to sell the data later.

Demanding that the user install applications. One monetization route AI deepnude websites may pick is offering users certain apps or browser extensions to install. While some of these apps may be safe and legit, you are far more likely to get something shady and unwanted. Adware, browser hijackers or even scareware may spread through undressing AI sites, and you never know exactly what you are being offered to download.

How Do I Tell Whether an Undressing AI Website Is Safe?

To see whether a website you have found is trustworthy, consider using our free Website Reputation Checker. This web utility performs comprehensive checks and returns a clear verdict on whether any questionable activity is going on.

Without special tools, it is particularly hard to tell whether an AI deepnude site is trustworthy before using it. Risking money is not an option for many, and sticking to free options exposes you to even higher risk. That is why the Website Reputation Checker is the best choice in this situation.

For continuous protection, though, I would recommend installing GridinSoft Anti-Malware. Its web protection feature blocks shady websites the moment they try to open in your browser. Download it by clicking the banner below and enable Internet Security in the Protect tab; that will get you covered.

Fake ChatGPT Apps
https://gridinsoft.com/blogs/malicious-fake-chatgpt/ | Wed, 14 Feb 2024

The public release of ChatGPT caused a sensation back in 2022; it is no exaggeration to call it a game changer. However, scammers go wherever large numbers of people do. Fake ChatGPT services started popping up here and there, and they have not stopped even now. So what is a ChatGPT virus? How dangerous are these fakes? Let’s review the most notable examples.

Fake ChatGPT Sites: From Money Scams to Malware

The wave of hype around the public release of ChatGPT attracted a lot of attention, though not everyone could use the service right away. People in many countries were hunting for access to the novel technology, and it was obvious that rascals would find a way to scam those in a rush. This started the wave of malicious fake ChatGPT apps, which has since evolved into more sophisticated and diverse frauds.

Let’s talk about the typical profile of such a scam. The webpage usually has a strange URL containing the ChatGPT or OpenAI name, commonly registered on a cheap TLD such as .online or .xyz. The site itself is exquisitely simple, with minimal detail and only a few buttons to click. All activity on the website boils down to two things: downloading a file, or paying a sum of money that will never be seen again. A toy sketch of that profile as a URL heuristic follows below.
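
This is a toy illustration, not a real detector: the TLD list and brand keywords are assumptions for the example, and real scams rotate both constantly.

```python
from urllib.parse import urlparse

# Assumed lists for the example - real campaigns rotate TLDs and spellings.
SUSPICIOUS_TLDS = {"online", "xyz", "top", "site"}
BRAND_WORDS = ("chatgpt", "openai")

def looks_like_fake_chatgpt(url: str) -> bool:
    """Toy heuristic from the profile above: a brand name embedded
    in the hostname plus a cheap TLD is a strong hint of a scam page."""
    host = (urlparse(url).hostname or "").lower()
    tld = host.rsplit(".", 1)[-1]
    squashed = host.replace("-", "")  # catch chat-gpt-style spellings
    return any(word in squashed for word in BRAND_WORDS) and tld in SUSPICIOUS_TLDS

print(looks_like_fake_chatgpt("https://chat-gpt-pc.online/download"))  # True
print(looks_like_fake_chatgpt("https://chat.openai.com/"))             # False
```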

In some cases, frauds opt to spread mobile malware under the guise of a genuine OpenAI app. This was especially profitable before the official app was released, but such frauds continue even now. In the best case, they merely charge money for a cheap shell over the GPT-3.5 API, which is free. Worse cases include no functionality at all, chargeware behavior, or a spyware/infostealer hidden inside.

I will begin with examples of fake ChatGPT sites and apps that spread outright malware. A couple of them run a financial scam instead; you will see those at the end.

Chat-gpt-pc[.]online

Probably one of the earliest malicious fake ChatGPT sites, detected a year ago, in early February 2023. On a fairly nicely designed site, frauds offered to download a desktop client for the chatbot. For people unaware that the original chat is available only on OpenAI’s website, this looked like a legit offer. However, upon downloading and installing the supposed client, defrauded folks were infected with the RedLine stealer. Most instances were promoted through Facebook ads and groups and, in some regions, via SEO poisoning.

The openai-pc-pro.online fake ChatGPT site

Openai-pc-pro[.]online

One more malicious website that copies the design of the original OpenAI page and effectively repeats the first one on our list. Sporting the same page design, it offered to download a “desktop client” for the chatbot. As you may guess, the downloaded file contained malware, specifically RedLine Stealer. Since both were promoted from the same ChatGPT-themed Facebook group, I suspect they belong to the same malware-spreading campaign.

Chatgpt-go[.]online

A malicious website that copied the design of the original OpenAI page with the ChatGPT dialogue box, but without the usual input prompt. In its place was a button labeled “TRY CHATGPT” that triggered a malware download; several other interactive elements across the site did the same. Among the payloads from that site, I detected Lumma Stealer and several clipper malware samples. The main promotion channel this time was malicious Google Ads.

Pay[.]chatgptftw[.]com

A fake ChatGPT page that contrasts with the three previous examples. Instead of spreading malware, it tries to gather users’ payment information. By mimicking a billing page that allegedly charges for access to the technology, the frauds collect a complete set of banking info, along with usernames and email addresses. The promotion channels were the same: groups and ads on Facebook.

pay-chatgptftw.com fake payment form

SuperGPT (Meterpreter inside)

An example of malware disguised as the SuperGPT Android app, a legit AI assistant derived from the original GPT model. It was rather obvious that scoundrels would take advantage of poor app moderation on Google Play in this case; the only questions were where and how. On the surface, the app looks the same as the original, but it in fact contains Meterpreter, a RAT/backdoor payload built here for Android.

AI Chatbot

A recent semi-scam iOS program that looks like yet another ChatGPT-style application. Even though there is an official app, and people are now more aware that GPT-3.5 is free, this thing does its job pretty well. It is hard to call it an outright scam or malware, since people give up their money deliberately. But $50 for access to the 3.5 model, along with a rather limiting interface, makes it a junky program to use.


ChatGPT1

Another example of malware targeting Android devices, but this one falls under the designation of chargeware. This peculiar mobile-specific type of malware earns money for its devs by draining users’ mobile accounts and bank cards through covert subscription services. ChatGPT1 does this by sending SMS messages to a premium number, each costing a pretty penny.

How to Detect and Avoid Malicious Fake ChatGPT Apps?

Even though OpenAI’s brainchild has been around for over a year now, it remains a profitable topic for frauds. Promises of access to a paid AI model for free or at a discount may sound attractive, but they inevitably come with drawbacks, ranging from softcore swindles to outright malicious tricks. Here are a few tips to follow whenever you encounter an AI-related service.

If the offer is too good to be true, it is most likely not true. Who would ever offer access to paid AI models at miserable prices? Even in legit cases, this still requires paying for API access, and shared accounts tend to lag and stall. Most of the time, though, frauds will take your money and give you a free or cheaper model, or nothing at all.

Be vigilant about the apps you download and install. A file from a shady site with a strange URL that is allegedly a desktop ChatGPT version screams red flags. Even if an offer seems legit but lives on a strange domain or Google Play listing, be careful with the files it spreads. Consider scanning such a download with our free Online Virus Scanner.
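
If you want a quick, local first step before uploading a suspicious file anywhere, computing its SHA-256 lets you look the sample up by hash instead of sharing the file itself. A minimal sketch follows; the file name in the comment is just an example.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Hash a downloaded installer in 1 MB chunks, so even
    large setup files never need to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python hash_check.py chatgpt-desktop-setup.exe
    print(sha256_of(sys.argv[1]))
```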

FraudGPT Offers Phishing Email Generation to Cybercriminals
https://gridinsoft.com/blogs/fraudgpt-phishing/ | Wed, 26 Jul 2023

It’s not just IT companies racing to develop AI-powered chatbots. Cybercriminals have also joined the fray. Recent reports indicate that a developer has built a dangerous AI chatbot called “FraudGPT” that enables users to engage in malicious activities.

Earlier this month, security experts uncovered a hacker working on WormGPT, a chatbot that enables users to create viruses and phishing emails. Recently, another malicious chatbot, FraudGPT, was detected being sold on various dark web marketplaces and through Telegram accounts.

About FraudGPT

A harmful AI tool called FraudGPT has been created as a malicious alternative to the well-known ChatGPT. It is intended to aid cybercriminals in their illegal endeavors by giving them improved techniques for launching phishing attacks and developing malicious code.

The same group that developed WormGPT is suspected of creating FraudGPT. The group focuses on building various tools for different audiences, much like startups testing multiple approaches to identify their target market. So far, there have been no reported incidents of active attacks using FraudGPT.

FraudGPT goes beyond phishing attacks. It can be used to write harmful code, create hard-to-detect malware and hacking tools, and find weaknesses in an organization’s technology. Attackers can use it to craft convincing emails that make victims more likely to click harmful links, and to pinpoint and select their targets more accurately.

Screenshot of FraudGPT for sale on dark web forums

FraudGPT is being sold on various dark web marketplaces and the Telegram platform. It is offered through a subscription-based model, with prices ranging from $200 per month to $1,700 per year. However, it’s important to note that using such tools is illegal and unethical, and staying away from them is recommended.

FraudGPT Efficiency

There are concerns among security experts about the effectiveness of AI-powered threat tools like FraudGPT. Some experts argue that the features these tools offer are not substantially different from what attackers can achieve with ChatGPT. Additionally, there is limited research on whether AI-generated phishing lures are more effective than those created by humans.

Anti-fraud software detecting FraudGPT’s unethical behavior

It’s important to note that FraudGPT gives cybercriminals a new tool for carrying out multi-step attacks more efficiently. Advancements in chatbots and deepfake technology could enable even more sophisticated campaigns, compounding the challenges malware already presents.

It is unclear whether either chatbot can hack computers on its own. However, Netenrich warns that such technology makes it easier for hackers to create more convincing phishing emails and run other fraudulent schemes. The company also notes that criminals will always seek to enhance their capabilities by leveraging whatever tools are made available to them.

How to Protect Against FraudGPT

The advancements in AI offer new and innovative ways to approach problems, but prioritizing prevention is essential. Here are some strategies you can use:

  • Business Email Compromise-Specific Training
    Organizations should implement comprehensive and regularly updated training programs to combat business email compromise (BEC) attacks, particularly those aided by AI. Employees should be educated on the nature of BEC threats, how AI can worsen them, and the methods used by attackers. This training should be integrated into employees’ ongoing professional growth.

  • Enhanced Email Verification Measures
    Organizations should implement strict email verification policies to protect against AI-driven Business Email Compromise (BEC) attacks. These policies should include email systems that flag any message containing specific words associated with BEC attacks, for instance “urgent,” “sensitive,” or “wire transfer,” and systems that automatically detect when emails from external sources mimic internal executives or vendors. Together, these measures ensure that potentially harmful emails are thoroughly examined before anyone acts on them. A minimal sketch of such a filter follows this list.
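
As a rough illustration of the flagging policy described above, here is a minimal sketch. The keyword list, the executive directory, and the message format are all assumptions for the example; a production mail filter would be far more nuanced.

```python
import re

# Trigger phrases named above, plus a couple of common BEC markers;
# a real filter would be far more nuanced (and localized).
BEC_KEYWORDS = ["urgent", "sensitive", "wire transfer",
                "gift card", "payment update"]

def flag_bec_phrases(subject: str, body: str) -> list[str]:
    """Return the suspicious phrases found in an email, so the
    message can be routed for extra verification before anyone acts."""
    text = f"{subject}\n{body}".lower()
    return [kw for kw in BEC_KEYWORDS if kw in text]

def sender_mimics_executive(sender: str, executives: dict[str, str]) -> bool:
    """Flag external senders whose display name matches an internal
    executive -- the display-name spoofing trick BEC relies on."""
    match = re.match(r'\s*"?([^"<]+)"?\s*<([^>]+)>', sender)
    if not match:
        return False
    display, addr = match.group(1).strip(), match.group(2).lower()
    return display in executives and executives[display] != addr

# Hypothetical usage:
execs = {"Jane Doe": "jane.doe@example.com"}
print(flag_bec_phrases("URGENT: wire transfer needed", "Please act now."))
print(sender_mimics_executive('"Jane Doe" <jd@evil.example>', execs))
```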
