AI Didn't Replace Hackers. It Built Them an Assembly Line.
Mandiant's M-Trends 2026 data shows AI in the attack chain at every stage — but the breaches still start with the same old failures. The uncomfortable truth is both things are true at once.
Twenty-two seconds. That's the median time between an initial access broker gaining entry to a corporate network and a ransomware affiliate beginning their work inside it. In 2022, that handoff took over eight hours. Mandiant's M-Trends 2026 report, drawn from more than 500,000 hours of frontline incident response, clocked it at 22 seconds last year.
That number broke something in my mental model. Eight hours gives a SOC team a chance — maybe they catch the alert, maybe they isolate the host, maybe the attacker trips a canary. Twenty-two seconds means the alert hasn't finished populating in the SIEM before the next stage is already running. It means the handoff isn't a human reading a forum post and buying access. It's automated.
AI made that automation possible.
What the Data Actually Shows
M-Trends 2026 is Mandiant's annual report on what they saw across their global incident response practice. It covers incidents investigated throughout 2025 and represents the closest thing we have to ground truth about how attacks actually work (as opposed to how we think they work, or how vendors claim they work in their marketing decks).
Here are the numbers that matter:
Time-to-exploit has gone negative. The mean time-to-exploit for high-value vulnerabilities is now negative seven days, meaning exploitation routinely begins before vendors issue patches. Not "within hours of disclosure" — before disclosure. The share of CVEs exploited within 24 hours of public disclosure climbed to 28.3%, up from already alarming figures in previous years. The entire concept of "patch within 72 hours" as a security strategy assumes a positive number in that column. The number is no longer positive.
80% of ransomware attacks incorporate AI tooling. Not "use AI to write the malware" — incorporate AI at some stage of the kill chain. That includes reconnaissance, target profiling, payload customization, social engineering personalization, and evasion tuning. The AI isn't doing the hacking. It's greasing every joint in the pipeline.
Named malware families now query LLMs mid-execution. This is the finding that got under my skin. Mandiant documented three specific malware families that actively use AI during operation:
- QUIETVAULT is a credential stealer that, once on a compromised machine, checks for local AI command-line tools — Ollama, LM Studio, llama.cpp, anything with a model loaded. If it finds one, it executes predefined prompts against the local model to search for configuration files, API keys, and credentials. Your local AI, the one you set up for privacy, is being interrogated by malware to help it find your secrets.
- PROMPTFLUX queries large language models mid-execution to dynamically modify its own evasion behavior. It generates variant strings, rewrites registry key names, and alters its network signatures based on LLM output — making static signatures nearly useless against it.
- PROMPTSTEAL operates similarly, using LLM queries to adapt its exfiltration patterns based on the specific environment it finds itself in.
These aren't proof-of-concept demos from a conference talk. They're in the wild. Mandiant found them during real incident response engagements.
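QUIETVAULT's trick only works because plaintext secrets are easy to find once something can read your files. You can audit for the same material before malware does. The following is a minimal defender-side sketch — the file globs and secret patterns are illustrative assumptions, not details from the report, and a real audit would cover far more formats:

```python
import re
from pathlib import Path

# Illustrative patterns for common plaintext secret formats.
# Extend this table for whatever your environment actually uses.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9_\-]{20,}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in a blob of text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

def audit_directory(root: str, globs=("*.env", "*.json", "*.yaml", "*.toml")) -> dict[str, list[str]]:
    """Scan likely config files under `root` for plaintext secrets."""
    findings = {}
    for pattern in globs:
        for path in Path(root).rglob(pattern):
            try:
                hits = scan_text(path.read_text(errors="ignore"))
            except OSError:
                continue  # unreadable file; skip rather than crash the audit
            if hits:
                findings[str(path)] = hits
    return findings
```

Anything this finds in your home directory is exactly what a QUIETVAULT-style stealer would ask your local model to locate — move it into a proper secrets manager.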
The Paradox Nobody Wants to Sit With
Here's the part where I'd normally write "this changes everything." But Mandiant's own researchers said something more interesting: they do not consider 2025 to be the year in which breaches were the direct result of AI.
Read that again. Eighty percent of ransomware uses AI tooling. Malware queries LLMs at runtime. Exploits arrive before patches. The handoff takes 22 seconds. And Mandiant still says the vast majority of successful intrusions stem from "fundamental human and systemic failures."
Unpatched software. Reused credentials. Misconfigured cloud environments. Phished employees. The same failures we've been writing about for a decade.
The AI isn't the thing that gets attackers in the door. It's the thing that makes what happens after they get in faster, more efficient, and harder to detect. The initial access vector in most of Mandiant's 2025 cases was still exploitation of a known vulnerability that should have been patched, or a credential that shouldn't have been reused, or a phishing email that someone clicked.
This is genuinely uncomfortable to hold in your head at the same time: the attack chain is AI-powered AND the underlying cause is still basic hygiene failures. Both are true. Neither cancels the other out.
What Changed Is the Economics
The way I make sense of it is this: AI didn't give attackers a new weapon. It gave them a factory.
Before, a ransomware operation needed humans at every step. A human to find the initial access. A human to buy it on a forum. A human to examine the target. A human to customize the payload. A human to handle the lateral movement. A human to deploy the encryptor. Each human was a bottleneck, a cost center, and a potential point of failure.
Now, the humans are still there — but they're managing automated systems. The initial access gets advertised and sold programmatically. Target profiling is AI-assisted. Payload customization is AI-assisted. The handoff is automated. The evasion is AI-assisted. One ransomware crew can run 50 simultaneous operations because AI handles the commodity work.
The Akira ransomware group (which Mandiant tracked as REDBIKE) was the most commonly observed ransomware variant in 2025, followed by Qilin (AGENDA). Both run what are effectively platform businesses. They have affiliate programs. They have toolkits. They have support infrastructure. AI makes each affiliate more productive, the same way it makes a legitimate sales team more productive — by automating the repetitive parts and personalizing the outreach.
This is what I mean by "assembly line." Henry Ford didn't invent the car. He made building them cheap and fast. The attackers didn't invent ransomware. AI made running it cheap and fast.
The Social Engineering Piece
One finding from M-Trends that deserves its own callout: both nation-state and financially motivated groups are using LLMs to move beyond mass phishing toward what Mandiant calls "hyper-personalized, rapport-building social engineering."
Think about what that means in practice. Instead of a spray-and-pray phishing campaign with obvious grammatical errors, you get an email that references your actual projects, uses the right internal terminology, mirrors the tone of your real colleagues, and arrives at a plausible time. The AI has read your LinkedIn, your GitHub commits, your conference talks. It knows what would make you click.
This is also why the "22-second handoff" is possible at volume. If your social engineering is personalized enough, your hit rate goes up. If your hit rate goes up, you can afford to automate the post-compromise steps because you have enough successful intrusions to justify the automation investment. The AI doesn't just accelerate each step — it creates a positive feedback loop between them.
What You Actually Do
I want to be honest: the defensive playbook for "attackers are faster and more automated" is not a checklist you implement on a Saturday afternoon. Some of it is organizational. Some of it is architectural. But there are things individuals and small teams can do.
- Accept that patch-within-days is no longer sufficient for internet-facing systems. If time-to-exploit is negative for high-value CVEs, your exposure window starts before you even know the vulnerability exists. This means reducing attack surface — taking things off the internet, putting them behind VPNs or zero-trust proxies, minimizing what's exposed. You can't patch what you don't know about yet, but you can make it unreachable.
- Treat local AI tools as attack surface. QUIETVAULT specifically targets machines with local LLMs. If you run Ollama or LM Studio, those tools should not have unrestricted access to your filesystem. Run them in containers. Don't store API keys or credentials in plaintext config files on the same machine. Assume that if your machine is compromised, the first thing malware will do is ask your own AI where the good stuff is.
- MFA is no longer optional — but neither is token hygiene. The 22-second handoff often starts with a stolen session token or credential. MFA helps at the front door but does nothing once a token is live. Enforce short token lifetimes. Monitor for impossible-travel patterns on active sessions. Use hardware-bound credentials (passkeys, FIDO2) where you can, because those can't be exfiltrated by an infostealer.
- Assume your phishing training is out of date. If attackers are using AI to personalize social engineering, the "look for typos and urgent language" advice is from a different era. The new tell isn't grammar — it's context. Train people to verify out-of-band when they receive any request that involves credentials, money, or access changes, regardless of how legitimate it looks.
- If you run a SOC, your detection window just shrank to seconds. Twenty-two seconds means you need automated response, not automated alerting. The alert that fires 30 seconds after initial access is already too late for human triage. This is where EDR auto-isolation, network microsegmentation, and pre-authorized containment actions matter.
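The impossible-travel check from the token-hygiene point above is simple enough to sketch: compare consecutive uses of the same session token and flag any implied speed no traveler could achieve. The event shape and the 900 km/h threshold (roughly airliner speed) are assumptions for illustration, not a standard:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev: dict, curr: dict, max_kmh: float = 900.0) -> bool:
    """Flag a session token seen in two places faster than a plane could fly.

    `prev` and `curr` are events for the same token, each with keys
    `ts` (epoch seconds), `lat`, and `lon` (assumed schema).
    """
    hours = (curr["ts"] - prev["ts"]) / 3600
    if hours <= 0:
        return True  # token reused with zero (or negative) elapsed time
    km = haversine_km(prev["lat"], prev["lon"], curr["lat"], curr["lon"])
    return km / hours > max_kmh
```

A token used in New York and then in London twenty minutes later flags immediately; the same token reused from the same city an hour later does not. In production you'd feed this from your IdP's session logs and wire the positive case into an automated revoke, not a ticket queue.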
Where This Leaves Us
The prediction industry spent 2024 and 2025 warning that AI would revolutionize cyberattacks. The data says they were right about the mechanism and wrong about the nature of the revolution.
AI didn't give attackers some qualitatively new ability to break into systems. It made the existing playbook run at industrial scale. The attacks still start with the same failures. They just finish faster, hit more targets simultaneously, and adapt better to whatever they find.
Mandiant's researchers landed on a phrase I keep turning over: "fundamental human and systemic failures." That's what still causes breaches. AI just makes the exploitation of those failures brutally efficient.
If you're looking for a reason to finally fix the basics — patch the known vulns, rotate the stale credentials, segment the flat network, replace the password with a passkey — this is it. Not because AI attacks are some exotic new threat that requires exotic new defenses. Because the margin for error on the basics just went from "days" to "seconds," and the assembly line doesn't take weekends off.
Sources:
- Mandiant M-Trends 2026 — Google Cloud Blog
- Help Net Security — Attackers are handing off access in 22 seconds
- The Hacker News — 2026: The Year of AI-Assisted Attacks
- SecurityWeek — M-Trends 2026: Initial Access Handoff Shrinks From Hours to 22 Seconds
- National Law Review — Mandiant M-Trends 2026 Report: Threat Actors Using AI in Attacks