Cybersecurity teams have entered 2026 with growing concern about how AI is changing day-to-day defense. According to the WEF’s Global Cybersecurity Outlook 2026, 93% of cyber experts believe AI is reshaping cybersecurity in their organizations. Meanwhile, 66% say AI is already having the greatest impact on phishing and social engineering.
That pressure is already showing up in the kinds of attacks reaching employees. AI is making it easier to produce cleaner language, more believable impersonation, and faster variations of the same lure. For security teams, that means the old signs of malicious intent are becoming less reliable on their own.
The challenge becomes more visible when you consider the current cyber threat landscape in the context of real inboxes. One 2026 threat intelligence report, based on user-reported emails, documented hundreds of thousands of attacks that bypassed security filters in the first half of 2025 alone.
Defenders are not only facing more messages. They are facing messages that arrive looking credible enough to survive both technical controls and a first human glance.
AI-Polished Phishing That Blends Into Daily Work
One of the most visible changes is the quality of phishing itself. Messages are now easier to write, cleaner in tone, and better aligned with real business workflows. That weakens old awareness guidance that told employees to focus mainly on spelling mistakes or awkward phrasing. Lures look more polished now, so users need a different decision model.
Training should move away from surface cues and focus on workflow verification. Employees should know how to validate payment changes, file-share requests, login prompts, and urgent requests that try to bypass established business processes. Detection logic should likewise move past superficial text cues and focus more on sender behavior, destination patterns, and requests that deviate from normal workflows.
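To make that shift concrete, here is a minimal sketch of behavior-based triage. All names here (`Message`, `SenderHistory`, `SENSITIVE_PHRASES`) are hypothetical; a production system would use trained classifiers and richer sender telemetry, not a static keyword list:

```python
from dataclasses import dataclass, field

# Hypothetical phrase list standing in for a request-type classifier.
SENSITIVE_PHRASES = ("payment change", "wire transfer", "reset your password", "urgent access")

@dataclass
class Message:
    sender_domain: str
    body: str

@dataclass
class SenderHistory:
    known_domains: set = field(default_factory=set)  # domains this user normally corresponds with

def risk_flags(msg: Message, history: SenderHistory) -> list:
    """Return behavioral risk flags instead of judging spelling or tone."""
    flags = []
    if msg.sender_domain not in history.known_domains:
        flags.append("first-contact sender")          # deviation from normal correspondents
    body = msg.body.lower()
    if any(p in body for p in SENSITIVE_PHRASES):
        flags.append("sensitive-request language")    # payment/credential/access requests
    return flags
```

A message that is both a first contact and a sensitive request gets routed to verification regardless of how polished its prose is, which is exactly the property surface-cue training lacks.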
Token Theft Through Adversary-in-the-Middle Phishing
The next trend is potentially even more damaging because it targets the session, not just the password, which makes it harder to detect and contain. Modern adversary-in-the-middle kits can intercept credentials and steal session tokens in real time, which reduces the protective value of weaker MFA methods such as one-time codes and simple push-notification approvals.
The result is that a user can complete what looks like a normal login and still hand over access. If a phishing kit can steal a live session, the control stack has to assume that both the password and the OTP may already be compromised. Phishing-resistant MFA, shorter token lifetimes, stronger session binding, and detections for session reuse or suspicious post-login behavior should now be part of the security baseline.
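As one illustration of session-reuse detection, here is a minimal sketch that flags a session token observed from more than one client IP inside a sliding window. The class name and 30-minute threshold are hypothetical; real detections would also compare ASN, geolocation, and device fingerprint rather than raw IPs alone:

```python
from collections import defaultdict
from datetime import datetime, timedelta

class SessionReuseDetector:
    """Flag a session token seen from multiple client IPs inside a sliding window."""

    def __init__(self, window: timedelta = timedelta(minutes=30)):
        self.window = window
        self.events = defaultdict(list)  # token -> [(timestamp, ip), ...]

    def observe(self, token: str, ip: str, ts: datetime) -> bool:
        """Record a sighting; return True if the token now looks replayed."""
        recent = [(t, i) for t, i in self.events[token] if ts - t <= self.window]
        recent.append((ts, ip))
        self.events[token] = recent
        return len({i for _, i in recent}) > 1  # same token, different clients = suspicious
```

Shorter token lifetimes complement this kind of check: the less time a stolen token stays valid, the smaller the replay window a detector has to cover.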
Voice Cloning and Executive Impersonation
AI-powered social engineering is also spreading beyond email. In May 2025, the FBI warned that malicious actors were impersonating senior U.S. officials through text messages and AI-generated voice messages to build trust and steer targets toward malicious links or platforms.
This is not an isolated case. In early 2024, a finance executive in Hong Kong was tricked into transferring millions of dollars to a scammer through a deepfaked video call with the company’s supposed CFO.
This is where process discipline matters. Security teams should formalize call-back procedures and out-of-band verification for sensitive approvals, wire transfers, and access requests. A voice note that sounds convincing is still only a message. Even videos can be deepfaked.
AI-Assisted Reconnaissance and Pretext Building
Not every AI-powered threat starts with malware or a phishing kit. In many cases, the advantage comes earlier, during reconnaissance. AI can help attackers gather public information, summarize company structures, identify likely approvers, and build more convincing pretexts from LinkedIn posts, job ads, press releases, and other documents available on the open web.
That makes social engineering attempts feel more informed and more specific, especially when they reference real projects, vendors, or internal roles. For security teams, this raises the value of reducing publicly exposed context that can be stitched into a believable story. Access reviews, approval workflows, and awareness training should account for messages that sound accurate because they are built from real fragments of company information.
The risk no longer involves a fake message with poor grammar. Today’s lures feel plausible because the attacker did their homework faster.
AI-Obfuscated Payloads and Phishing Code
AI is also complicating analysis by making malicious code look less recognizable at first glance. Recently, threat researchers discovered a credential phishing campaign that likely used AI-generated code to obfuscate a malicious SVG payload and slip past straightforward detection logic.
Defenders cannot rely only on static analysis. File behavior, process chains, infrastructure reuse, and downstream identity activity all matter more when malicious code starts looking less familiar on the surface.
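A static pattern check illustrates both the idea and its limit. The sketch below (with a hypothetical pattern list) flags script-bearing SVG content, but AI-obfuscated payloads are built to evade exactly this kind of surface matching. So a check like this should only route files toward sandboxing and behavioral analysis, never clear them:

```python
import re

# Example markers of active content in SVG; an obfuscated payload may hide all of them.
SUSPICIOUS_SVG_PATTERNS = [
    re.compile(r"<script", re.IGNORECASE),
    re.compile(r"javascript:", re.IGNORECASE),
    re.compile(r"\bon\w+\s*=", re.IGNORECASE),  # inline handlers such as onload=
]

def svg_needs_deeper_analysis(svg_text: str) -> bool:
    """Triage only: True means send to sandbox/behavioral analysis; False means inconclusive."""
    return any(p.search(svg_text) for p in SUSPICIOUS_SVG_PATTERNS)
```

The asymmetry is the point: a `False` here proves nothing, which is why process chains, infrastructure reuse, and post-delivery identity activity carry more weight than the file's surface.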
Phishing-as-a-Service at Greater Speed and Scale
The cyber threat landscape is also being reshaped by service-based cybercrime. In September 2025, Microsoft said it had seized 338 websites tied to RaccoonO365, a subscription-based phishing platform that had been used to steal at least 5,000 Microsoft 365 credentials across 94 countries since July 2024.
AI makes this model more potent, because even lower-skill operators can now improve message quality, translation, and impersonation. For defenders, that raises the value of faster reporting loops. User reports should not sit in a side queue but should feed directly into detection tuning and incident review.
Attacks on AI Systems Themselves
The final trend is broader than phishing, but it belongs in any current assessment of cyber threats. AI systems are no longer just helping defenders or attackers. They are also becoming part of the surface that needs protection.
Security teams should extend their controls to prompt logging, API key hygiene, model access governance, and environment separation for internal AI tools.
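As a sketch of the prompt-logging and key-hygiene piece, an internal AI gateway might record a hash of each prompt and redact anything shaped like a credential before the log is written. The function name is hypothetical, and the two key patterns (an OpenAI-style `sk-` key and an AWS access key ID) are examples only; a real gateway would use a broader secret-scanning rule set:

```python
import hashlib
import re
from datetime import datetime, timezone

# Example credential shapes; extend with your organization's secret-scanning rules.
KEY_PATTERN = re.compile(r"(sk-[A-Za-z0-9]{16,}|AKIA[A-Z0-9]{16})")

def audit_prompt(user_id: str, prompt: str, audit_log: list) -> dict:
    """Append a redacted, hashed record of the prompt to the audit log."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # full-text hash for forensics
        "prompt_redacted": KEY_PATTERN.sub("[REDACTED-KEY]", prompt),  # raw keys never reach the log
    }
    audit_log.append(record)
    return record
```

Logging the hash alongside the redacted text lets responders match a prompt to an incident later without the audit trail itself becoming a store of leaked secrets.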
Where Security Teams Should Focus
AI is helping attackers refine proven tactics and run them with more polish and less effort. Defense is getting harder because familiar attacks are becoming easier to scale.
Security teams should respond by strengthening phishing-resistant MFA, tightening session monitoring, formalizing verification for sensitive requests, and treating user-reported phishing as valuable operational intelligence.