The Internet in 2026: Where the Scams Have Better AI Than Your Company
If you have ever felt a creeping sense of dread while opening your inbox, congratulations on your excellent survival instincts. The year 2026 has officially become the most perilous time to exist online, and the culprit is the same technology we were told would revolutionise everything: artificial intelligence.
Welcome to the Fifth Wave
Cybersecurity firm Group-IB has dubbed the current era the 'fifth wave of cyber crime', and the label is grimly appropriate. According to their latest High-Tech Crime Trends Report, AI has handed criminals a toolkit that would make a Bond villain weep with joy. Phishing kits are now available on the dark web for roughly the cost of a Netflix subscription. Synthetic identity kits? From as little as five dollars. Forget learning to code; the barrier to entry for cyber crime is now lower than the barrier to getting a decent coffee in central London.
Dmitry Volkov, CEO of Group-IB, put it bluntly: 'AI is giving criminals unprecedented reach.' Former Interpol Director of Cybercrime Craig Jones went further, saying 'AI has industrialised cyber crime.' Neither man, it is fair to say, is being dramatic.
The Numbers Are Eye-Watering
Vectra AI's research found that AI-driven scams surged by a staggering 1,200 per cent in 2025. Let that sink in for a moment. That is not a typo, and it is not a rounding error (their precise figure is actually 1,210 per cent, if you fancy the extra ten).
Looking ahead, Deloitte's Center for Financial Services projects that losses from generative AI-enabled fraud alone could hit $40 billion by 2027, up from an estimated $12.3 billion baseline in 2023. That represents a compound annual growth rate of 32 per cent. For context, the FBI's Internet Crime Complaint Center reported $16.6 billion in total US cybercrime losses in 2024, covering everything from phishing to ransomware. The AI-specific slice of that pie is growing faster than almost anyone predicted.
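A quick back-of-the-envelope check of the compounding, using the dollar figures and growth rate from the Deloitte projection above (the small gap to the headline $40 billion figure is down to rounding in the reported baseline and rate):

```python
# Sanity-check the projection: a $12.3bn baseline in 2023,
# compounding at 32 per cent a year for four years (2023 -> 2027).
baseline_bn = 12.3   # estimated 2023 losses, in billions of dollars
cagr = 0.32          # reported compound annual growth rate
years = 4            # 2023 through 2027

projected_bn = baseline_bn * (1 + cagr) ** years
print(f"Projected 2027 losses: ${projected_bn:.1f}bn")  # roughly $37bn, in the ballpark of $40bn
```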
Deepfakes That Fool the Best of Us
Perhaps the most unsettling development involves deepfakes sophisticated enough to fool people in live video calls. Google's Threat Intelligence Group documented a case where hackers linked to North Korea used an AI-generated deepfake of a prominent cryptocurrency CEO to deceive a victim over a spoofed Zoom call. The target had no idea they were speaking to a digital puppet.
Google researchers also found that hackers have been using tools like Google's own Gemini AI to develop attack tooling, conduct research, and assist during reconnaissance. In short, the attackers are using our own tech against us.
Meet Promptflux: Malware That Rewrites Itself
Then there is Promptflux, a piece of malware identified by Google's Threat Intelligence team that uses a 'Thinking Robot' function to rewrite its own source code on an hourly basis. Google described it as 'a new operational phase of AI abuse, involving tools that dynamically alter behaviour mid-execution.'
Before you barricade yourself offline, a note of perspective: Google's researchers have stressed that Promptflux is still in its research and development phase and does not yet demonstrate the ability to compromise a victim's network or device. It is a proof of concept, not a finished weapon. But the direction of travel is clear, and it is not comforting.
What Can You Actually Do?
The honest answer is that vigilance remains your best defence. Be sceptical of unexpected video calls, verify identities through separate channels, and treat unsolicited links like you would a stranger offering sweets from a van. Multi-factor authentication, regular software updates, and a healthy dose of paranoia go a long way.
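Multi-factor authentication is worth a moment's illustration. The six-digit codes produced by authenticator apps follow the TOTP algorithm (RFC 6238): a shared secret and the current 30-second time window are fed through HMAC, then truncated to a short number. A minimal sketch in Python, standard library only (the secret and timestamp below are the published RFC 6238 test values, not real credentials):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA-1)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                       # index of the 30-second window
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59 seconds yields 94287082 (8 digits)
print(totp(b"12345678901234567890", for_time=59, digits=8))  # -> 94287082
```

Because the code changes every 30 seconds and never travels with your password, a phished password alone is not enough; the attacker would also need the current code before the window expires.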
The fifth wave of cyber crime is here. The least we can do is learn to swim.