Cyber experts warn AI will accelerate attacks and overwhelm defenders in 2026

Cybersecurity specialists are providing warnings in regards to the yr forward, and whereas they arrive from totally different backgrounds and firms, their predictions all level to AI altering the character of assaults, rushing up intrusions and forcing defenders to rethink how they work.
Most analysts count on 2026 to be the primary time that AI-driven incidents outpace what the vast majority of groups can reply to manually, and a few see the largest modifications coming from attackers who use absolutely autonomous methods.
Loris Degioanni, CTO and founding father of Sysdig, expects AI to develop into central to each side of the struggle: “For defenders, we’ll see end-to-end, agentic AI methods develop into normal for duties like vulnerability administration. We’ve already seen what’s doable: within the DARPA AI Cyber Problem, an autonomous system uncovered 18 zero-day vulnerabilities in 54 million strains of code, and patched 61 % of vulnerabilities in a median of 45 minutes with no single human within the loop. As for adversaries, we’ll see a surge in zero-days and automatic exploitation in 2026 as weaponizing “darkish AI” turns into the default methodology for attackers at scale. In flip, defenders will likely be compelled to struggle machine towards machine.”
AI threats at scale
One other concern is the best way attackers will use AI to function at scale. Rajeev Gupta, Co-Founder and CPO at Cowbell, warns that: “Whereas AI is revolutionizing cyber insurance coverage, it’s additionally empowering cybercriminals. The identical instruments used to streamline underwriting and claims are being weaponized to launch automated, scalable cyberattacks. These assaults require no human oversight and might repeatedly crawl, exploit, and deploy malware throughout methods. With funding cuts to key cybersecurity companies like CISA, the menace panorama might worsen, placing stress on insurers to evolve.”
AI-enhanced extortion additionally seems on the horizon. Derek Manky, Chief Safety Strategist and World VP Risk Intelligence at Fortinet: “GenAI will speed up information monetization and extortion: GenAI will develop into extra central to post-compromise operations. As soon as attackers achieve entry to giant datasets (by way of infiltration or by buying entry on the darkish internet), AI instruments will analyze and correlate huge volumes of information in minutes, pinpointing essentially the most priceless belongings for extortion or resale. These capabilities will allow adversaries to establish vital information, prioritize victims, and generate tailor-made extortion messages at scale. By automating these steps, attackers can rapidly rework stolen information into actionable intelligence, rising effectivity and profitability.”
Others assume compute theft will develop into commonplace. Michael Clark, Senior Director of Risk Analysis at Sysdig, expects criminals to pursue uncooked processing energy as AI workloads develop: “In 2026, compute energy will develop into the brand new cryptocurrency. As AI fashions develop hungrier for processing assets, menace actors — particularly these dealing with sanctions or restricted entry to chips — will start hijacking infrastructure to coach their very own giant language fashions (LLMs) and run autonomous AI brokers. The Sysdig Risk Analysis Group (TRT) first noticed LLMjacking in 2024, with attackers utilizing stolen credentials to achieve entry to a sufferer’s LLMs. This pattern will rework from attackers compromising entry for utilization to stealing compute energy. Enterprises ought to put together to mannequin GPU utilization and mannequin coaching exercise with the identical vigilance they as soon as held when watching community visitors for cryptojacking.”
Cyber weapons arms race
Some foresee wider political and social fallout. Bryan Cunningham, President of Liberty Protection, warns that deepfakes and autonomous brokers might attain a a lot bigger scale: “Each AI and QC will likely be used to create way more subtle cyber and important infrastructure assaults. Deep Pretend audio and video-enabled social engineering assaults will likely be commoditized and offered to allow anybody to conduct them. It additionally is probably going that autonomous AI brokers will likely be used to develop and deploy new assaults at mass scale and with little or no human involvement. After all, AI and QC will even be utilized by all sides to establish and defend towards evolving assaults. The US 2026 mid-term elections might be the primary wherein widespread deep fakes are used to attempt to sway votes and probably even to incite violence and chaos.”
Ransomware is one other space anticipated to speed up. Biren Patel, Senior Cyber Defender at Ontinue, describes shrinking timelines: “Most ransomware households can encrypt a system inside about quarter-hour. In 2026, that window will shrink even additional as attackers optimize their payloads. Organizations counting on guide investigation will be unable to maintain up. Automated enrichment, agentic AI assist, and fast decision-making will develop into obligatory to cease ransomware earlier than it spreads.”
Scams are additionally evolving. Alex Quilici, CEO of YouMail, expects AI-driven voice fraud to develop quickly: “AI supercharges voice scams (together with those that sound such as you) — Scammers used to wish huge name facilities to run large-scale fraud however not anymore. AI will deal with all of it. Generative instruments will write personalized texts, voice scripts, and emails, and even reply to victims in actual time. That can make scams quicker, cheaper, and tougher to hint. We’ll transfer from most robocalls connecting somebody to an individual to most robocalls connecting somebody to an AI bot, a minimum of at first. The excellent news is that the identical AI methods utilized by dangerous actors can be used to detect patterns, flag impersonation, and shut down fraud at scale (if corporations are proactive).”
Fortune 500 materials breach
Different specialists count on new types of breach altogether. Jason Soroko, Senior Fellow at Sectigo, believes organizations will lastly face one menace they’ve been warned about for years: “2026 will mark a milestone nobody desires: the primary publicly acknowledged Fortune 500 materials breach brought on by immediate injection. Corporations will deploy LLM-integrated methods with out guardrails, and adversaries will uncover how you can coerce these fashions into executing dangerous inner instructions or leaking delicate information.”
Defenders are additionally getting ready for heavy use of AI on their very own facet. Dan Zaniewski, CTO of Auvik, sees a change in on a regular basis operations: “The following part of AI in community operations gained’t be about changing people however about operationalizing AI so it supplies steady, reliable help — devices that automate routine duties whereas surfacing context and uncertainty for people to behave on. IT groups must be enthusiastic about instrumenting telemetry, establishing quick suggestions loops, and embedding AI-aware observability so AI turns into an operational benefit slightly than an experiment.”
AI written code issues
Krishna Vishnubhotla, VP of Product Technique at Zimperium, warns that AI-written code will change growth at a pace that many groups should not prepared for: “In 2026, the talents hole in cell safety will widen as AI-written code turns into the norm. AI will assist builders transfer quicker, however that introduces vulnerabilities at a scale most groups aren’t prepared for. Organizations that succeed will undertake AI-driven safety instruments to detect points rapidly, triage intelligently, and repair issues earlier than attackers exploit them. The abilities hole gained’t disappear, however AI-driven safety can bridge it and maintain cell apps resilient as growth pace accelerates.
He added, “Probably the most underestimated cell threat heading into 2026 is the pace at which AI helps groups ship insecure code. Practically half of AI-generated code incorporates safety flaws, and sixty-eight % of builders now spend extra time fixing vulnerabilities than constructing new options. AI will enhance, however not quick sufficient to maintain tempo to match adoption. Anticipate extra vulnerabilities, not fewer. Organizations that keep forward will repeatedly scan code and binaries, to defending vital belongings, as a result of pace means nothing if what you ship isn’t safe.”

Dipto Chakravarty, Chief Product Officer at Black Duck, expects longstanding approaches to safety to get replaced altogether: “The normal method to vulnerability administration and safety testing will likely be disrupted, primarily pushed by the rising adoption of AI in cybersecurity. The outdated software program world is gone, giving option to a brand new set of truths outlined by AI. Risk actors will leverage AI to automate and scale assaults, whereas defenders will use AI to reinforce detection and response capabilities.”
Tim Roddy, VP of Product Advertising and marketing at Zimperium, believes AI will take over early-stage work usually dealt with by junior analysts: “AI brokers will start to seem as assistants to pulling data from documentation, as assistants to flag anomalies requiring investigation and as Triage brokers to investigate incidents and monitor the assault chain and implement response that’s often performed by SOC personnel, usually on the first stage. This can pace up incident response instances and backbone from days to hours and maybe minutes. It can additionally scale back the necessity for entry stage 1 analysts which could have employment influence and restrict the pipeline to superior stage 3 analysts, which will likely be a long-term problem for the safety {industry}.”
Maintaining with AI attackers
Saeed Abbasi, Senior Supervisor of Safety Analysis on the Qualys Risk Analysis Unit, says menace searching will depend on AI merely to maintain up with attacker pace: “Proactive menace searching is not about discovering a menace ‘by no means seen earlier than.’ It is about trying to find the behaviors and patterns that attackers reuse. Attackers do not innovate; they iterate. They discover a weak product or a posh expertise and brutally exploit that whole class of software program till it turns into an industry-level legal responsibility.”
Alex Quilici, CEO of YouMail, additionally expects AI to reshape name safety: “Future call-blocking options is not going to simply detect suspicious calls however actively neutralize threats in actual time utilizing predictive AI fashions. This can embrace dynamic scoring methods for telephone numbers and automatic and fast takedown of impersonation campaigns.”
Morey J. Haber, Chief Safety Advisor at BeyondTrust, focuses on the necessity for organizations to anticipate change slightly than react to it: “Cybersecurity has all the time been a forward-looking self-discipline. By anticipating the place expertise, menace actors, and regulation are heading, we will higher defend our clients and assist the {industry} put together for what’s subsequent. Trying forward permits us to adapt quicker and switch perception into proactive safety motion. The way forward for cybersecurity isn’t nearly defending information, it’s about anticipating how digital and bodily worlds will proceed to collide.”
A number of specialists spotlight the continued significance of human judgment. Dave Gerry, CEO of Bugcrowd, warns that overconfidence in AI could make incidents tougher to interpret: “AI confidence can mislead — In 2026, AI-generated outputs will proceed to current data confidently, even when incorrect. As organizations depend on AI for effectivity, reviews on threats or incidents could also be confidently mistaken, creating noise that safety groups should reduce by way of to establish actual dangers. Human oversight stays vital — The rise of AI-driven hallucinations, deepfakes, and lifelike artificial media will make it tougher for non-technical customers to discern actuality from AI-generated content material.”
AI will convey a vital pondering renaissance
Trey Ford, Chief Technique and Belief Officer at Bugcrowd, expects a shift in how customers method data altogether: “AI will convey a vital pondering renaissance — Overly keen to assist GenAI desires to assist, to the purpose that hallucinations and deceptive responses are virtually unsurprising. As deepfakes, AI-generated movies and pictures, and trending pretend social media content material proceed to flood the web, the necessity for vital pondering and deductive reasoning has by no means been extra essential. In 2026, customers will thoughtfully query the content material coming from their AI instruments and social media tendencies, counting on classical thought patterns to navigate the content material introduced.”
Crystal Morin, Senior Cybersecurity Strategist at Sysdig, expects identification points to develop into much more harmful: “Identification will stay the first cyberattack vector in 2026, and poorly managed machine identities could possibly be the weak hyperlink that sparks the primary globally disruptive AI-driven breach. Credential theft and account compromise will hit quicker and tougher than ever, concentrating on each human and machine identities. The proliferation of machine identities, usually poorly managed, will solely amplify the danger.”
Throughout all these predictions, the message stays a lot the identical: AI will pace all the things up — assaults, defenses and errors — and organizations must stability automation with human scrutiny in the event that they need to keep forward in 2026.
What do you consider these predictions? Share your ideas on them within the feedback



