This year, cybersecurity leaders are rethinking how artificial intelligence can strengthen digital defenses and reshape security operations. After leaning into AI to handle routine alerts and surface critical threats with greater speed, chief security officers are now exploring smarter, more specialized uses. With both defenders and attackers harnessing AI’s capabilities, this next phase promises to accelerate responses, reduce fatigue, and raise the strategic stakes for organizations worldwide.
As reported in The Wall Street Journal by James Rundle on January 2, 2026.
Security Chiefs Plan New Uses for AI in 2026
Security chiefs say artificial intelligence dramatically improved the nerve centers of their operations in 2025. Now, they’re hunting for the next area it can make an impact.
AI has the potential to address perennial cybersecurity challenges that often lead to breaches, chief information security officers say, including how teams manage software bugs and handle compromised passwords.
One key way AI can help is by accelerating how quickly security teams identify what needs to be fixed in their systems to keep hackers out, said Rich Baich, CISO at telecommunications company AT&T.
For example, the Known Exploited Vulnerabilities catalog is a vital tool for security teams. Maintained by the U.S. Cybersecurity and Infrastructure Security Agency, the list contains software flaws that hackers are actively using to breach systems. Yet matching that information to what is in a company’s network is a laborious task, Baich said.
His team is exploring how AI can automatically retrieve the list, compare it to AT&T’s tech inventory and highlight which threats require immediate attention, he said.
“All of a sudden, where before there may have been lots of individual manual things going on, we can now do that in a matter of minutes versus days,” he said.
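The matching step Baich describes can be sketched in a few lines. This is a hypothetical illustration, not AT&T's actual pipeline: the catalog entries, product names, and inventory below are invented stand-ins, and a real system would fetch CISA's live KEV JSON feed and query an asset-management database instead.

```python
# Minimal stand-in for CISA's Known Exploited Vulnerabilities feed,
# which is published as JSON (entries and products here are invented).
kev_catalog = [
    {"cveID": "CVE-2024-0001", "product": "ExampleVPN"},
    {"cveID": "CVE-2024-0002", "product": "ExampleMail"},
    {"cveID": "CVE-2024-0003", "product": "ExampleDB"},
]

# Stand-in for a company's tech inventory: product -> install count.
inventory = {"ExampleVPN": 120, "ExampleCRM": 40, "ExampleDB": 8}

def urgent_findings(catalog, assets):
    """Return KEV entries that match deployed products, most exposed first."""
    hits = [
        {"cve": e["cveID"], "product": e["product"], "installs": assets[e["product"]]}
        for e in catalog
        if e["product"] in assets
    ]
    return sorted(hits, key=lambda h: h["installs"], reverse=True)

for finding in urgent_findings(kev_catalog, inventory):
    print(f"{finding['cve']}: {finding['product']} ({finding['installs']} installs)")
```

The point of the sketch is the ranking: once the catalog is joined against the inventory automatically, the team starts from a prioritized list rather than a manual cross-check, which is where the days-to-minutes gain comes from.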
Amy Herzog, CISO of Amazon Web Services, is using generative AI in the same way. The technology has cut the time it takes to identify potentially vulnerable systems to 11 minutes, on average, compared with 27 hours before, she said.
Many CISOs grew increasingly comfortable with AI over the past year, introducing the technology into their security operations centers to triage and diagnose the overwhelming number of alerts their systems generate daily. At AWS, generative AI assembles information about key SOC alerts in 11 minutes, on average, down from four hours, Herzog said.
The emergence of agentic AI, where bots are designed and trained to perform specific tasks such as threat intelligence analysis, has allowed companies’ human staff to focus more on crucial cybersecurity work such as penetration testing and forensics, CISOs say.
Hackers are also leveraging AI, moving beyond basic experimentation into more sinister attack methods.
“Spam had gotten a little harder to suss out, you didn’t see as many misspelled emails from the prince from Nigeria and that sort of thing, but it wasn’t terrifying—up until about six months ago,” said William MacMillan, chief product officer at AI security company Andesite AI.
MacMillan, who formerly ran security for tech company Salesforce and the Central Intelligence Agency, said incidents such as state-backed hackers using popular AI platforms, including Anthropic’s Claude, to design new attacks show that attackers aren’t just experimenting with AI but are incorporating it into their tactics.
In the Anthropic incident, hackers used Claude to help automate 80% to 90% of a campaign targeting governments and major corporations. Researchers at Alphabet’s Google have also alleged that Russia used AI to help design malware, deployed against Ukrainian targets, that customized its instructions in real time.
If attackers are increasing the speed at which they can exploit compromised credentials or newly discovered flaws, defenders must also be faster, said Michael Baker, global CISO at technology services company DXC Technology.
“You have to defend against AI with AI. So, we should be having AI check our perimeter all the time, 24 hours a day, with no fatigue,” Baker said.
Patrick O’Keefe, head of global cybersecurity and risk management at retailer Alimentation Couche-Tard, sees identity management as the next big challenge for AI. Stolen credentials have fueled some of the most high-profile hacks in recent years.
Alimentation Couche-Tard, based in Laval, Canada, operates in more than a dozen countries and has close to 17,000 employees. Automating password reset processes can significantly reduce the time required to secure compromised accounts, O’Keefe said.
“It becomes very simple to let the AI actually do all that execution, instead of bringing in the help desk, the service desk, and then my security operations people to handle something,” he said.
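The remediation O’Keefe describes can be sketched as a single automated routine. This is an illustrative assumption, not Couche-Tard's system: the account store and function name below are hypothetical, standing in for calls into a real identity provider.

```python
# Hypothetical account store; a real system would call an identity
# provider's API rather than mutate a dictionary.
accounts = {
    "jdoe": {"locked": False, "must_reset": False, "notified": False},
}

def remediate_compromised_account(user, store):
    """Automated response to a compromised credential: lock the account,
    force a password reset, and record a user notification, with no
    help-desk or service-desk hand-off in the loop."""
    acct = store[user]
    acct["locked"] = True        # block further logins with the old password
    acct["must_reset"] = True    # require a new password on verified sign-in
    acct["notified"] = True      # tell the user what happened and why
    return acct

remediate_compromised_account("jdoe", accounts)
```

Collapsing the hand-offs between help desk, service desk, and security operations into one automated sequence is what shrinks the window during which a stolen credential remains usable.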
AI can also introduce identity problems, though. The use of agentic AI, for instance, means security teams must manage new machine identities on networks, making sure they access only the systems or data they need for a specific task. “Use all the security basics,” Herzog advised.
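One of those basics, applied to machine identities, is a default-deny allow-list per agent. The sketch below is a simplified assumption of how that might look; the agent names, resources, and policy table are invented for illustration, and production systems would enforce this in an identity and access management layer rather than application code.

```python
# Each agent identity maps to the only resources its specific task needs.
agent_policies = {
    "threat-intel-bot": {"intel-feed", "case-notes"},
    "patch-scanner-bot": {"asset-db"},
}

def is_allowed(agent, resource, policies):
    """Deny by default; permit only resources on the agent's allow-list."""
    return resource in policies.get(agent, set())

# The threat-intel agent can read its feed but not the asset database.
print(is_allowed("threat-intel-bot", "intel-feed", agent_policies))
print(is_allowed("threat-intel-bot", "asset-db", agent_policies))
```

The design choice is the empty-set default: an agent with no registered policy, or a request outside its task, is refused rather than quietly granted.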
There are limits to what AI can—and should—be doing. Identifying the need for a patch or an update is fine, but automatically applying it without human oversight could be disastrous, said AT&T’s Baich. Hackers have in the past inserted malicious code into product updates.
“We’re not at the point of being fully automated without some type of manual quality assurance,” he said.
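The human-in-the-loop gate Baich describes can be expressed as a two-step workflow: the AI may only queue a recommendation, and nothing is applied until a person signs off. The queue and function names below are hypothetical illustrations of that separation, not a real patching tool.

```python
# Recommendations wait here until a human reviews them.
pending_patches = []

def recommend_patch(system, patch_id):
    """AI side: queue a patch recommendation; never apply it directly."""
    pending_patches.append({"system": system, "patch": patch_id, "approved": False})

def approve_and_apply(index):
    """Human side: manual quality assurance happens before anything ships."""
    item = pending_patches[index]
    item["approved"] = True
    return f"applied {item['patch']} to {item['system']}"

recommend_patch("vpn-gateway", "example-fix-001")
print(approve_and_apply(0))
```

Keeping the apply step behind an explicit approval call is what guards against the scenario Baich raises, where a malicious or faulty update would otherwise be pushed out automatically.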
