Cyber Defense

AI in Cybersecurity: Fighting Back in Milliseconds

This Executive Impact Series article is a collaboration with Anush Naghshineh and David Turner.

Introduction: How Autonomous Security Agents Are Changing Threat Response from Reactive to Proactive

Generative AI and agents are being touted as the most significant leap in productivity since the advent of the computer. Here's the twist: they make the bad guys more productive too, and by a lot. If corporate security teams don't find a way to keep up, they'll become ineffective at countering hackers and spies.

To get a sense of the scale of attacks, look at what Microsoft reports: in 2024 it processed 84 trillion security signals per day and detected over 30 billion phishing emails. Its threat intelligence systems observe 7,000 password attacks per second. Without highly capable automation, companies can't keep up with that kind of volume, and it's growing fast.

Security teams are seeing the massive growth in the capabilities of bad actors and have begun rethinking their approach. These teams are moving from manual intervention to technology that leverages autonomous security agents. Given their autonomous nature, these agents are able to do more than run static rules. They can independently observe network activity, decide how to respond to anomalies, and take rapid action without waiting for human approval. They use machine learning to understand what normal activity looks like in their environment, allowing them to identify deviations that are probable threats and counter them in seconds, not days.
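The baseline-and-deviation idea behind these agents can be pictured with a minimal sketch. This is an illustration only, using a simple z-score over event volumes; production platforms build far richer behavioral models, and the function name and threshold here are our own assumptions:

```python
from statistics import mean, stdev

def flag_anomalies(event_counts, window=30, threshold=3.0):
    """Flag intervals whose event volume deviates sharply from the
    learned baseline (illustrative z-score model, not a product API)."""
    anomalies = []
    for i in range(window, len(event_counts)):
        baseline = event_counts[i - window:i]   # recent "normal" activity
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue
        z = (event_counts[i] - mu) / sigma      # deviation from baseline
        if z > threshold:
            anomalies.append((i, round(z, 1)))  # probable threat signal
    return anomalies

# Steady traffic around 100 events per interval, then a sudden spike.
traffic = [100 + (i % 5) for i in range(40)] + [450]
print(flag_anomalies(traffic))  # only the spike at index 40 is flagged
```

The point of the sketch is the workflow, not the math: the agent learns "normal" from its own environment rather than from signatures, so it can flag activity no rule has ever described.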

McKinsey’s 2025 research on AI adoption and agentic AI deployment found that only 1% of organizations report being mature in their AI adoption across the enterprise. The specific use of agentic AI in cybersecurity is also only at 1% according to the 2025 Cyber Security Tribe annual report. Given the scope of the problem security teams see and the gap in mature deployments, companies are seeking to fill this void. Investment in agentic AI cybersecurity is projected to grow from $738 million in 2024 to more than $2 billion by 2027 and an eye-popping $173 billion by 2034.

The use of autonomous agents to actively detect and combat threats is a fundamental change in cybersecurity strategy. The emphasis is shifting from passive alerting of attacks after they’ve already started to active systems that identify and neutralize threats proactively, often before damage is incurred. With this new direction, security teams must figure out how to implement this technology in a controlled, cost-effective way that does not add undue complexity to their environments.

Speed Advantage: Why Human Response Times are No Longer Adequate

The security landscape has entered a phase where human response times are fundamentally mismatched to the speed of modern cyberattacks. Organizations today take an average of 73 days to contain a breach, while adversaries, now powered by AI, operate in milliseconds. Even worse, most data breaches remain undetected for 277 days, giving attackers unrestricted dwell time to move laterally, exfiltrate data, and compromise systems long before security teams become aware.

This gap is widening. SOC analysts still spend 21+ minutes per ticket, often triaging thousands of alerts a day across fragmented security stacks, an average of 45 different tools, many of which operate in silos. The result is cognitive overload, inconsistent triage, and escalating alert fatigue. Meanwhile, attackers have shifted to machine-speed operations. With 85% of modern attacks now powered by generative AI, the speed and sophistication of adversaries are increasing faster than security teams can adapt.

AI-enabled defense is the only viable countermeasure. Organizations that deploy AI-driven detection and response capabilities reduce their mean time to identify and contain incidents by one-third, while achieving 98% detection accuracy and slashing response times by up to 70% in high-risk environments. Mature implementations consistently achieve true positive rates of 99% while keeping false positives below 1%.

The compounding benefit of automation is even more striking. University environments deploying AI-driven detection reported 110% improvement in detection coverage within six months, while a digital insurance company documented that AI autonomously resolved 74,826 out of 75,000 alerts, escalating only 174, demonstrating precision at scale no human team can match.

Yet the human limitation remains the defining bottleneck:

  • 75% of security professionals report a surge in attacks
  • 69% admit they cannot manage threats without AI
  • Alert volume, tool sprawl, and analyst burnout are escalating simultaneously

The reality is clear: humans alone cannot defend against machine-speed adversaries. AI does not replace the SOC, but it removes the burden of low-value triage, accelerates decision-making, and restores the ability of analysts to focus on strategy, threat hunting, and nuanced investigation. In cybersecurity’s new era, speed is no longer an advantage; it is survival.

Case Study Analysis: Darktrace’s Autonomous Threat Operations and Measurable Security Improvements

Darktrace represents a mature real-world example of autonomous threat detection and response at enterprise scale. Its self-learning AI continuously models behavioral baselines across users, devices, applications, and networks, identifying anomalies even when signatures, rules, or known indicators of compromise are absent.

At the core is Autonomous Response, which can operate in three modes: passive (recommendations only), human confirmation, or fully autonomous containment. The market trend is unmistakable: 85% of Darktrace customers now deploy detection and autonomous response together, signaling growing confidence in AI-led defense.
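The three operating modes amount to a policy switch on how much authority the AI is granted. A hypothetical sketch follows; the mode names and actions are illustrative, not Darktrace's actual API:

```python
from enum import Enum

class ResponseMode(Enum):
    PASSIVE = "passive"              # recommendations only
    HUMAN_CONFIRM = "human_confirm"  # act only after analyst approval
    AUTONOMOUS = "autonomous"        # contain immediately

def handle_threat(threat, mode, approved=False):
    """Decide what happens to a detected threat under the given mode
    (illustrative policy logic, not a vendor implementation)."""
    if mode is ResponseMode.PASSIVE:
        return f"RECOMMEND: contain {threat}"
    if mode is ResponseMode.HUMAN_CONFIRM and not approved:
        return f"PENDING APPROVAL: contain {threat}"
    return f"CONTAINED: {threat}"

print(handle_threat("device-42", ResponseMode.PASSIVE))
print(handle_threat("device-42", ResponseMode.HUMAN_CONFIRM))
print(handle_threat("device-42", ResponseMode.AUTONOMOUS))
```

Organizations typically start in the first two modes and graduate to the third as the system earns trust, which is exactly the adoption curve the 85% figure reflects.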

Several real-world deployments illustrate the power of this model:

CordenPharma (Pharmaceutical)

During a proof-of-value period, the AI uncovered a crypto-mining attack on a compromised device beaconing to an endpoint in Hong Kong. The device downloaded malicious executables, attempted lateral movement, and tried to exfiltrate over 1 GB of data, all behaviors that Autonomous Response would have blocked instantly in active mode.

Aviso (Wealth Services)

The platform autonomously analyzed 23 million events, generated only 73 actionable alerts, and blocked 18,000+ malicious emails missed by legacy security filters, demonstrating targeted accuracy and reduced noise.

Academic Institutions

In one deployment, automated response filtered 74,826 of 75,000 alerts, escalating only 174 for manual review and surfacing 38 true positives requiring human action. Detection coverage increased by 110%, underscoring how AI expands visibility far beyond human monitoring capacity.

LSU Alexandria

Ransomware attempts were neutralized around the clock without requiring programming updates, threat signatures, or rule tuning. The AI’s continuous learning provided 24/7 protection while allowing operations to remain uninterrupted.

Across these examples, Darktrace delivers surgical precision, isolating only the threatening activity while preserving normal business operations. It can block ports, terminate malicious connections, or quarantine devices based on context, severity, and observed intent. Integration is seamless: API-level visibility adjusts dynamically as the environment changes, ensuring that the system remains effective even in complex hybrid infrastructures.
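The choice among blocking a port, terminating a connection, or quarantining a device can be pictured as a policy function over observed context. The thresholds and action names below are invented for illustration; the actual decision logic in products like Darktrace is proprietary and far more nuanced:

```python
def choose_response(severity: float, lateral_movement: bool) -> str:
    """Pick the narrowest action that neutralizes the observed threat
    (illustrative thresholds, not a vendor's real policy)."""
    if severity < 0.3:
        return "log_only"              # benign-looking: don't disrupt business
    if lateral_movement:
        return "quarantine_device"     # contain spread across the network
    if severity < 0.7:
        return "terminate_connection"  # surgical: kill a single session
    return "block_port"                # cut off the attack vector entirely

print(choose_response(0.2, False))  # low risk: observe only
print(choose_response(0.5, False))  # medium risk, contained: surgical cut
print(choose_response(0.9, True))   # spreading threat: isolate the device
```

The design point is proportionality: the least disruptive action that stops the threat, so normal business operations continue around the containment.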

These case studies demonstrate a fundamental truth: autonomous threat operations are no longer theoretical. They are real, measurable, and outperform human-centric models at scale. The shift is underway, and organizations adopting autonomous AI are widening the security gap in their favor.

Building Trust in AI: Balancing Autonomy with Human Oversight

The emergence of autonomous security agents represents one of cybersecurity’s greatest paradoxes: the same AI systems designed to reduce human error and accelerate defense have themselves become new sources of risk. In fact, 80% of organizations have already experienced risky AI behaviors, including unintended data exposure, unauthorized system access, or flawed decision-making. This trust paradox highlights the urgent need for a governance model that strikes a balance between speed and autonomy, on the one hand, and transparency and control, on the other.

The cornerstone of that model is structured human oversight. The NIST AI Risk Management Framework (RMF) provides a disciplined approach governing AI systems through mapping, measuring, and managing risk across their lifecycle. This ensures that agentic systems operate within defined ethical and operational boundaries rather than as opaque black boxes. Similarly, the EU AI Act introduces a risk-tiered oversight model, classifying AI systems by impact, and establishing stricter requirements for high-risk applications such as cybersecurity and critical infrastructure protection.

McKinsey’s AI governance essentials reinforce this structure by calling for clear accountability, traceability mechanisms, and ongoing performance reviews to ensure AI agents remain aligned with intended purposes. Complementing this, ISACA defines five guiding principles: fairness, accountability, transparency, controllability, and robustness. Together, these ensure that humans retain ultimate authority, and that AI decisions can always be understood, explained, and if necessary, overridden. The ISO/IEC 42001 standard brings these concepts into an auditable framework, specifying requirements for ethical, secure, and transparent AI management systems that align with enterprise compliance objectives.

Organizations typically progress through a governance maturity model, moving from ad hoc (reactive and fragmented) to foundational (basic policies are in place), then to integrated (governance is embedded in workflows), and ultimately to autonomous (self-optimizing systems operate within human-defined parameters). As this maturity advances, oversight evolves from manual control to guided autonomy, where AI operates independently but remains accountable to the governance logic established by humans.

This progression reflects what Gartner has identified as a transformative period: 40% of security operations leaders cite AI as the single most significant factor shaping SOC performance over the next 12 to 24 months. The future security model will hinge on a human-AI partnership. AI systems perform high-volume, repetitive functions such as alert triage, enrichment, containment, and reporting, while human analysts focus on complex reasoning, threat hunting, and strategic planning.

A core enabler of that partnership is explainability. Every AI-driven action, whether blocking a port or isolating a system, must be recorded along with its reasoning chain: prompts, data context, and internal state changes. This creates an audit trail that not only satisfies regulatory and ethical requirements but also strengthens human confidence in automated decisions.
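What such an audit record might contain can be sketched as a small data structure. The field names and schema here are assumptions for illustration, not a regulatory standard or a vendor format:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ActionRecord:
    """One explainable entry in an AI audit trail (illustrative schema)."""
    action: str            # e.g. "block_port", "isolate_host"
    target: str
    reasoning_chain: list  # ordered observations that led to the action
    data_context: dict     # inputs and model state the decision relied on
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = ActionRecord(
    action="isolate_host",
    target="srv-db-07",
    reasoning_chain=[
        "outbound beaconing to unknown endpoint",
        "1.2 GB transfer outside baseline hours",
        "severity scored above autonomy threshold",
    ],
    data_context={"model_version": "v3.1", "policy": "autonomous"},
)
print(json.dumps(asdict(record), indent=2))  # auditable, replayable evidence
```

Because every action serializes to a reviewable record, an analyst or auditor can reconstruct not just what the AI did, but why it believed the action was warranted.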

However, continuous monitoring remains nonnegotiable. Despite widespread deployment, 57% of organizations have experienced security incidents linked to AI usage, yet 60% admit they have not implemented formal AI controls. Effective monitoring closes that gap by continuously assessing agent behavior against defined policies and updating controls as models evolve.

Ultimately, trust is best established through a phased deployment approach. Many organizations begin in a human confirmation mode, where AI suggestions require analyst approval. As accuracy improves and reliability is demonstrated, teams transition to fully autonomous operations within weeks or months, turning skepticism into confidence through proof of performance.

In the end, building trust in AI security is not about ceding control; it is about redefining it. Governance, transparency, and human oversight do not slow progress; they make scalable, machine-speed defense sustainable. The organizations that master this balance will not only defend faster but also do so with accountability that endures.

Implementation Strategy for Agentic AI Enterprise Security

Implementing agentic AI in security operations requires a disciplined balance between ambition and precision. Success begins with a strategic foundation, which involves conducting a comprehensive risk assessment to identify the existing security ecosystem’s strengths, weaknesses, and interdependencies. This clarity helps define specific objectives, whether the enterprise seeks to enhance efficiency, accelerate incident response, drive innovation, or optimize costs, before any technical deployment begins. Establishing intent upfront prevents tool sprawl and ensures that AI capabilities are aligned with measurable business outcomes.

From a tactical perspective, Gartner advises a focused approach, prioritizing narrow, high-value use cases that demonstrate direct and measurable impact, rather than pursuing broad, unstructured implementations. Common early use cases include anomaly detection, phishing triage, and automated log correlation. Each deployment requires core components, including access to high-quality data (both structured and unstructured), a capable machine learning environment (such as Azure AI or AWS SageMaker), robust cybersecurity hardening, and clearly defined use cases. Data integrity is critical, as organizations that apply proper validation and labeling techniques experience up to 90% fewer false positives, improving detection accuracy and analyst trust.

A phased implementation roadmap is crucial for managing risk and accelerating adoption. The journey typically begins with proof-of-value pilots operating in a passive or human-confirmation mode. Once validated, autonomy increases in measured increments, expanding AI authority only after the system demonstrates reliability and consistency. Integration priorities should include seamless interoperability with existing security architectures, such as SIEM, EDR, SOAR, and cloud-native controls. Gartner’s cybersecurity technology optimization framework reinforces this principle: consolidate redundant tools, ensure data portability, and use threat modeling to guide architectural decisions.

Cloud native infrastructure will underpin most future deployments. By 2025, cloud-based cybersecurity solutions are expected to comprise roughly 70% of the market, reflecting the demand for scalable, flexible, and resilient AI models that evolve in tandem with shifting threat landscapes. Aligning these capabilities with established regulatory frameworks such as NIST CSF, ISO 27001, SOC 2, GDPR, and new AI governance standards ensures compliance and builds organizational trust. Governance structures should include cross-functional oversight, defined accountability, traceability mechanisms, and contingency plans to address model drift or unintended behaviors.

Enterprises must also recognize that over 90% of AI-driven cybersecurity capabilities will originate from third-party vendors, necessitating strong vendor risk management, transparent SLAs, and continuous performance evaluation. Budget realities reinforce this priority: global information security spending is expected to reach $212 billion in 2025, a 15% rise over 2024, with GenAI initiatives adding another 15% in software investments. Measuring success requires operational metrics, mean time to acknowledge (MTTA), mean time to contain (MTTC), mean time between failures (MTBF), detection coverage, and false positive reduction to validate that AI is enhancing resilience, not just activity.
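Metrics such as MTTA and MTTC are straightforward to compute from incident timestamps. A minimal sketch follows, using invented sample data; real SOC tooling pulls these from the ticketing or SIEM platform:

```python
from datetime import datetime

def mean_minutes(pairs):
    """Average gap in minutes between paired (start, end) timestamps."""
    gaps = [(b - a).total_seconds() / 60 for a, b in pairs]
    return sum(gaps) / len(gaps)

fmt = "%Y-%m-%d %H:%M"
incidents = [
    # (detected, acknowledged, contained) -- illustrative sample data
    ("2025-03-01 09:00", "2025-03-01 09:04", "2025-03-01 09:30"),
    ("2025-03-02 14:10", "2025-03-02 14:12", "2025-03-02 14:40"),
]
parsed = [tuple(datetime.strptime(t, fmt) for t in row) for row in incidents]

mtta = mean_minutes([(d, a) for d, a, _ in parsed])  # mean time to acknowledge
mttc = mean_minutes([(d, c) for d, _, c in parsed])  # mean time to contain
print(f"MTTA: {mtta:.0f} min, MTTC: {mttc:.0f} min")
```

Tracking these numbers before and after an agentic AI rollout is what turns "the AI is helping" from a feeling into a validated claim.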

Finally, the human element remains central. The global cybersecurity workforce shortage, estimated to be 3.5 million professionals, demands that organizations invest in AI literacy, reskilling, and cultural programs that reinforce human judgment and accountability. While agentic AI can scale detection and automate response, sustained success depends on cultivating teams that understand, trust, and guide these systems responsibly. The goal is not to replace human expertise but to amplify it, building a collaborative ecosystem where people and AI jointly defend the enterprise with greater speed, intelligence, and confidence.

Conclusion: Autonomous Cybersecurity Enhanced by Human Expertise

We are not advocating that autonomous AI should replace human security analysts. Frankly, it is one area where humans in the loop remain vital for a long time. What we are saying is that you should be evaluating it as a force multiplier. Autonomous systems should be deployed to handle repetitive and high-volume work that security teams can no longer sustain. Let the agents filter through activities and warnings, investigate routine incidents, and automatically contain obvious threats. Security analysts should shift their focus to detecting the shifts in approaches and tooling that are driving new, sophisticated threats. From that, security teams can fine-tune existing approaches, develop strategies to address emerging threats, and oversee the agents to make the judgment calls needed in complex scenarios.

The focus of cybersecurity professionals is already shifting toward working across multiple domains, understanding how to deploy agents, and making strategic decisions about what automation will handle and when human oversight is required. This will also require organizations to retrain their security teams to work with the new tools and act as supervisors rather than hands-on technicians. They will need to understand the types of problems agents will not handle and address those instances.

Organizations that don’t move in this direction will fall behind and suffer harm from cyberattacks. Attackers are among the most sophisticated users of AI and automation, and they are continuously making their operations faster, more precise, and much harder to detect. The bad guys use these tools to pick their targets, conduct reconnaissance, and execute tailored attacks, and they are doing so at an ever-increasing scale.

We are not encouraging companies to rush in and start deploying without first doing the hard work of detailed discovery, strategizing, and planning. Successful implementations start with clearly defined use cases and high-quality data. If data quality isn’t where it needs to be, companies have to fix it first; no deployment succeeds on bad data. Successful implementations also maintain human oversight, with clearly defined roles for humans and agents, along with an active governance process that supports rapid strategic decision-making. Finally, successful deployments scale up gradually, proving results at each stage before expanding further.

As autonomous AI is adopted, we expect to see multi-agent systems acting as an automated team of specialists to address a spectrum of complex problems. Some agents will specialize in threat detection, others will respond, and others will learn to distinguish normal network activity from anomalies. The agents will share data and coordinate their actions to ensure maximum effectiveness.

The organizations that get this right will be able to secure their operations effectively. They will leverage security thinking across systems and every AI initiative moving forward. In those organizations, security teams will be more involved in business planning and with AI deployments. This role shift for the security organization will require clear accountability across all levels of the organization, including the boardroom. Securing organizations will require that security operations be funded as a continuous investment, not as stopgap projects to staunch bleeding. The bad guys are continuously evolving, so your security approach has to keep evolving too.