AI Agent Compromises McKinsey's Internal AI Platform in 2 Hours

Security firm Codewall demonstrated a critical vulnerability in McKinsey's internal AI platform Lilli by deploying an offensive AI agent that gained full database access in just two hours—without any credentials, insider information, or human intervention.

The Attack

Codewall's AI agent exploited SQL injection, an attack technique dating back decades, to penetrate Lilli's defenses. For all the sophistication of modern AI systems, the platform shipped without protection against one of the oldest database attack vectors in existence.

The agent achieved complete read and write access to the production database, meaning it could view, modify, or delete any data stored on the platform.
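The report does not publish the exploited query, but the class of flaw is straightforward to sketch. In this hypothetical Python example (the table, column names, and helper function are invented for illustration, not taken from Lilli), user input concatenated into a SQL string lets a crafted value rewrite the query and dump every row:

```python
import sqlite3

# Toy database standing in for an application's document store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER, owner TEXT, body TEXT)")
conn.executemany("INSERT INTO documents VALUES (?, ?, ?)",
                 [(1, "alice", "public memo"), (2, "bob", "client strategy")])

def fetch_docs_vulnerable(owner: str):
    # BUG: user input is spliced directly into the SQL text.
    query = f"SELECT id, owner, body FROM documents WHERE owner = '{owner}'"
    return conn.execute(query).fetchall()

# Normal use returns only the requested owner's row:
print(fetch_docs_vulnerable("alice"))

# A crafted input turns the WHERE clause into a tautology
# (owner = 'x' OR '1'='1') and returns every row in the table:
print(fetch_docs_vulnerable("x' OR '1'='1"))
```

The same mechanism extends to writes: once attacker-controlled text reaches the SQL string, UPDATE and DELETE statements become as reachable as SELECT.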

Scale of Exposure

Lilli serves as McKinsey's central AI tool for over 43,000 employees globally. The platform handles:

  • Strategic business work
  • Client research and analysis
  • Sensitive document processing

Any compromise of the production database would expose confidential client information, internal strategy documents, and employee data across McKinsey's entire workforce.

Security Implications

The incident highlights a critical gap in AI infrastructure security: enterprise AI platforms designed for handling sensitive information are being built without fundamental database security protections. SQL injection remains effective because developers often:

  1. Trust input validation mechanisms that AI systems can easily bypass
  2. Fail to use parameterized queries consistently
  3. Assume AI agents won't systematically probe for vulnerabilities
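Point 2 is the core defense, and it is decades old. A minimal sketch of the parameterized-query fix (same invented schema as above, not McKinsey's code): the placeholder sends the value to the driver separately from the SQL text, so the database treats it as data, never as query syntax.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER, owner TEXT, body TEXT)")
conn.executemany("INSERT INTO documents VALUES (?, ?, ?)",
                 [(1, "alice", "public memo"), (2, "bob", "client strategy")])

def fetch_docs_safe(owner: str):
    # The ? placeholder binds the value as data; it can never
    # alter the structure of the query itself.
    return conn.execute(
        "SELECT id, owner, body FROM documents WHERE owner = ?", (owner,)
    ).fetchall()

print(fetch_docs_safe("alice"))         # the requested row only
print(fetch_docs_safe("x' OR '1'='1"))  # no rows: the payload is just a string
```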

That an automated agent, rather than a human penetration tester, discovered this vulnerability within hours suggests similar weaknesses may exist in other enterprise AI platforms that haven't been formally tested.

What This Means

The incident demonstrates that AI-native security challenges are emerging alongside AI capability growth. Organizations deploying internal AI platforms must treat them as attackable systems rather than trust-by-default tools. The McKinsey incident isn't a story about AI being "too dangerous"; it's a reminder that 25-year-old security practices still apply to new infrastructure. SQL injection defenses are well established. They simply weren't used here. That's the real story.

As enterprises increasingly deploy AI agents with tool access and autonomous capabilities, security must evolve from assuming trusted environments to zero-trust architectures. In practice that means every query is validated, every database connection uses parameterized statements, and AI-initiated actions face the same scrutiny as human access requests.
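One concrete shape this scrutiny can take is least privilege at the connection level: the AI agent's database handle simply cannot write, so even a successful injection is limited to reads. A hedged sketch using Python's sqlite3 authorizer hook as a stand-in for a real platform's database layer (the schema is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER, body TEXT)")
conn.execute("INSERT INTO documents VALUES (1, 'memo')")
conn.commit()

# Permit only read-related actions; deny everything else.
READ_ACTIONS = {sqlite3.SQLITE_SELECT, sqlite3.SQLITE_READ}

def read_only_authorizer(action, arg1, arg2, db_name, trigger):
    # Called by SQLite while compiling each statement; returning
    # SQLITE_DENY makes the statement fail before it runs.
    return sqlite3.SQLITE_OK if action in READ_ACTIONS else sqlite3.SQLITE_DENY

conn.set_authorizer(read_only_authorizer)

print(conn.execute("SELECT body FROM documents").fetchall())  # reads still work

try:
    conn.execute("DELETE FROM documents")  # any write is refused
except sqlite3.DatabaseError as exc:
    print("blocked:", exc)
```

Production databases offer the same idea natively through read-only roles and per-service accounts; the point is that the agent's credentials, not its good behavior, bound the damage.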