Explained: Darktrace CEO says she was deepfaked — and couldn’t tell the difference


Popelka’s research team replicated her voice using open-source tools, demonstrating how little data is needed to train AI models. Image: X/@TimesBusiness

When the boss of a cybersecurity firm can’t tell real from fake, it’s time to worry about AI’s next frontier.

The big picture

In a startling admission that highlights how advanced artificial intelligence scams have become, Darktrace CEO Jill Popelka revealed she was targeted by a deepfake of her own voice during a company board meeting. Her team received a voicemail requesting confidential details while she was physically present elsewhere. The audio, Popelka said, was so realistic that even she couldn’t distinguish it from her real voice.

The revelation comes as governments and corporations scramble to counter a surge in AI-driven impersonation fraud, where cloned voices, faces, or emails are used to breach security systems, steal data, or manipulate decisions.

Driving the news

Speaking at The Times UK Tech Summit in London, Popelka described how the fake message appeared authentic and well-timed. The scam surfaced shortly after Darktrace’s £4.4-billion takeover by US private-equity firm Thoma Bravo, during the company’s first post-acquisition board meeting.

“They can just type in a message and my voice can instantly be replicated. I couldn’t tell the difference,” she said. “These deepfakes exploit human vulnerability — and they’re very hard to protect from.”

Popelka’s research team later replicated her voice themselves using publicly available tools, confirming how little data is needed to train an AI model convincingly.

Etienne De Burgh, senior security and compliance specialist at Google Cloud, who shared the stage, added: “We already knew these techniques existed, but they are becoming far more believable.”

Why it matters

Darktrace isn’t just another tech firm: it’s one of the world’s foremost AI-driven cybersecurity companies, originally founded in Cambridge with backing from the late entrepreneur Mike Lynch. If a company built to detect digital anomalies can be targeted by a deepfake, the threat has clearly moved beyond the theoretical.

The incident illustrates a profound shift: voice and video can no longer be trusted as proof of identity. From corporate executives to politicians and journalists, anyone with a digital footprint can now be cloned with frightening accuracy. Such scams have already caused multi-million-dollar losses globally:

  • In 2019, fraudsters used AI-generated audio to impersonate a CEO and tricked a UK energy firm into transferring €220,000.
  • In 2023, an Asian multinational lost $35 million after a finance executive was duped by a video call featuring a deepfaked CFO.
  • Now, even cybersecurity leaders are in the crosshairs.

The technology behind the scam

Modern voice-cloning tools can reproduce a person’s tone, accent, and cadence from as little as a 30-second voice sample, often available through public speeches, interviews, or even social media clips. Once trained, the model can “speak” any typed text in that person’s voice in real time.

Detection tools exist, but the arms race between creation and detection is uneven: AI audio generators evolve faster than the filters designed to flag synthetic content, leaving institutions perpetually one step behind.
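
To illustrate how low the barrier has become, a few lines of Python and an open-source text-to-speech model are enough to generate speech in a target’s voice from a single short reference clip. The article does not name the tools Popelka’s team used, so the sketch below, which assumes the open-source Coqui TTS library with its XTTS v2 voice-cloning model and placeholder file names, is illustrative only.

    # Minimal voice-cloning sketch (assumed tooling: Coqui TTS with the XTTS v2 model).
    # File names are hypothetical placeholders, not anything referenced in the article.
    from TTS.api import TTS

    # Load a pretrained zero-shot voice-cloning model.
    tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

    # A short public clip of the target speaking is enough to condition the output voice.
    tts.tts_to_file(
        text="Please send the confidential figures to me before the board call.",
        speaker_wav="public_interview_clip.wav",  # hypothetical reference sample
        language="en",
        file_path="cloned_message.wav",
    )

Nothing in a workflow like this requires specialist hardware or private data; as Popelka’s own team demonstrated, the training material is usually already online.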

The context

Darktrace, which listed on the London Stock Exchange in 2021 at a valuation of around £1.7 billion, was acquired by Thoma Bravo earlier this year following a volatile period on public markets.

Popelka took over in 2024 after former CEO Poppy Gustafsson stepped down. The company’s systems are widely used to identify “patterns of life”, the normal digital behaviour within an organisation, and to flag anomalies that could signal a cyberattack. Ironically, the company now faces a new threat: AI-driven deception aimed directly at human trust.

The bigger picture

  • Deepfakes are democratised: What once required sophisticated labs can now be done by anyone with a laptop and access to open-source AI models.
  • Corporate defences are unprepared: Traditional cybersecurity focuses on code and networks, not human perception.
  • Legal frameworks lag: The UK, EU, and US are only beginning to draft legislation for labelling or watermarking synthetic media.
  • AI companies are conflicted: The same technologies that power creative tools and customer service bots are also enabling sophisticated fraud.

What’s next

Experts warn that deepfake risks will escalate as AI becomes embedded in daily workflows. Corporates are urged to:

  • Introduce multi-factor verification for all voice or video-based requests.
  • Train staff to verify sensitive communications via secondary channels.
  • Implement synthetic-media detection systems across email and messaging platforms.

For now, Popelka’s case serves as a cautionary tale: even the protectors of cyberspace can fall prey to its illusions. When CEOs can’t tell real from fake, trust itself becomes the new cybersecurity frontier.
