From tools to teammates: (Dis)trust in AI for cybersecurity with Neele Roch

Jan. 2, 2025

As usable security and privacy researcher Neele Roch found, “on the one hand, when you ask the [security] experts directly, they are very rational and they explain that AI is a tool. AI is based on algorithms and it's mathematical. And while that is true, when you ask them about how they're building trust or how they're granting autonomy and how that changes over time, they have this really strong anthropomorphization of AI. They describe the trust building relationship as if it were, for example, a new employee.” 

In this episode, Heidi Trost talks to Neele about:

  • How security experts’ risk–benefit assessments drive the level of AI autonomy they’re comfortable with.

  • How experts initially view AI: the tension between AI-as-tool vs. AI-as-“teammate.”

  • The importance of recalibrating trust after AI errors—and how good system design can help users recover from errors without losing trust in the system.

  • Ensuring AI-driven cybersecurity tools provide just the right amount of transparency and control.

  • Why enabling security practitioners to identify, correct, and learn from AI errors is critical for sustained engagement.

Episode webpage