Discussion about this post

Jacob Combs:

I admit I started reading this expecting to disagree, as I work in medical devices where safety and cybersecurity are often linked. But as I read further, it started to make sense, and simultaneously raised my blood pressure. What you describe here is exactly why I’m so passionate about cybersecurity. The game will never end; it will only change in scope or complexity.

Fernando Lucktemberg:

Ross, this is a masterclass in why analogies matter. The 'seatbelt' comparison is comforting because it implies we can 'solve' security with a one-time engineering fix and some regulation. But as you pointed out, gravity doesn't have an exploit kit.

I’ve spent the last few months researching how this distinction applies to the AI agent space, and your point about standardization becoming a source of vulnerability is particularly resonant. Much of the industry is currently stuck in the 'Safety' mindset—which is essentially the seatbelt approach. They are trying to 'align' models and add toxicity filters, assuming a stable environment where the 'mind' of the AI just needs to be 'trained' to be good.

The real shift happening now—the one that actually mirrors your 'Security' definition—is moving the defense away from 'hope and alignment' and toward hardened infrastructure.

IMO, instead of trying to make the AI 'safe,' we have to move toward the following (see the sketch after this list):

- Architectural Isolation: Containing the blast radius so the environment isn't 'uncontrolled.'

- Identity Delegation: Managing what an agent is allowed to do, regardless of its 'intent.'

- Egress Controls: Treating AI agents as untrusted software workloads rather than misbehaving 'minds.'
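
To make that concrete, here's a rough Python sketch of the 'untrusted workload' posture: the checks live in infrastructure code outside the model, so they hold no matter what the model 'intends.' Everything here (ToolCall, AgentPolicy, enforce, the example hostnames) is hypothetical, just an illustration of the pattern, not any particular product's API:

```python
# Sketch: enforce controls around the agent, not via alignment inside it.
# All names and hosts below are hypothetical, for illustration only.
from dataclasses import dataclass, field
from urllib.parse import urlparse

@dataclass
class ToolCall:
    tool: str    # e.g. "http_get", "write_file"
    target: str  # the URL or path the agent wants to touch

@dataclass
class AgentPolicy:
    allowed_tools: set[str] = field(default_factory=set)  # identity delegation
    allowed_hosts: set[str] = field(default_factory=set)  # egress control

def enforce(policy: AgentPolicy, call: ToolCall) -> None:
    # The agent's "intent" never enters into it; the check is structural.
    if call.tool not in policy.allowed_tools:
        raise PermissionError(f"tool {call.tool!r} not delegated to this agent")
    if call.tool == "http_get":
        host = urlparse(call.target).hostname or ""
        if host not in policy.allowed_hosts:
            raise PermissionError(f"egress to {host!r} blocked")

# Usage: an agent scoped to read from one internal API and nothing else.
policy = AgentPolicy(allowed_tools={"http_get"},
                     allowed_hosts={"internal-api.example.com"})
enforce(policy, ToolCall("http_get", "https://internal-api.example.com/data"))  # passes
try:
    enforce(policy, ToolCall("http_get", "https://attacker.example.net/x"))
except PermissionError as e:
    print(e)  # blocked regardless of how "aligned" the model is
```

The point of the pattern: even a fully compromised or prompt-injected 'mind' can't reach past the blast radius the environment grants it.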

I agree with you: it’s time to put the seatbelt analogy to rest. We need to stop treating these systems like people to be educated and start treating them like the heterogeneous, adaptive risks they are.

Really appreciated the clarity you brought with this article!
