7 Comments

“Fundamentally, they have two paths ahead: to pick a vision of the future they believe in and build for that vision, or to design a very flexible solution the value of which will remain strong regardless of how the future unfolds.” <- This is exactly the issue many security vendors are going through at the moment: how to secure something that’s not standard. They are still unable to fully secure IoT because of that, needing to mainly resort to allow/block practices only. Great article Ross!

Expand full comment

Thanks Ignacio!

Expand full comment

Ross, great article as always. I think "AI compliance" is the most promising near-term market; upcoming governmental mandates + large companies looking for risk transference means that there's an opportunity for someone who can come in and "certify" that the business is controlling for the appropriate AI risks. Probably not a big enough market for a VC to be interested, unless you can sell the software that those compliance auditors are using (thinking the AI version of Vanta + SOC2).

Expand full comment

I also think the adoption of AI security will start with AI compliance. I do think the size of the market is very likely to match VC expectations though.

Expand full comment

Very informative post. Helped me get a better understanding of the landscape. Loved it! 🙌

We are simplifying by betting on the interface. In the early stage most AI implementation and products will rely on prompts. Whether system prompts or user prompts there is scope to ‘scan’ and evaluate for security.

Injections flaws have always been a way in for attackers. Solving for prompt injection security by building a scanner allows us to get a foot in the door.

Our thinking matches what you mentioned in the post. Just as cloud security used familiarity with vulnerability management, assets inventory and access control the first gen products building for AI products will need to do the same.

Expand full comment

Ross, great article. You touched on so many different important aspects, but especially leveraging AI for solving security problems, "The “good enough” part is the key here." and UX as decision component resonated the most with me. I don't know how often I am still telling people that are either dissappointed or excited about an interaction with ChatGPT that the result was to be expected because their topic is either one that is likely to be represented in the training data or not. I think the near-term business models really depend on identifiying a problem that fits to AI capabilities. Only then big corporates will be more prone to invest in longterm developments.

Expand full comment

Indeed!

Expand full comment