There’s only one kind of tool security teams should be building with AI
There are always exceptions, but generally speaking, most security teams should not be building anything except this one kind of product.
I am not sure what I’ve been doing on social media over the past year (particularly on LinkedIn), but these days my feed is filled with posts from security people building some very cool tools. There’s so much excitement that with LLMs, anyone can now be a product developer, which means that security teams can build their own products.
It just so happens that many of the people arguing that AI will push companies to build security tools in-house instead of buying from vendors are friends of mine. They’re smart, thoughtful people; I simply happen to respectfully disagree with their perspective. In this piece, I will explain why I disagree and why we’re nowhere near the time when companies will just start building their own security tools. At the same time, I do think that security teams should definitely be building one particular kind of tool internally (but these aren’t really a replacement for cyber vendors).
This issue is brought to you by… Varonis.
AI adoption doesn’t have to equal risk expansion.
AI doesn’t operate in isolation. It connects to the same data, permissions, and identities your security strategy already struggles to govern. That’s why AI security isn’t about the model; it’s about the data that fuels it. Discovery alone isn’t enough when copilots and agents can retrieve sensitive data in seconds.
Varonis helps organizations apply real controls to AI risk by understanding what information AI systems access, enforcing guardrails across data and identities, and preventing exposure before damage is done. Don’t settle for point solutions that address isolated risks. Start securing everything you build and run with AI today.
A few simple reasons why most companies will not build their own security tools (and they shouldn’t)
Expertise required to build products
The number one reason why most companies are not going to build their own security tools is a lack of engineering expertise. I know this sounds counterintuitive in a world where so many suggest that software can write itself, but hear me out. I was a product leader for many years before I became a founder, and I’ve seen many, many times that there is a huge difference between writing code and building functional products. Software that can withstand real-world enterprise environments isn’t a random CRUD (create, read, update, delete) app. Security tools in particular require deep domain knowledge, architectural rigor, performance optimization, edge-case handling, and the ability to operate across messy, heterogeneous environments. You don’t get that for free with Claude Code.
AI makes writing code super quick, but it makes skills like systems thinking and architecture more critical than ever. What AI doesn’t do is magically give companies years of accumulated understanding about enterprise networks, identity models, logging pipelines, compliance nuances, or operational workflows. That expertise still has to come from somewhere, and most companies don’t have senior engineers sitting around waiting to reinvent what security vendors have already spent a decade refining.
What I think AI is going to do is deepen the divide between engineering-centric security teams and everyone else. If you’re a Bay Area-style, product-driven company that already has strong security engineers building internal tooling, AI is going to amplify that big time. It will let your team prototype faster, build internal tools faster, automate many more workflows, and so on. If you work at companies like OpenAI, Anthropic, Google, Airbnb, Canva, Figma, Notion, Uber, Reddit, or Discord, where engineering is the DNA of the organization, you should 100% be building much more now with AI than ever before. I think that when it comes to these companies, my friend Frank Wang is right that they are going to build a lot in-house.
For the rest of the world, if building security products in-house wasn’t realistic yesterday, AI won’t magically make it possible tomorrow. Companies that didn’t have the engineering talent aren’t going to suddenly build their own tools because the three SOC analysts they can afford to hire now have Claude. Companies that were under a lot of regulatory pressure yesterday surely aren’t going to see that pressure go down today. I can keep going with these examples, but that’s not the point. Will AI expand the percentage of companies that can build security products? Definitely. If I were to guess, I’d think that instead of 1-2% of companies, that number will grow to 4-7%, but it’s not going to be 70% or even 30%. (I’m pulling those figures out of thin air since nobody has hard data, but I’m willing to bet the real number sits in the single digits.)
All this makes me think that while some security tools can definitely be built internally, security teams are less likely to build those that are technologically complex, that need to operate at a large scale, process a lot of data, require agents, etc.
Expertise required to secure the company
If finding engineering talent is already incredibly hard, now consider how hard it is to find senior security talent. There’s a lot of talk about the so-called “cybersecurity talent shortage”, but both the people who claim that security is oversaturated and the people who say that we need 4 million (or whatever the number is now) security engineers are wrong. I explained what’s actually going on years ago. It’s pretty simple: there are too many people in cyber looking for entry-level jobs, and very few senior professionals.
The people who deeply understand detection engineering, incident response, cloud security and engineering, identity (I mean deeply - like being able to find an attack path by hand), network exploitation, compliance requirements, and adversary behavior are extremely rare. Most security tools, whether they are focused on prevention, detection, response, remediation, or recovery, essentially “encode” the expertise of a small number of highly specialized researchers and engineers, and then distribute that knowledge to hundreds or thousands of customers. In a sense, security vendors industrialize very limited expertise, turning scarce security talent into software.
Obviously, this model is not perfect. The logic security vendors build can be generic, it can miss the nuances of individual environments, and so on, but what it does really well is deliver the kind of expertise to the end customers that most would never be able to afford and/or attract on their own.
Now imagine for a second what would happen if every company started trying to build security tools internally. Instead of several thousand vendors competing for the top 1% of security engineers, we would have tens (or hundreds?) of thousands of enterprises all trying to recruit, retain, and manage their own in-house detection engineers, cloud security architects, threat hunters, and security platform builders. The math simply doesn’t work because there is not enough security expertise. Contrary to popular belief, AI is not going to change that.
All this makes me think that while some security tools can definitely be built internally, security teams are less likely to build those that require specialized security expertise.
The cost of building is going down, but the cost of ownership is not
Another important reason why companies won’t start building their own tools is the cost of ownership.
AI is truly reducing the upfront development cost, but in no way is it changing the equation around the total cost of ownership. I totally get why people are so excited about building with AI, but building software isn’t just about launching new cool features fast. It’s also about maintenance. Who is going to update integrations when APIs change? Who is going to refactor the system when there’s too much tech debt? Who is going to debug it at all times? AI is going to increasingly take care of more and more of these tasks, but humans will still be required to make things work at enterprise scale. Every software vendor has to deal with on-call rotations, documentation, knowledge transfer, feature requests, bug fixes, scaling infrastructure, compliance, audits, and retraining models, and the long tail of maintenance is usually much more expensive than the initial cost of building.
Buying software is about buying a lot of intangibles that come with it, like reliability, security, being able to pass complex audits across jurisdictions, operational resilience at scale, partners they can trust when something breaks at the worst possible moment, etc. All these things have to be considered when companies get too excited about building things internally, and most teams will decide against it as soon as they see how big the gap between the cost of building and the cost of maintenance actually is.
All this makes me think that while some security tools can definitely be built internally, security teams are less likely to take a stab at building tools where the total cost of ownership is very high (this applies to a lot of security products because most products need constant updating to stay effective).
Liability is playing a critical role in build vs. buy decisions
When companies build software internally, they own the consequences of things not going well. If something breaks, or when an incident happens (be it an outage or a security breach), there’s no vendor contract to lean on, and no shared responsibility model. It’s entirely on them to deal with whatever happens.
For many areas of the enterprise, the risks are acceptable. Say, if a marketing team wants a dashboard to track how their campaigns are doing, it can make sense to build a tool that pulls from internal CRM data and displays it in a nice UI. Or, say a human resources team is spending too much time dealing with PTO requests - that, too, can be automated internally.
Security is a different beast because it is directly connected to compliance, regulatory requirements, customer trust, and brand reputation. Few CISOs are going to sleep well if their already under-resourced teams start vibecoding their own security tooling, both because that can increase the probability of a breach, and because it can make compliance a nightmare. Auditors may not accept “we built it ourselves” as proof of control and start asking for documentation, testing, change management, and independent validation. When that happens, an internal tool quickly becomes a liability instead of an advantage.
Another factor people are forgetting in these discussions about vibecoding is insurance. Cyber insurance underwriters don’t just ask whether a company “has security”; they ask what tools the company is using, whether they’re industry-recognized, whether they’re maintained, audited, supported, and so on. Even if they don’t ask these questions upfront, if a large enterprise gets breached, and it turns out that critical security controls were replaced with internally built tools instead of dedicated vendors, that can create serious complications, and in some cases, it could even invalidate coverage. Bottom line, liability concerns are an important factor to consider. For a tech company that just needs SOC 2 to sell to enterprises, it may not matter as much, but publicly traded companies or especially those in regulated industries surely have more to lose.
All this makes me think that while some security tools can definitely be built internally, security teams are less likely to build those that can expose them to liability and have the potential to create issues with regulators, auditors, or insurance companies.
Tools built internally usually lack industry-level intelligence
As I have said, products built by security vendors can offer somewhat of a generic coverage, but sometimes that is a feature, not a bug. Vendors get to see patterns (attack techniques, misconfiguration trends, exploit chains, false positive patterns, real-world breach data, etc.) across many different environments. That kind of visibility creates a network effect, and the more customers they get, the better their detection logic, threat models, and defensive playbooks become.
An internal tool only sees one environment, so even though at some companies it can be super tailored to what they think they need, it is always going to lack breadth compared to something built by a vendor servicing many customers. Internal tools don’t get the benefit of having shared intelligence or even lessons learned from incidents that happen at other companies. For security products this is a huge gap because exposure to many examples of “badness” is a requirement for truly comprehensive coverage. The detection know-how of CrowdStrike or Palo Alto can only really be replicated if a company can see that many environments at the same time, and I don’t think anyone can truly replicate that inside a single enterprise, no matter how big or mature that enterprise is.
All this makes me think that while some security tools can definitely be built internally, security teams are less likely to build those that benefit from industry-wide network effects and that would be of limited value without them.
Two bonus reasons why CISOs should not be rushing to build their own security tools
It will really complicate hiring and new employee onboarding
Security teams are always resource-constrained and can’t afford to have new analysts or engineers spend months just learning internal systems before contributing. When new people join the company, they need to ramp quickly and start delivering value on day one.
When companies use standard tools, onboarding is much faster because new hires often already know the tools they’ll be using or can rely on existing documentation and vendor support. Custom-built tools, on the other hand, tend to have much worse documentation and rely on tribal knowledge, making hiring harder and onboarding longer since no one comes in with existing experience in your internal custom stack.
You have to build your own integrations
No security team can just use a single tool to do everything it needs in one place. Inevitably tools sprawl, and all these SIEMs, EDRs, CSPMs, DSPMs, clouds, IdPs, vuln scanners, ticketing platforms, etc. have to talk to one another. Maintaining integrations is a lot of work, and while we have to admit many cyber vendors don’t always do a great job here, when you build your own tools, integration becomes 100% your problem.
If you build custom tools, you own every connector. AI can help write them, but APIs change, auth breaks, vendors update endpoints, and you end up having to maintain plumbing instead of improving your company’s security. I personally don’t think this is a good use of security teams’ time but I am sure there are plenty of people who will disagree.
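To make the maintenance burden concrete, here is a minimal sketch of the plumbing even the simplest connector needs: cursor-based pagination against a vendor API. The response shape ({"items": ..., "next_cursor": ...}) is an assumption for illustration - every real vendor does it slightly differently, and that variance is exactly what you inherit when you own the connectors:

```python
from typing import Callable, Dict, Iterator, Optional

def paginate(fetch: Callable[[Optional[str]], Dict]) -> Iterator[dict]:
    """Yield items from a cursor-paginated API, page by page.

    `fetch` takes a cursor (None for the first page) and returns a dict
    shaped like {"items": [...], "next_cursor": "..." or None}. This shape
    is hypothetical -- real vendors each differ, and auth refresh, retries,
    and rate limiting are extra plumbing on top of this happy path.
    """
    cursor: Optional[str] = None
    while True:
        page = fetch(cursor)
        yield from page["items"]
        cursor = page.get("next_cursor")
        if cursor is None:
            return
```

Note that this covers only the happy path; once you add backoff, token refresh, and handling for silent schema drift, the connector grows several times over - and you maintain all of it for every tool in your stack.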
Security teams will be building a lot of their own glue and productivity tools (and they should)
To summarize, I don’t think security teams are going to be able to build their own security tools at scale, in particular:
Tools that are technologically complex, need to operate at a massive scale, process a lot of data, require agents, etc.
Tools where the total cost of ownership is very high (this applies to a lot of security products because most products need constant updating to stay effective).
Tools that require specialized security expertise.
Tools that can expose the company to liability and/or can create issues with regulators, auditors, or insurance companies.
Tools that benefit from industry-wide network effects and that would offer limited value without them.
This leaves one area where I do think we’ll see a lot of tools being built internally. I am talking about tools that make security teams more efficient at what they actually spend the most time on every day.
I have previously written that most of the security teams’ work has nothing to do with chasing advanced adversaries. In that piece from several years ago, I explained that “While many people join cybersecurity after being inspired by the idea of hacking, the vast majority of security work is far removed from actively trying to catch adversaries. Working on a cybersecurity team at an enterprise is similar to working on any other team, in that a disproportionately large amount of time is spent on:
Communication, which includes meetings, sending and responding to emails and messages in Slack or Teams, answering questions, preparing reports and status updates, tracking key performance indicators (KPIs), and coordinating with other departments.
Cross-functional collaboration, which includes reading and writing documentation, understanding how different departments do their work, coordinating complex initiatives spanning multiple teams, explaining the importance of various controls to employees and functional leaders, and negotiating the minimum realistic security measures.
Security evangelism, which includes explaining why passwords cannot be saved in the spreadsheet or sent via text (even via encrypted messaging platforms), why service accounts cannot have domain admin rights, why people should use Yubikeys instead of the SMS-based MFA, and the like. Most importantly, all this needs to be done without becoming a bottleneck for people trying to do their work and achieve company revenue goals, and without ruining relationships with everyone at the organization.
Buying and maintaining security tooling, which includes conducting gap analysis, testing new security solutions, periodically assessing the implementation of existing security tools, addressing issues surrounding configuration, and deciding which policies are appropriate to be implemented in what parts of the environment.
Resource planning, which includes negotiating budgets and headcount, justifying investment in specific areas of security, structuring, organizing, and re-organizing teams, and working with human resources and recruiters to develop hiring and employee compensation plans.
Training and onboarding, which includes reading and writing documentation, and guiding new employees to get up to speed with how things are done at the organization.
Many would argue that these things are boring, but such is the nature of office jobs, and work in general - a lot of what we do are mundane tasks that just need to get done and not the most exciting initiatives that use all of our skills and abilities. Every office job has a part that lives in Excel spreadsheets and PowerPoint presentations.” - Source: Most of the security teams’ work has nothing to do with chasing advanced adversaries
This is the stuff that security teams should be automating - all this “undifferentiated heavy lifting,” as Amazon would call it. I firmly believe that unless you’re one of the top 1% of engineering-driven security teams, things like prevention, detection, response, and recovery are better left to security vendors. Where building with AI comes in is the glue: the tools that automate everything in between. It’s about automating workflows, automating work, and increasing productivity.
In the past, security teams were limited to the capabilities of their Security Orchestration, Automation, and Response (SOAR) platforms, but now they can surpass what these platforms were able to offer. The opportunity today isn’t to get rid of the core security vendors and replace them with vibecoded solutions; it is to build custom productivity tools to make security teams more effective.
Increasing personal productivity and eliminating glue work is where security teams should focus their AI-driven automation efforts. Doing this well requires deep knowledge of internal processes, procedures, and all the shortcuts and edge cases that exist in a real environment - context no external vendor can fully replicate.
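As one hypothetical example of this kind of glue (the data shape and field names here are invented for illustration), here is a small script that turns open findings from a ticketing export into a Markdown status update - exactly the sort of reporting chore described in the list above:

```python
from collections import defaultdict
from datetime import date
from typing import Dict, List

# Hypothetical severity ranking, used only to order the report.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def weekly_status(findings: List[dict], today: date) -> str:
    """Render a Markdown status update from a list of open findings.

    Each finding is a dict like {"team": ..., "title": ..., "severity": ...}.
    The shape is an assumption -- in practice it would come from your
    ticketing system's export or API.
    """
    by_team: Dict[str, List[dict]] = defaultdict(list)
    for f in findings:
        by_team[f["team"]].append(f)

    lines = [f"# Security status update - {today.isoformat()}", ""]
    for team in sorted(by_team):
        items = by_team[team]
        high = sum(1 for f in items if f["severity"] in ("critical", "high"))
        lines.append(f"## {team} ({len(items)} open, {high} high or critical)")
        for f in sorted(items, key=lambda f: SEVERITY_ORDER.get(f["severity"], 99)):
            lines.append(f"- [{f['severity']}] {f['title']}")
        lines.append("")
    return "\n".join(lines)
```

The value here is not the code, which is trivial, but the encoded internal context: which teams own which findings, how severity is ranked, and what format the leadership report takes - none of which a vendor ships out of the box.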
At the same time, security teams shouldn’t try to rebuild mature products that require a lot of engineering expertise, security know-how, or that benefit from network effects just to “save money”. Maintaining any meaningful in-house tool almost always costs more than paying for a subscription once you factor in engineering time, upkeep, and ongoing improvements. Let’s be honest - we’ve seen how this plays out before with scripts. What starts as a quick productivity win slowly turns into another brittle system to maintain. The real opportunity for savings and impact lies in automating what’s truly unique to the organization: its systems, workflows, and institutional knowledge.
Update: After publishing this article, my friend Guillaume Ross sent me a note that summarizes things better than I ever could: “Shell scripts are now very easy to build, but building a data lake is not. Sure, Bay Area-style security teams can now build anything, but others are probably better off using Claude Code to generate SOAR playbooks, parse logs, or create detections as code. They should stay away from trying to build anything serious that collects a lot of data and requires infrastructure.”


