Tools alone won't save us, but if we have tools - why don't we at least use them?
Most attacks likely could have been prevented by the tools companies are already paying for
Welcome to Venture in Security! Before we begin, do me a favor and make sure you hit the “Subscribe” button. Subscriptions let me know that you care and keep me motivated to write more. Thanks folks!
Those who read Venture in Security regularly know that I am a big believer that security is growing up as a profession. The industry is maturing and moving from promise-based to evidence-based approaches and the way security products are built is evolving towards more transparency and interoperability. On top of that, we are witnessing how the rise of security engineering is changing the way security is done, and how hands-on security practitioners are gaining more and more power in the buying process.
I previously discussed at length why building security products is hard and why skilled security practitioners are the only way to achieve an advantage over the adversary. While I stand by all of these observations, the fact of the matter is that security tools are critical to enabling people and companies to defend their data. In this piece, I am zooming in on security tools and the question of why so many companies get breached even though they have all the latest and greatest capabilities, and what to do about it.
Simply buying tools isn’t the solution
Every week we hear about new cyber breaches and witness some of the largest and most authoritative companies become victims of cyber attacks. Whenever that happens, one starts to wonder: “Are we not buying enough tools? Is it that the attacks are so advanced and unexpected that we simply can’t predict where they will happen? Are there new zero days being exploited so fast that security teams and vendors can’t keep up with the speed of attackers? Or, is there something else at play that we aren’t talking about?”
The most intuitive answer to these questions is, without a doubt, “it depends”. There are surely cases when an obvious and well-understood solution, if purchased, could have prevented a large breach. There are also highly targeted attacks, especially those by nation-states, that take advantage of zero days and that most likely could not have been easily detected by conventional security tooling. While all this is true, what is also true is that:
Most enterprises do have all the tools they would need to prevent, detect, or reduce the impact of a cybersecurity attack. Similar to how companies that suffered security incidents check all the right compliance boxes (SOC 2, ISO/IEC 27001, FedRAMP, PCI DSS, and more), the vast majority of them do have the latest and greatest tooling.
Most organizations aren’t currently being targeted by nation-states; instead, they are attacked by financially motivated ransomware groups and criminal syndicates.
I have a strong conviction that most cybersecurity vendors do their very best to protect their customers. The ever-growing number of security incidents highlights a simple truth: tools alone are not enough to protect our data. Security tools are only as good as their implementation, and it is the implementation that fails much more often than the products themselves.
Why we fail to properly implement security tools
The ever-growing complexity of IT infrastructure and security tooling
Two decades ago, IT environments were much simpler. Cloud adoption, the proliferation of SaaS applications, the rise of IoT, and remote work have made it nearly impossible to gain full visibility into an organization’s environment, let alone effectively control the attack surface. In turn, security tooling is also getting more complex: every product today has hundreds of configurations, options, and knobs that security practitioners need to turn a certain way to achieve a particular outcome.
Cybersecurity is not unique; all areas of technology trend toward complexity. The second law of thermodynamics states: “As one goes forward in time, the net entropy (degree of disorder) of any isolated or closed system will always increase (or at least stay the same).” As time goes by, the systems, products, configurations, and IT infrastructure will only get more complex, and we have little choice but to learn how to navigate this complexity. In his post on security entropy, Phil Venables summed up the consequences of the problem well: "I’ve spent a lot of time thinking about security entropy over the years and I’m still surprised that it is not more widely discussed. I’m also puzzled that people, admittedly outside of security, are not aware that many breaches are the result of unintended control lapses rather than innovative attacks or true risk blind spots. There are some notable exceptions, of course, especially with respect to exploits of zero-day vulnerabilities and whole classes of attacks where the control flow logic of application or API calls are manipulated."
Not allocating enough time to let the tooling mature
Many people think that deployment is the same as implementation; this could not be further from the truth. Products aren’t magic - they need to be fine-tuned to the customer’s unique infrastructure, trained to distinguish between what is normal in that specific environment and what isn’t, connected to other components of the security stack, and adjusted to take into account exceptions and special requirements. Deploying a security solution in the organization isn’t the end of the work, it is the beginning. The challenge is that too many companies lack the resources to fully implement security solutions after rolling them out.
Security vendors know well what enterprise buying cycles look like, so every one-and-a-half or two years (or right when a new CISO joins the company) they show up at the doorstep pitching amazing opportunities if only the company agrees to rip and replace “that old tool”. Security teams understand that every solution has its gaps, but after a year or two they are often rightfully fed up with the limitations of their current vendor, so they end up buying products that aren’t that much better.
The never-ending buying cycle doesn’t give security teams enough time to mature their existing tooling and get it to the point when organizations can truly start benefiting from their investment. Security practitioners develop what I would describe as “an online dating syndrome”: there are too many choices, it is easy to feel like the next vendor has the potential to be a much better option, and making a long-term commitment based on limited information is hard. At the same time, similar to dating, to reap benefits one must be transparent about what they need and what they have to offer from day one, and they should be willing to invest in a long-term, mutually beneficial relationship.
Underestimating the total cost of ownership
The total cost of ownership is like an iceberg. The upfront, short-term cost of deploying the solution, as well as the annual price, are the expenditures buyers can plan for from the beginning. The true cost of most security tools also includes costs that are hard to forecast and estimate, such as the effort required to implement the solution and maintain it so that it continues to deliver value - a month, a year, and a few years after the initial deployment.
During the deployment, vendors have all the right incentives to make sure that customers are happy with the experience, and that they can get up and running quickly. After the agreements have been signed, and the initial implementation is complete, customers are often left on their own, with little guidance and support from the vendor. Security teams must understand what it takes to keep the newly adopted product up and running effectively in the long term, not just what’s required to get started. When there are 20+ security tools, and each of them needs maintenance, updating, and testing every week, this is bound to result in trade-offs, taking the team’s time from other work it could be focused on, and increasing the total cost of ownership.
Inability to configure the security stack at scale
Historically, security products were built for power users, on the assumption that they would “figure it out”. As infrastructure became more complex, the number of configurations security professionals were expected to deal with grew exponentially. Moreover, as startups expanded beyond their initial areas of focus to cover more and more use cases, their products turned into massive, complicated enterprise platforms. This ever-growing complexity makes security tools hard to operate, and subsequently hard to keep up to date.
The vast majority of cybersecurity solutions today need to be configured manually. Since configurations always change, to understand what settings are in place, one needs to spend hours making sense of the product, sifting through libraries of help documentation, and opening support tickets with the vendor. Without the ability to leverage security as code (infrastructure as code, detections as code, etc.), problems like versioning, keeping tens of security tools in sync, deploying changes across multiple tenants, tracking changes to product configurations, and integrating with CI/CD pipelines cannot effectively be solved at scale.
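To make the “security as code” idea concrete, here is a minimal sketch of what detections as code can look like: a rule expressed as plain, version-controlled code with a test that can run in a CI pipeline before the rule ships. The rule format, field names, and event shape are all invented for illustration - this is not any specific vendor’s schema.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical event: a parsed log entry represented as a plain dict.
Event = dict

@dataclass
class DetectionRule:
    """A detection rule kept in version control alongside its tests."""
    rule_id: str
    description: str
    logic: Callable[[Event], bool]

# Example rule: flag PowerShell launched with an encoded command,
# a pattern commonly abused by commodity malware.
encoded_powershell = DetectionRule(
    rule_id="proc-001",
    description="PowerShell launched with an encoded command",
    logic=lambda e: e.get("process") == "powershell.exe"
    and "-enc" in e.get("command_line", "").lower(),
)

def run_rules(rules, events):
    """Evaluate every rule against every event; return (rule_id, event) alerts."""
    return [
        (rule.rule_id, event)
        for rule in rules
        for event in events
        if rule.logic(event)
    ]

# Because the rule is plain code, it can be unit-tested in CI and
# deployed across tenants the same way application code is.
suspicious = {"process": "powershell.exe", "command_line": "powershell -Enc SQBFAFgA"}
benign = {"process": "powershell.exe", "command_line": "Get-Process"}
alerts = run_rules([encoded_powershell], [suspicious, benign])
assert [a[0] for a in alerts] == ["proc-001"]
```

Treating rules this way gives teams versioning, code review, and repeatable multi-tenant deployment essentially for free, because the existing software-delivery tooling applies.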
Leveraging tools security teams already have
Most enterprise security teams have all the tools they need to successfully strengthen their environment. What’s lacking is the operationalization of these tools - making sure that they are properly configured, testing that they work as intended, and verifying that findings from each of the tools are being acted upon at scale. It’s too common to hear stories about companies paying a lot of money for a solution, say a data loss prevention (DLP) tool, only to realize that 30% of its detections have been disabled, or that the security team ignores its vulnerability reports because it doesn’t have enough resources to patch vulnerabilities and upgrade its infrastructure.
One of the first people to raise the topic of making the most of the tools security teams already have was Phil Venables, who wrote a great blog about security controls back in 2019. In 2022, Jeremiah Grossman posted a question on Twitter that sparked a solid discussion: “In your experience, when security controls should have stopped a breach, but didn’t… is it usually because the control is ineffective or was the control misconfigured / ignored?”. In March 2023, Phil Venables again published a post titled “Fighting Security Entropy”, putting forward the idea that “Adopting a control reliability engineering mindset by continuous control monitoring is essential to counter the inevitable decay of control effectiveness”. Two weeks later, David Spark, the producer of the CISO Series, Geoff Belknap, CISO of LinkedIn, and Kenneth Foster, VP of IT governance, risk, and compliance at FLEETCOR, had a great discussion around Jeremiah's tweet and this problem at large.
The economic downturn and the understanding that we aren’t going to magically find an army of senior security people (nor do we have the budgets to hire them) are pushing the industry to look for ways to make use of the tools it already has. The vendor market has quickly responded to this need: security consultancies such as Optiv and value-add resellers such as World Wide Technology are helping companies evaluate and optimize their existing security tool stacks. Product startups have a role to play as well, and there seem to be multiple approaches to solving the problem: Reach Security and Enterprise Security Profiler help security leaders and practitioners measure, manage, and increase the value they're getting from their existing security investments. Reach Security, for instance, describes the problem it solves in simple terms: “Security teams work diligently on a daily basis to combat sophisticated threats that target the organization, but lack the time and resources to tailor their security products to the environment they’re defending”.
These and other companies may take different approaches to security, but their focus is similar: to help companies make the most of the tools they already have in place. Different approaches have different degrees of success; one example is what some call the “last mile” problem. Many security consultancies, for instance, specialize in assessing an organization’s controls, identifying the gaps, and recommending ways to address them. When the customer receives the report, they need to go and implement the fixes - something that many have no resources, no time, and no expertise to do. Services and products that at least get configurations staged for review by the customer are going to make the most impact; recommendations alone rarely end up moving the needle.
Focus on what matters: making the most of security investments
Get out of the “Pokemon mentality”
There is a lot of evidence that a compliance-first mindset makes companies over-invest in controls that check the boxes and get auditors to issue positive reports but under-invest in effective security measures that can protect them against cyber criminals. In other words, instead of defending against attackers, many organizations choose to defend against auditors.
It is the compliance-first mindset that causes the so-called “Pokemon mentality”, in which security leaders buy more and more tools in the hope that doing so will make them safer. Adding products to the stack without first considering what problems the company needs to solve won’t make it more secure. Instead, it often does the opposite, increasing the attack surface and creating a false sense of safety.
To get out of the “Pokemon mentality”, security teams need to embrace the security-first mindset. This includes:
Keeping in mind that tools, no matter what they promise, aren’t a substitute for fundamentals like asset management, patching, vulnerability management, and backups.
Paying attention to how the controls are implemented, what they are doing, and how effective they are, not just whether or not they are present.
Continuously testing and validating the organization’s defenses, looking for gaps, and addressing the findings.
Regularly evaluating what controls and capabilities are offered by the products the company already has in place. Before signing a contract to purchase a new tool, consider if the existing solutions are already solving, or could potentially be solving the same problem equally well.
Remembering that less is more: the more products the organization buys, the larger its attack surface becomes. Security teams need to consider the trade-off between these two factors and avoid adding unnecessary complexity to their environment.
Configure the tooling for your environment & continuously validate it
Product implementation is equally important as product selection, if not more so. Out of the box, security tools are generic and do not take into account the uniqueness of the customer’s environment. Since every organization is different, a one-size-fits-all approach is going to yield poor results. Spending a year on product selection and a month on implementation is a recipe for suboptimal outcomes, including unnecessary noise that over time makes security teams ignore any alerts from the tool.
Most companies do a decent job with the initial configuration of newly purchased solutions, but all of that upfront investment goes to waste because of poor long-term maintenance. Vendors regularly ship new features, and if they are not enabled or properly configured, the company could be utilizing less than 50% of the value the product can offer. The customer’s environment is also constantly evolving, so having a process to continuously test, validate, harden, and adjust security controls is critical. It is equally important to establish processes for ensuring that the tool’s findings and recommendations will be surfaced, prioritized, and addressed by the security team. Without operationalizing security tooling, every product the company buys can turn into that overwhelming vulnerability scanner nobody wants to look at.
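This kind of continuous validation can be automated. The sketch below is a minimal, hypothetical illustration: a set of named control checks run against a tool’s exported configuration to flag drift from the intended baseline. The configuration keys, check names, and thresholds are all invented for the example; in practice the configuration snapshot would come from each vendor’s API, and the harness would run on a schedule.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ControlCheck:
    """A named assertion about how a control should be configured."""
    name: str
    check: Callable[[dict], bool]

# Hypothetical configuration snapshot exported from security tools;
# in a real setup this would be pulled from vendor APIs on a schedule.
tool_config = {
    "dlp_detections_enabled": 0.70,   # fraction of DLP detections enabled
    "edr_tamper_protection": True,
    "email_url_rewriting": False,     # drifted: was enabled at rollout
}

CHECKS = [
    ControlCheck("DLP detections mostly enabled",
                 lambda c: c["dlp_detections_enabled"] >= 0.9),
    ControlCheck("EDR tamper protection on",
                 lambda c: c["edr_tamper_protection"]),
    ControlCheck("Email URL rewriting on",
                 lambda c: c["email_url_rewriting"]),
]

def validate(config):
    """Run every check and return the names of the failing controls."""
    return [c.name for c in CHECKS if not c.check(config)]

failures = validate(tool_config)
# Two controls have drifted from the intended baseline and need attention.
assert failures == ["DLP detections mostly enabled", "Email URL rewriting on"]
```

The point is less the code itself than the habit it encodes: the intended state of each control is written down once, and deviations surface automatically instead of being discovered during an incident.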
Unfortunately, most security products are built as black boxes, so customers can’t get full visibility into their coverage. There are several issues with black-box detections:
When the security team gets an alert, it needs to spend time and involve other parts of the organization to investigate what exactly the alert was triggered by. Since detections trigger daily, the time spent on trying to get more context adds up, making it clear that black box solutions add a lot of unnecessary and costly overhead.
Because security teams can’t easily confirm if they are covered against a specific threat, and if the way the vendor is detecting a specific behavior is optimal for their environment, they need to spend a lot of time testing, validating controls, and building their own detections.
As we go into the future, I think we will see more transparency and fewer black-box solutions. The new generation of security solutions such as Panther, Sublime, and LimaCharlie where I lead product make it easy for customers to gain full visibility into their detection coverage. Players such as SnapAttack and SOC Prime built their business models around developing effective detection content and delivering it in a way that is transparent and can be applied across different tools at scale. Atomic Red Team, a library of tests mapped to the MITRE ATT&CK® framework, along with Prelude and FourCore, to name some, enable customers to validate their defenses quickly and effectively. Until the principle of transparency gains mainstream adoption, security teams need to do their best with what they already have in place.
Choose the right partners
Many cybersecurity products have been built without interoperability in mind, in a way that prevents them from easily integrating with other parts of the security stack. Vendors that only allow connecting to a select list of pre-approved partners, or those who lock customer data in their ecosystem, make it hard to maximize the return on the customer’s investment. To make the problem worse, many tools make it hard to understand what capabilities are enabled, and how to start leveraging the ones that need to be manually activated.
Choosing the right partners is critical; security products of the future should be scalable, testable, engineering-centric, interoperable, extendable, intuitive, and transparent to make the lives of security practitioners easy.
Security vendors should guide customers to pick the most suitable configurations for their environment, not simply throw a stack of features at them and wish them “good luck”. Most importantly, both security vendors and infrastructure providers need to set defaults to the most secure configurations. For instance, it’s not acceptable that it took Amazon many years to finally add default privacy settings to S3 buckets and stop the epidemic of data leaks caused when AWS customers would accidentally leave their buckets wide open to the internet.
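The secure-by-default principle is easy to express in code: the zero-configuration path should be the safe one, and weakening a control should require an explicit, reviewable decision. The sketch below illustrates the pattern with a hypothetical `BucketPolicy` type - it is an illustration of the design principle, not the actual S3 API.

```python
from dataclasses import dataclass

@dataclass
class BucketPolicy:
    """Storage bucket settings with the safest values as defaults.

    A user must explicitly opt out to weaken any of them.
    """
    block_public_access: bool = True
    encryption_at_rest: bool = True
    versioning: bool = True

# The zero-argument path is the secure one ...
default_bucket = BucketPolicy()
assert default_bucket.block_public_access

# ... and making a bucket public requires a deliberate, visible choice
# that is easy to flag in code review or policy-as-code checks.
public_bucket = BucketPolicy(block_public_access=False)
assert not public_bucket.block_public_access
```

When the dangerous option has to be spelled out at the call site, accidental exposure becomes a greppable, reviewable event rather than a silent default.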
Remember that the quality of the vendor depends on the quality of its people
Security tools are just that - tools, and without mature security practitioners who know what they are doing, no product is going to make a material difference. I have previously explained what it means to buy a “security product” and how it works.
“Fundamentally, when we say “a product X is securing my company”, what we mean is “another company, the technology it builds, and the people it employs are securing my company”. This explanation is better than simply saying “product”, but it still lacks deeper context. To illustrate how it actually works, let’s look at a specific example.
When a customer pays for an endpoint detection and response (EDR) tool, it typically expects the vendor to provide comprehensive security coverage. That coverage doesn’t come out of nowhere, nor is it generated by AI alone. The EDR provider, to develop detection and response logic, hires large teams of security practitioners - threat intelligence professionals and researchers to stay on top of the new threats and prioritize what coverage the company should build, detection engineers who build and test individual rules and detection logic, and the like. This simple explanation should make it clear that security practitioners are at the core of how security is done even if they are shielded from direct contact with the customers behind the “product”.
Many factors distinguish one product vendor in the industry from another, but the quality of security talent the company can attract - vulnerability researchers & exploit developers, threat hunters, malware analysts, media exploitation analysts, OSINT investigators, detection engineers, architects, and others - is definitely one of the top five”. - Source: Why building security products is hard and why skilled security practitioners are the only way to achieve an advantage over the adversary
It is worth keeping this in mind when we get overly excited about the new shiny tooling.
In the past decade, we have been witnessing a continuing increase in the number of security breaches. Even though we are spending more and more on innovative defensive solutions, something is still missing. It would not be reasonable to suggest that there is a single reason that accounts for the root causes of 99% of all security incidents; instead, there are many:
Missing security fundamentals.
Lack of well-defined security processes.
Human errors & people not following the policies and procedures.
Relying on tools for solving problems they were never designed to solve.
Not configuring security tools for the customer’s environment or not maintaining the current configuration.
Failure of properly configured security tools to prevent, detect, respond to, or recover from an incident.
Companies looking to future-proof their security operations will do well by adopting the control reliability engineering mindset described by Phil Venables in his blog. As I discussed before, “Mature security professionals know that security is a process, not a feature. The best way to build a security posture is to build it on top of controls and infrastructure that can be observed, tested, and enhanced. It is not built on promises from vendors that must be taken at face value.” - Source: Future of cyber defense and move from promise-based to evidence-based security
Security tools alone, without the hard work of dedicated, talented, highly motivated security practitioners, are not going to save us. However, since tools are very much needed, and we are more open to buying tools than hiring people, we need to start actually using them. Many known security breaches could have been prevented with the solutions customers had already paid for. It’s time we operationalize what we already have before going on another shopping spree and adding more solutions to already complex environments.