The role of trust in cybersecurity, and how what is happening in the industry leads to the tragedy of the commons
Looking at the problem of distrust, factors that cause it, and the tragedy of the commons in cybersecurity
Welcome to Venture in Security! Before we begin, do me a favor and make sure you hit the “Subscribe” button. Subscriptions let me know that you care and keep me motivated to write more. Thanks folks!
Everything in the industry relies on trust
I have recently looked into the reasons why there are so many cybersecurity vendors. One of the conclusions I arrived at was eye-opening: it is one of the consequences of the fact that everything in cybersecurity relies on trust. Because the impact of security products is high, and it takes time to build trust,
we see long sales cycles, in-depth trials, the need to test every solution in a home lab before bringing it to work, and small-scale initial deployments
startups in the industry can’t blitzscale - meaning they cannot grow fast and quickly capture the market
by the time a new approach finally catches on, we see many vendors trying to solve the same problem
Product-led growth (PLG) in cybersecurity doesn't have the same shape as in other industries (you don't deploy a new EDR the same way as you get started with a project management tool like Asana)
I can keep going on and on, but I know that most people working in the industry will agree that trust is, indeed, of critical importance. Among all the reasons why that is the case, there is one unique to security: in other fields, a customer is evaluating products against known or reasonably foreseen present and future requirements. When a finance department is looking for a new forecasting tool, it knows exactly what it needs to do today, and it can envision some great functionality for tomorrow. In cybersecurity, it is typically hard to make sense of what the vendor is offering to address the present-day threats, and it is not at all possible to predict future ones. A purchasing decision in security is a leap of faith: “Knowing what I know, do I think this company will be on top of new issues that neither of us can even begin to imagine when we’re signing the contract?”
Cybersecurity vendors use all sorts of tools and techniques to build trust. In the part that follows, I will cover some of them, looking at what works, what doesn’t, what is hard, and what are some of the risks and second-order consequences that may be caused (intentionally or unintentionally) by the vendor's actions. None of this is intended as a criticism of the actions of any specific company, but rather a discussion about the industry as a whole.
The sales methods employed by cybersecurity companies
The evolving nature of direct sales
A lot has been said about the fact that traditional sales methods such as email outreach, cold calling, and funneling everyone who shows interest into a long buying process no longer work. If one only attends practitioner-focused industry events, it is indeed tempting to believe that “sales is dead”. The truth is a bit more nuanced: companies in all industries spend a lot of time and effort optimizing their sales process, tracking sources of leads, and attributing new customers to specific events; if outbound sales did not work at all, it would have been eliminated long ago. There are still people who buy the old way, and that’s why it is too early to declare the end of the traditional process.
Selling cybersecurity used to involve a CISO from the buyer side, and a Head of Sales from the seller side. Over time, as many security leaders started to run away from anyone with “sales” in their title, vendors rushed to rename their Heads of Sales to Heads of Revenue, and some - to things that sounded vague and generic such as Head of Customer Enablement. The promise of this change was that hearing “Head of Customer Enablement” would not immediately put prospective buyers into a defensive mode, enabling vendors to build more genuine relationships. CISOs could see right through this strategy, knowing that most “good relationships” are there just to make a sale, so vendors came up with a new idea.
It is easy to notice that an ever-growing number of cybersecurity companies started hiring security leaders and giving them the title of CISO. This is one of the most creative ways to build trust and help the vendor reach potential buyers, but it is also very delicate and fragile. On one hand, an established CISO joining a security startup can create a unique distribution channel and build trust by being able to tell their peers “hey, instead of talking to an account executive, why don’t we just hop on Zoom for a catch-up?”. On the other hand, a CISO working for a vendor would have to be very smart about the ways they do it. Nobody wants to ruin the relationships they built over decades of work in the industry by bluntly pitching security products to their peers. I think that hiring CISOs to sell to CISOs is not a bad model as it aligns the incentives quite well: a tenured security leader with a good reputation working for the vendor is more likely to refrain from unethical sales practices and blunt pitching so that they can preserve their reputation. I can imagine rare cases where this could backfire, for instance, if vendors start hiring people with experience in the industry but little desire to continue to maintain trust with their peers. Even in that situation, it is arguably better to have people involved in the sales process be able to speak the same language and constructively debate available solutions.
If having peers recommend solutions to one another was the only new invention, there would be no problems. Unfortunately, another thing we have been seeing in the space is the emergence of “consultants” who offer security vendors “unique opportunities” to sponsor private events where a few CISOs can get their undivided attention for a few hours to build relationships and talk about products. It is not my place to offer judgment about anyone’s integrity, intentions, or the way companies choose to select security solutions - these are questions I deem outside of the scope of this piece. It may very well be that a few hours on a private yacht won’t do much harm to the industry, but the fact that this happens can sometimes cast a shadow on the way security products are bought.
Selling to security professionals
Security professionals are notorious for not being interested in talking to salespeople. That is fair, as practitioners typically have an idea of what they are looking for, and specific questions that sales teams are unable to answer; either because of the way the sales process is structured (in the case of pricing questions) or their area of expertise (in the case of product questions). The way many vendors design their sales process does justify the anger commonly expressed on Reddit, Twitter, Mastodon, and other social media. There is no reason that in order to see what the product actually does or to learn about pricing, anyone has to sit through five demos, get “qualified” by showing that they can meet minimum spend requirements, sign a letter of intent, a non-disclosure agreement, and so on. This is why any elements of product-led growth, such as transparent pricing and free tiers, are received so well by practitioners, as are open source security tools.
Security professionals are more than capable of seeing through the fluff of marketing sites, industry jargon, and never-ending abbreviations. Companies that understand this, and look to build trusting relationships with users of their products, commonly invest in hiring industry practitioners as sales engineers and solution architects and give them the ability to help prospective customers solve their problems, instead of demanding that they commit to buying new tools within unreasonably short timeframes.
Using external validation to build trust
Trusting peers and reference customers in cybersecurity
Because the sales process in cybersecurity heavily relies on trust, having the ability to showcase who already uses the product can make a big difference in customer acquisition, especially for an early-stage startup. This is easier said than done. As Lital Asher-Dotan rightfully pointed out in one of our discussions on LinkedIn, “It is tough to get customers to publicly endorse a product/solution they use - they prefer to keep their security tech stack as a secret. In other industries, customers are happy to share testimonials - sometimes for free because they love the product; sometimes for a perk like a discount”.
This is yet another example of how the simplest ways of building trust, so common in other industries, are often inaccessible to cybersecurity companies, who must look for other methods.
Trusting external security assessors
Cybersecurity is the only field where, as a part of the buying process, the prospect tries to “break” the product and find gaps in the company’s tech stack, so it is no wonder that vendors are expected to explain their own security posture to potential customers. Buyers are essentially saying: “before you get the chance to help secure my organization, I want to know that you can secure yourself”. Imagine a patient expecting a surgeon to be fully healthy before agreeing to surgery, or an accountant having to have their own finances audited and meet a certain net worth requirement before being allowed to consult others. Pretty unusual, to say the least.
While potential buyers can test the product, it would not be sustainable if every buyer needed to also assess the company’s security posture as a whole. This is one of the reasons why asking for the SOC 2 compliance designation has become a standard part of the buying process in security. While no compliance measure can guarantee security (and “compliant” by no means is the same as “secure”), the fact remains that SOC 2, if done well, does promote maturation when it comes to organizational controls. The problem is that a whole industry of companies offering to get compliant “quickly and cheaply” has emerged to satisfy the demand, and that puts the integrity and the real value of the designation in question.
Word of mouth and brand ambassadors
Few things are as effective for startup growth as word of mouth. Because everything is based on trust, the most genuine and valuable are referrals and recommendations made without any incentives at play. The cybersecurity community is a tight-knit one, and we see again and again that companies doing the right thing can indeed earn the trust of leaders and practitioners who will then spread the word in their networks.
Fortunately for us and the industry overall, word of mouth cannot be bought. Unfortunately for companies, growing on organic referrals alone is hard, so vendors are trying to “hack” word of mouth. Some are inviting CISOs to become advisors or join advisory boards, while others are joining organizations that promise to get them in front of security leaders.
The most practitioner-centric startups are looking for ways to reward their active community champions and incentivize more loyal users to evangelize about them within their networks. A great example is ambassador programs, such as the one run by Snyk. Snyk’s program leads with the simple message: “Snyk Ambassadors are just as passionate about security as Snyk is — and they share their interest, expertise, and excitement within their communities to help other developers and engineers build secure software”. The company provides its ambassadors with a budget that they can use for personal development tools like books, courses, and other learning resources; sponsors travel for speaking engagements that feature Snyk as the topic; sends exclusive swag; and more.
As long as people’s affiliation is transparently disclosed, I think both advisory roles and ambassador programs can only have a positive impact on the space. There are no misaligned incentives, and I can hardly imagine someone not genuinely passionate about what Snyk is doing becoming an active community contributor to access professional development opportunities. Most importantly, these initiatives allow security practitioners to form communities of practice, network with peers, and advocate for solutions they believe in. It’s worth noting that not every company can build a successful ambassador program. Only those that solve real problems, and can get people excited about their vision of the future, can pull it off without causing harm to their brand.
Industry awards & pay-to-play recognition
Every few months, we see hundreds of companies blast their social media and issue press releases celebrating new awards as “#1 X in the industry”. Receiving public recognition has become one of the ways to get press coverage and, most importantly, build trust with customers. The challenge is that very few awards in the industry are not pay-to-play: the SC Awards, awards by SANS, the Pwnie Awards, SINET16, and a few others.
When industry recognition is for sale, it damages trust and calls into question the integrity of companies that choose to look for these kinds of shortcuts. It is common to see the pay-to-play “award” hosts target early-stage founders desperate to get some awareness, struggling to get their actual customers to agree to provide public testimonials, and hopeful that getting a digital medal can help them find early adopters.
Press coverage: either rare or paid for
Press coverage is one of the most coveted ways to earn trust as it can enable companies to get exposure to a wide audience at no additional cost. The challenge is that it is easier said than done.
Most journalists covering cybersecurity are looking for events that can be of interest to either a very broad audience (for mass media) or a very narrow one (such as media that cover new vulnerabilities). Security vendors typically do not fit either category and, as a result, struggle to attract any meaningful media attention. One exception is companies with great research teams that offer interesting, newsworthy information about new vulnerabilities, attack groups, and the like. While hundreds, if not thousands, of companies bombard journalists with “important” announcements daily, the bar for having a reporter write about a security company is very high. This is why whenever we do see good, comprehensive coverage about a security vendor, it is typically either because the company is a market leader, it has raised a significant amount of capital to do something that has not been successfully done before, or one of the startup’s executives is directly relevant to the story and can offer deep insights (such as an ex-high-ranking government employee).
Since many companies would love external press coverage, but few can access it, a new offering has sprung up: pay-to-play media coverage. Pay-to-play “magazines” help vendors get their thought leadership, ideas, and perspectives wrapped into news-like “featured” content that can then be reused in company marketing and is mostly intended to boost their search engine rankings (SEO). The trend of vanity articles and magazine features appears to be on the rise; offerings in the area are commonly bundled with the ability to be named the “top vendor in X area”.
“Cui bono?” and the tragedy of the commons in cybersecurity
It is almost ironic that in an industry so heavily reliant on trust, it is hard to find trustworthy sources to understand problems, learn about potential solutions, or even, at a more basic level, understand the mechanics of the industry. This is no surprise, as humans, by definition, cannot be purely objective. “Cui bono?”, Latin for “to whom is it a benefit?”, is a question that anyone reading security-related materials (or anything else for that matter) should learn to ask themselves.
The cybersecurity industry is a textbook example of the tragedy of the commons. The tragedy of the commons occurs when individuals with access to a shared resource (a common), unrestricted by shared norms or formal rules that regulate the use of this resource, act in their own interest and, by doing so, ultimately deplete the resource. There are many examples of this issue - from traffic congestion to fast fashion and overfishing. In cybersecurity, the resource that gets depleted when everyone acts in their own interest is trust.
I have described how the buying process and the incentives at play lead to distrust. Security vendors look at the space and think “if I just do that, that should be fine”. And so, capabilities get overstated, marketing messaging gets more and more blurry, and sales tactics get grayer and grayer so that targets can be met, “just for this quarter”. Companies think they are the only ones who are so smart, but because almost everyone thinks that, we end up with “BS bingo” games at the RSA conference. The tragedy of the commons at its finest.
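The dynamic is easy to see in a toy simulation (my own illustration, with invented numbers - not based on any real data): every vendor that overstates its capabilities gains a small short-term edge, but each overstatement also drains a shared pool of buyer trust that all vendors depend on.

```python
def simulate(rounds: int, overstaters: int) -> float:
    """Return the remaining shared buyer trust (0.0 to 1.0) after `rounds`
    selling cycles.

    Honest vendors leave the common pool intact; each vendor that overstates
    its capabilities erodes it by a small fixed amount per round. The erosion
    rate is arbitrary - it only exists to make the dynamic visible.
    """
    trust = 1.0
    erosion_per_overstatement = 0.01
    for _ in range(rounds):
        trust -= overstaters * erosion_per_overstatement
        trust = max(trust, 0.0)  # trust cannot go below zero
    return trust

# If nobody cuts corners, the pool survives; once "just this quarter"
# thinking spreads to most of the market, the shared resource is gone
# long before any single vendor intended it.
print(simulate(rounds=10, overstaters=0))   # 1.0 - trust intact
print(simulate(rounds=10, overstaters=15))  # 0.0 - trust fully depleted
```

No individual vendor's choice looks catastrophic in isolation; it is the sum of everyone making the same "small" choice that empties the pool - which is exactly the tragedy-of-the-commons structure.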
This is, however, just the tip of the iceberg, as the problems of trust extend beyond vendor-buyer relationships.
Some sources of information are inherently biased: any content produced by a security vendor is an investment to achieve a business objective. Whether we are talking about blog posts, surveys, or whitepapers, it is naturally not possible to find a report suggesting that the vendor who commissioned it is not doing a good job or not solving a critical problem. Some consultants and market research firms are eager to produce on-demand white papers as well; the level of objectivity for these is typically dependent on the supplier’s own moral compass and internal guidelines.
Several not-for-profit organizations and independent industry news media are often able to maintain high levels of objectivity, but even they are not immune to all the challenges. For instance, whenever statements such as “X% of CISOs say that” are made, there is often a possibility that some of these CISOs work for vendors, or have invested in specific companies, which, in turn, creates an incentive to drive a certain narrative. That is not to say that the sole fact that a person has invested in or works for a service provider makes them biased, but knowing whether a conflict of interest might be at play would most certainly help.
Then, there is the selection bias. To understand how it manifests itself, let’s take industry surveys as an example: those conducted by vendors themselves will naturally feature responses from people in their reach, meaning those who are more likely to be aligned with the company’s view of the world. On the other hand, surveys conducted by market research firms typically rely on responses from people who have specifically signed up to be compensated for participating in surveys - a fact that may impact the results in different ways. Even when there is no evidence that people are being dishonest, the mere possibility that someone may have had a conflict of interest is enough to question their stance on a problem. Think about industry analyst firms such as Gartner, Forrester, and the like: while leading players in the space have rigorous processes for getting their reports, quadrants, and other output peer-reviewed, vetted, and rooted in quantitative research, the fact that there is a mere probability of bias, and often - an apparent conflict of interest - makes many industry players question their objectivity.
While questionable and misaligned incentives always have the potential to lead to undesirable results, when so many parties are trying to maximize their own outcomes in so many ways, the tragedy of the commons is inevitable.
Going into the future: getting out of distrust
It is all too tempting to start thinking that everything is doom and gloom, but I don’t think it is. Trust is a critical component of cybersecurity, but trust is not the same as blind trust. The “trust but verify” approach, in turn, is not the same as distrust; it is about expecting to see proof before forming a trusted relationship. That is why for us to evolve as an industry, we need to focus on transparency and integrity.
The increasing importance of vendor transparency
In the past, it was common practice for vendors to provide the absolute minimum of information and force customers to talk to a sales rep whenever they showed any degree of curiosity. The typical flow looks as follows: visit a generic marketing site that offers little information to evaluate the solution, and get presented with a “talk to the sales rep” form at every stage of the process. Interested in seeing pricing? “Contact us for a quote”. Interested in understanding what the product does? “Schedule a demo”. Want to know if we integrate with a solution you care about? “Talk to a sales rep”. This, however, is starting to change.
There are many layers to transparency when viewed in the context of vendor evaluation:
Can this vendor solve my problem? Better yet - can this vendor solve multiple of my problems?
How does the product work?
How much does it cost?
With any solution, there will always be different nuances, and given the importance of trust in cybersecurity, there will always be a time and a place for a conversation. However, there shouldn’t be a need to talk to sales to get access to basic API docs or to understand whether the product falls within the ballpark budget the security team can afford.
When all vendors in the industry play by the same rules and hide all information, forcing buyers to book sales calls, then, despite the horrific experience for security teams, providers have little incentive to change. In 2023, however, this is no longer the case. In most industry segments, we are starting to see vendors that do make their pricing, products, and documentation transparent. If, as I sincerely hope, they manage to attract customers and grow big enough to pose a threat to traditionally secretive providers, those who force everyone to “talk to sales to get a quote” will be forced to change.
Another factor that is making vendor transparency possible is the move from promise-based to evidence-based security I have discussed in depth before:
“The best way to build a security posture is to build it on top of controls and infrastructure that can be observed, tested, and enhanced. It is not built on promises from vendors that must be taken at face value. This means that the exact set of malicious activity and behavior you’re protected from should be known and you should be able to test and prove this.” - Source: Future of cyber defense and move from promise-based to evidence-based security.
Transparency around information sources, signals of trust, and industry participants
We need to work towards ensuring transparency when it comes to the information we consume. Such is the nature of the business that those building solutions are also the ones incentivized to talk about the problems. However, just because you learn about the problem from someone who gets paid for solving it, it does not mean the problem doesn’t exist. Instead of distrusting doctors’ diagnoses, we have built systems and tools to make health measurable; the same needs to happen in security.
When we read an industry report, it should be clearly stated who developed it, who paid for it, who was involved in its production, and what potential conflicts of interest there are. I have previously discussed this in the context of analyst firms.
The same transparency is needed for other signals of trust such as industry awards. How many companies applied for an award, how much did they pay for it, how many companies received it, what were the criteria, and who was the jury? In the meantime, there is no reason why, as an industry, we cannot maintain a list of predatory award providers, or pay-to-play magazines for that matter.
The strongest weapon against the pay-to-play space is indifference: if people in the industry stop cheering these “winners” and “authors” on social media, the frenzy will quickly die down simply because businesses won’t pay for methods that don’t drive the desired results. If we, however, keep congratulating one another on things that are clearly not an accomplishment, the number of pay-to-play offerings will keep growing, deepening our trust issues.
Transparency around vendor selection
Transparency in cybersecurity has many layers: there is the need for vendors to be open and upfront about the capabilities they actually offer and their price, the need for all market participants to disclose conflicts of interest and incentives that could be affecting the objectivity of their reviews, and the need for transparency for boards and leadership of organizations around how a certain vendor was selected and the factors that impacted the evaluation process.
While the first two are discussed by almost everyone and everywhere, the last one is most commonly forgotten. There are several reasons for this. First, boards and company leadership often have little to no understanding of security issues, and therefore they have to trust CISOs to make the right decisions. Not all security leaders are equally equipped to explain how vendor selection in their organization is done, partly because it is not possible to easily compare where different vendors actually stand in terms of their capabilities, and partly because security today is both an art and a science. There have also been cases when those in charge of IT purchasing would choose tools not based on merit, capabilities, and fit for organizational needs, but based on misaligned incentives - be it in-kind or financial rewards going to their own pockets. A case in point is this Netflix story where a VP of IT Operations was convicted of fraud and money laundering for accepting pay-to-play payments from tech startups seeking Netflix contracts.
I know that this is an incredibly rare case in the industry, and by no means am I trying to imply that this is a common practice. Having said that, the mere possibility of such a situation occurring is enough for boards and leadership teams to ask for more transparency and visibility around vendor selection. Most importantly, given the number of security breaches, CISOs themselves have every incentive to conduct in-depth product evaluations and ensure that the vendors they choose to work with can help secure their organization’s environment. With the move to evidence-based security, this should become if not easy, then at least somewhat possible to accomplish.
For the industry, even a rumor about the ability to “get in” through the side door can cause a lot of damage. Given the ever-growing number of vendors eager to get attention, and the amount of funding poured into the space, it is inevitable that if such an opportunity exists, and if we accept that it is a normal buying practice, we will quickly move away from merit and towards something else, which won’t help us protect our people and organizations.
Building, restoring, and preventing the erosion of trust is critical for the future of cybersecurity. Quite often, talking to security leaders and practitioners reveals a sense of pessimism about the direction the industry is going in. I am personally convinced that the future can be bright. If the community continues to push for change, learn, share best practices, and preserve integrity along the way, we will be in a good place to defend against the adversary.
While there are no magic tools to solve all the industry’s problems, transparency and integrity are as close as they can get. It will take time to weed out those who want to keep people and organizations in a state of unrest so they can continue to sell magic boxes, but I am sure that too is a solvable problem, as long as we have a sense of integrity and our hearts are in the right place.