Security awareness won’t save us, and people will continue clicking on links (as they should)
Getting people to help protect their organizations: shortcomings of security awareness, why employees will continue to click on malicious links, and what we need to do about it
In March, I published a piece about consumer-focused security, explaining why people don’t pay attention to security, which types of B2C security products have seen adoption and which didn’t, and what the future of consumer-focused security looks like.
When we discuss the role of employees in protecting their companies, it’s often said that people are the weakest link and that no matter what we do, users will continue to “fail” security.
In this article, I am looking at the “people and security” problem from a different, company-centric angle. I will discuss what makes employees not pay attention to security, why security awareness will continue to fail, and how we can get people to become better at security.
My book, ‘Cyber for Builders’, has been named a finalist for the SANS Institute Difference Makers Award. If you got value from Venture in Security or ‘Cyber for Builders’, or if you’re a fan of what I'm doing, I’d be very grateful for your vote here (it takes less than a minute).
Voting is open now through Friday, October 4 at 5:00 PM EDT (UTC-4). Thanks for your support!
Before we begin, do me a favor and make sure you hit the “Subscribe” button. Subscriptions let me know that you care and keep me motivated to write more. Thanks folks!
Reasons why historically, employees and business leaders haven’t been paying attention to security
The struggle of securing companies has the same roots as the struggle of securing individuals
Although security practitioners like to think that the security problems of consumers have little to do with the security problems of enterprises, this could not be further from the truth. Most security teams understand what makes individuals carefree about securing their personal data, yet somehow assume that the same people will become “human firewalls” the moment they log into their work machines. While this doesn’t usually happen, humans are indeed great at compartmentalizing: I have met countless engineers who don’t use MFA or password managers in their daily lives even though they follow all known security best practices at work.
The struggle of securing companies has the same roots: people don’t like dealing with friction and will do anything possible to avoid it. Enterprise security teams tend to introduce a lot of friction: banning different tools, creating complex approval processes, forcing people to go through lengthy compliance paperwork just to implement a simple change to their workflow, and so on. What we call shadow IT, shadow identity, and other such behaviors are all forms of avoidance, a natural response to complexity. They are what people do to get their jobs done in what they see as the most efficient way.
The incentive structure of the business world doesn’t encourage security
The way companies set goals and push for growth amplifies these natural behavioral patterns. For example:
Sales teams are evaluated based on their ability to achieve quota, not on their security habits. If they can find a way to do it faster and more efficiently, they will be seen as more successful than their peers.
Engineering teams are evaluated based on their ability to ship high-quality software fast, not on their security habits. If they can find a way to do it faster and more efficiently, they will be seen as more successful than their peers.
This list can go on and on. When goals are ambitious and security isn’t part of company culture, and consequently of people’s performance evaluations, it is not at all surprising that most people have good reasons to look for the most efficient, not the most secure, ways of accomplishing their goals.
Security tends to result in more processes, and more processes reduce the speed and agility with which people and organizational functions can achieve their goals. Because of that, there seems to never be the “right time” to do security.
An early-stage, pre-product-market fit startup needs to move as fast as it can to validate the problem and get to the product-market fit. If it adds too many processes and reduces the speed of iterations in order to prioritize security, it may run out of money before it can find a viable business model. If that happens, it won’t matter how secure the company’s minimum viable product (MVP) is because the company will have to shut down. On the other hand, if the breach occurs, the company stands to lose relatively little: it doesn’t have much money, a strong brand, an established customer base, etc. Moreover, the blast radius of a potential incident would also likely be small because very few organizations are using the company’s product.
Once a company has found product-market fit, it needs to move fast to achieve growth and scale so that it can get as much market share as possible before its competitors realize what’s happening and catch up. If it adds too many processes and reduces the speed of iterations in order to prioritize security, it may run out of money before it can scale, or it will be outcompeted and end up with a much smaller market share than it could otherwise attain.
Once a company becomes a market leader, goes public, or gets acquired by a large player, it cannot slow down too much as it needs to continue growing. By now the stakes are much higher, the expectations are much higher, and the amount of capital that needs to be invested in security is also much higher. At this stage, the cost of a security incident is high, but so is the opportunity cost of funding a security program. Say a security team argues that investing an additional $2.5 million in security would substantially reduce the probability of an incident. The executive team is faced with a gamble. Should it spend $2.5 million on security? Or should it invest that $2.5 million in sales and product development, turn it into $25 million within a year, cross its fingers that it won’t get breached, and hope that if it does, the amount it needs to pay will be lower than $25 million? If it ends up paying, say, a $10 million ransom, the decision not to invest the $2.5 million in security would still yield $12.5 million in profit ($25 million minus the original $2.5 million minus the $10 million ransom). I am sure every security professional reading this is screaming inside, “No, it isn’t that simple, as people’s sensitive data is at stake!”, but this is an oversimplified version of the real calculations every executive team has to go through. Historically, the impact of security incidents on customer loyalty has been minimal (ask yourself how many people you know stopped using Uber or Microsoft software after those companies were hacked). The impact of cyber breaches on companies’ stock prices has also been negligible. All of this justifies this kind of math to executive leadership.
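The gamble above boils down to a simple expected-value comparison. A minimal sketch of that arithmetic follows; the breach probabilities, the 10x growth multiple, and the dollar figures are illustrative assumptions for the sake of the example, not data from any real company:

```python
# Toy expected-value comparison an executive team might run.
# All figures and probabilities are hypothetical assumptions.

def expected_profit(invest_in_security: bool,
                    budget: float = 2_500_000,
                    growth_multiple: float = 10.0,
                    breach_probability: float = 0.3,
                    breach_probability_with_security: float = 0.05,
                    breach_cost: float = 10_000_000) -> float:
    """Expected profit of spending `budget` on security vs. on growth."""
    if invest_in_security:
        # Budget goes to security: no growth return, but breach risk drops.
        return -budget - breach_probability_with_security * breach_cost
    # Budget goes to sales/product: multiplied return, full breach risk remains.
    return budget * growth_multiple - budget - breach_probability * breach_cost

print(f"Invest in growth:   {expected_profit(False):>12,.0f}")
print(f"Invest in security: {expected_profit(True):>12,.0f}")
```

Under these assumed numbers the growth option wins on expected value even with a 30% chance of a $10 million breach, which is exactly why “just invest more in security” is a hard argument to win in a boardroom.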
The struggle of trying to strengthen an organization’s security posture through security awareness
For several decades, security teams have been trying to strengthen their organizations’ security posture by turning people into what is often described as “human firewalls”. Every year, companies put people through security awareness training, organize phishing simulations to assess what they learned, and so on.
Several products and product categories are competing in the space of security education and awareness training. Some companies take a newer approach and send tailored just-in-time notifications through Slack and other messaging apps to the right people at the right time, while others continue to film the same “phishing 101” videos and make them impossible to skip or play in a separate window (although as we all know, few people have become more enthusiastic about security after being forced to watch a boring training video).
I have little doubt that ChatGPT and large language models (LLMs) are going to reshape the way security training is delivered: it will be more personalized, more contextual, and tailored to what a specific individual needs to know. That said, while security awareness training is certainly useful, it has major shortcomings regardless of how it is delivered. First and foremost, behavioral change is hard, and because human nature doesn’t change simply because we learn new facts, awareness is never enough. As someone smart once said, “If awareness alone was enough, nobody would be smoking”. More importantly, as adversaries improve their techniques, phishing, vishing, and smishing attempts will become increasingly personalized and convincing. People will find it harder and harder to tell the difference between, say, a real email from a colleague or business partner and a phishing attempt.
Going into the future: recipes for solving the “people problem”
Accepting and embracing human nature as is
In the article about consumer-focused security, I argued that we must accept human nature as is, and design defensive measures for it, not against it. The same mindset is needed to protect the enterprise.
“Although the business of protecting individuals and getting individuals to help protect businesses look like separate problems, they have common causes: our human nature. There is a long list of cognitive biases we are affected by, coupled with our innate drive to avoid pain and run away from friction, and our inability to understand risk, to name a few. For decades, cybersecurity discipline focused on solving technical problems, dismissing people as “the weakest link” and assuming that simply forcing them to become more “aware” of security is going to magically change their nature.
This is not going to happen. One of the core reasons adversaries win against security teams is because they understand people, their fears, motivations, and behavioral drivers, and they learn how to exploit them to achieve their goals. When people are urged to act quickly, when they are threatened, when they are curious or afraid, they tend to act irrationally. No security awareness is going to change that.
The first step for us to design a more secure future is to embrace human nature and design security for it, not against it. First and foremost, this means designing security measures with a full understanding that people will always look for the easiest way to accomplish their goals. It also means accepting the fact that we cannot expect consumers to buy separate security tools - we need to build security into the current infrastructure and make it a default (and the only) option.” - Source: The business of protecting individuals: realities of consumer-focused security, why we cannot expect that people will buy security tools, and what we need to do instead.
Securing businesses: future of getting employees to secure their companies
Learning psychology and developing a discipline of cybersecurity UX
For the longest time, security has been seen as purely a technology problem. Security practitioners focused on building technical controls to strengthen the environment and reduce the probability and potential impact of cyber incidents. People were viewed as simply one of the sources of risk and instructed to follow a long list of policies and procedures in order to ensure that their behavior was “appropriate” and “secure”. We are slowly starting to realize that we cannot simply train and police every employee to do the right thing. Instead, we need to learn how humans behave, what drives them, and how to incorporate it into the way we design our security controls.
Although security is slow to learn the importance of behavioral design and user experience, we have a great example of how this was done in the field of software engineering. Not too long ago, software was designed to be technology-centered, and people were expected to read long manuals, sift through hundreds of technical documents, and even memorize specific commands to interact with a specific tool. Later down the road, we learned that this way of building software results in great barriers to adoption, that people forget how the manufacturer wants them to interact with the tool and make mistakes, and that they will inadvertently break something if they cannot easily tell how to accomplish a task on their own. These revelations made software engineering teams start thinking about how to design products that people would find easy to use, instead of teaching users how to interact with their complex machines. Fast forward to today, and user experience (UX) design is a thriving knowledge area and a professional discipline that has changed the way technology is built.
We need to take a similar approach to security if we want to truly make people and organizations more resilient. In 2024, writing and enforcing hundreds of pages of security policies is the equivalent of expecting people to read a 100-page manual before interacting with a new software tool. Even those who have enough patience to sit through this experience are likely to forget what they read in the first part of the document by the time they are midway through it.
We need to develop a discipline of cybersecurity UX. To change the way security is delivered we need to start by learning about people, their psychology, how they behave, and what drives their decisions. Every security practitioner who hopes to build effective defenses needs to know what cognitive biases are and how they affect individuals. They need to seek to understand humans the way they are, instead of aspiring to make them behave as rationally as software.
Making it easy and frictionless for people to choose the most secure behavior
Security awareness is important, but we must not place the responsibility for security mistakes on people. Instead, knowing that people will do anything to avoid friction, we need to design security controls so that the most secure behavior is also the most efficient and frictionless one.
Jason Chan, former VP of Security at Netflix, often talks about guardrails and paved roads:
The concept of guardrails is an attempt to move away from the traditional view of security as a gate, blocker, or bottleneck. Guardrails such as automated and integrated controls enable people to keep moving fast but also keep things safe.
Paved roads are a way to make the most secure behavior the easiest one to pick. A person could certainly bushwhack and make their way through the woods, but if they have a smooth paved road that gets them to their desired destination, they are likely to use it.
I believe these concepts need to become core pillars for designing the security of the future. We know that people will do anything to avoid pain and run from friction. It follows that by making the more secure path the easier one to choose, we greatly decrease the chances of someone accidentally deviating from it. Currently, the opposite is usually true: choosing the more secure behavior means accepting the obstacles it introduces.
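To make the guardrail idea concrete, here is a toy sketch of one: an automated pre-merge check that blocks a code change only when it matches a known-bad pattern (a hard-coded secret), and otherwise lets engineers keep moving at full speed. This is my own illustrative example, not something Chan or Netflix describes, and the two regex patterns are a deliberately minimal assumption rather than a real secret-detection ruleset:

```python
import re

# Toy guardrail: block a change only when it matches a known-bad pattern,
# instead of routing every change through a manual security review.
# The patterns below are illustrative, not a complete ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
    re.compile(r"(?i)(password|api[_-]?key)\s*=\s*['\"][^'\"]+['\"]"),
]

def guardrail_check(diff_text: str) -> list[str]:
    """Return offending lines; an empty list means the change may proceed."""
    return [line for line in diff_text.splitlines()
            if any(p.search(line) for p in SECRET_PATTERNS)]

clean_diff = '+ timeout = 30\n+ retries = 3'
risky_diff = '+ api_key = "sk-live-abc123"'

assert guardrail_check(clean_diff) == []   # fast path: nothing blocks the change
assert guardrail_check(risky_diff) != []   # guardrail catches the hard-coded secret
```

The point of the pattern is where the friction lands: the vast majority of changes sail through untouched, and only the genuinely risky ones hit a barrier, which is the opposite of a blanket review gate.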
Design security measures assuming that people will make mistakes and click malicious links
Even with guardrails and paved roads, security incidents are inevitable. A large part of our security defenses today is built on the assumption that people are not going to make mistakes. In reality, the opposite is true: it is almost inevitable that someone is going to click on a link they were not supposed to click, push a button they were not supposed to push, or let a bug through that they were supposed to catch. The question isn’t whether this is going to happen; the question is what the impact will be when it inevitably does.
Knowing what we know today, we need to design security measures with the basic assumption that people will make mistakes. It is important to take the right approach so that whenever it happens, the whole organization’s infrastructure doesn’t crumble like a house of cards.
Closing thoughts
I am not going to debate whether or not people are, as they say, “the weakest link”. Based on what I’ve seen, discussions around this topic don’t usually enrich our body of knowledge. The most important thing to remember is that people are just that: people. They are emotional and easy to manipulate, prone to making silly mistakes, and willing to do anything to cut corners, avoid pain, and reduce friction. Although this may not sound as hopeful as some would like, this is the reality, and it is this reality we should be designing our security defenses for.