There are only two sources of security issues: software bugs and configuration mistakes
This piece discusses the idea in detail, explains why this is the case, concludes that the “shift left” movement is not going to work, and shares a vision for the future
Earlier this year, I was fortunate to attend LocoMocoSec 2024, the Hawaiʻi Security Conference focused on product security. At the event, Ron Perris, co-founder of the conference and a software security engineer, and I got into a discussion about sources of vulnerabilities. The insight that came out of that conversation is that there are only two major sources of security issues: software bugs and configuration mistakes. While neither of us came up with this concept, we both agreed that this is a very insightful angle.
I am grateful to Ron for the great discussion at the conference and after it, as this article would most certainly not have been possible without those friendly brainstorms and exchanges of ideas.
There are two sources of security issues: software bugs and configuration mistakes
Software bugs
Bugs are software mistakes where the software was written in a way that doesn’t enforce the intended security mechanism. For example, JSON Web Tokens (JWTs) allow engineers to specify the signing algorithm, such as RSA or HMAC, in the JWT header. However, some implementations also permit using the ‘none’ algorithm, which means the token is not signed. If an application accepts the algorithm specified in the JWT header without verification, this could allow an attacker to set the algorithm to ‘none’, making the token appear valid without a signature. To prevent this vulnerability, receivers should enforce a predefined algorithm on their end, ignoring the algorithm specified within the JWT header. Not doing so would introduce a software bug that can lead to a security incident.
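The fix described above can be sketched in a few lines. This is a minimal illustration using only the standard library, not a production JWT implementation (a real application should use a maintained library and pass it an explicit allowlist of algorithms); the function and secret names are made up for the example:

```python
import base64
import hashlib
import hmac
import json

def b64url_encode(data: bytes) -> str:
    # JWT uses unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def verify_jwt_hs256(token: str, secret: bytes) -> dict:
    """Verify a JWT while pinning the algorithm to HS256.

    The 'alg' value in the header is checked, never trusted: a token
    claiming 'none' (or any other algorithm) is rejected outright.
    """
    header_b64, payload_b64, sig_b64 = token.split(".")
    header = json.loads(b64url_decode(header_b64))
    if header.get("alg") != "HS256":
        raise ValueError(f"unexpected algorithm: {header.get('alg')!r}")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
        raise ValueError("invalid signature")
    return json.loads(b64url_decode(payload_b64))
```

The key design choice is that the receiver decides the algorithm; the header is only used to detect a mismatch, never to select the verification logic.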
Configuration mistakes
Too often, what we see instead are cases where the software itself has all the right settings, but someone misconfigured it in a way that makes a compromise possible or highly likely.
For example, a common misconfiguration occurs when SSH is set to allow password-based authentication for all users, and the root account has an empty password. This combination allows anyone to log in as root without needing credentials, which is a serious security risk. Another common example of a misconfiguration is with WiFi access points: some manufacturers ship WiFi access points with insecure default settings. For instance, these devices may be set to broadcast on public networks by default, run with root privileges, and allow unrestricted access without requiring a password. In both cases, all the right settings are there but someone configures them in a way that makes a compromise highly likely.
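A toy auditor for the SSH example above might look like this. It is a sketch, not a hardening tool (real audits should rely on something like the CIS benchmarks), and the list of risky directives is illustrative, not exhaustive:

```python
# sshd_config directives (lowercased) and the values that, per the
# example above, make a compromise likely.
RISKY = {
    "permitrootlogin": "yes",
    "permitemptypasswords": "yes",
    "passwordauthentication": "yes",
}

def audit_sshd_config(text: str) -> list[str]:
    """Return the risky directives found in an sshd_config body."""
    findings = []
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue
        key, value = parts[0].lower(), parts[1].strip().lower()
        if RISKY.get(key) == value:
            findings.append(line)
    return findings
```

Note that fixing any finding requires only editing the configuration file, not changing sshd itself — which is exactly the distinction the next section draws.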
An easy way to discriminate between the two
An easy way to tell the difference between the two is to see what it takes to fix a vulnerability in question. If a software change is required, then it’s a bug; if someone needs to make changes to the settings, then it’s a misconfiguration.
Illustrating these ideas with MITRE ATT&CK framework
To illustrate the thesis that there are two sources of security issues, software bugs and misconfigurations, we can look at the MITRE ATT&CK framework.
MITRE ATT&CK, a globally accessible knowledge base of adversary tactics and techniques, is attempting to bring rigor to the idea that we can identify and describe steps adversaries can take to achieve their goals. It summarizes the knowledge in a matrix that describes the high-level stages an adversary would go through to achieve their goals (Reconnaissance, Resource Development, Initial Access, Execution, Persistence, etc.), as well as tactics they would employ at each stage.
Arguably the most critical stage is gaining access to the target system. MITRE calls it “Initial Access” and describes it as follows: “Initial Access consists of techniques that use various entry vectors to gain their initial foothold within a network. Techniques used to gain a foothold include targeted spear phishing and exploiting weaknesses on public-facing web servers. Footholds gained through initial access may allow for continued access, like valid accounts and use of external remote services, or may be limited-use due to changing passwords.”
Image Source: MITRE ATT&CK
If we look closely at the techniques bad actors can use to gain their initial foothold within an organization, it quickly becomes clear that they are possible due to either software bugs or misconfigurations:
Content Injection. OWASP explains it as follows: “When an application does not properly handle user-supplied data, an attacker can supply content to a web application, typically via a parameter value, that is reflected back to the user”. The key here is that the application doesn’t handle the data the way it should, which is a software bug.
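A minimal illustration of the bug and its fix, assuming a hypothetical handler that reflects a query parameter back to the user:

```python
from html import escape

def search_results_page(query: str) -> str:
    """Reflect user input safely.

    Interpolating `query` raw would let an attacker inject markup
    (e.g. query = '<script>...</script>'); escaping on output turns
    the payload into inert text.
    """
    return "<h1>Results for: {}</h1>".format(escape(query))
```

The bug-versus-fix line is exactly where the article draws it: handling user-supplied data properly is the application's job, and failing to do so is a software bug.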
Drive-by Compromise. Although there are many ways in which this attack can be accomplished, it is only possible because of security bugs in the browser.
Exploit Public-Facing Application. A public-facing application can only be exploited if it has something exploitable, which is a security bug.
External Remote Services. In this case, it’s either a bug in a remote service, or it has been misconfigured to allow users to log in when they should not be able to do that as it has previously happened with tools like VPNs, Citrix, and others.
Hardware Additions. Although this technique, as its name makes clear, relies on hardware, it can be argued that software should be written so that it doesn’t trust computer accessories, networking hardware, or other computing devices without verifying that they can be trusted. It is therefore still a software bug that leads to exploitability.
Phishing. If a login system can be compromised using stolen credentials, there are software controls or configurations that can be added to make the compromise more difficult or impossible.
Replication Through Removable Media. This technique is enabled by operating system misconfigurations, such as leaving AutoRun enabled for removable media.
Supply Chain Compromise. As MITRE explains, “While supply chain compromise can impact any component of hardware or software, adversaries looking to gain execution have often focused on malicious additions to legitimate software in software distribution or update channels.” The fact that there are sometimes no controls in place to handle these cases is a clear example of software bugs.
Trusted Relationship. Systems have to be configured to trust some people, and “true zero trust” (if such a thing even exists) can be hard to accomplish. That said, the tooling is available. The problem is that we frequently see misconfigurations that assume trust where it should not be assumed, and a lot of software is written in ways that allow adversaries to subvert the system into doing what it wasn’t designed to do (bugs).
Valid Accounts. These are typically misconfiguration problems. For example, MITRE says “In some cases, adversaries may abuse inactive accounts: for example, those belonging to individuals who are no longer part of an organization.” There are a plethora of tools that make it possible for IT and security teams to solve these problems, so when issues remain, it is not for the lack of solutions.
It follows that attackers cannot get into a machine unless there are misconfigurations or software bugs. And, since Initial Access is a prerequisite for all other actions they can take (Privilege Escalation, Lateral Movement, Exfiltration, and so on), eliminating software bugs and misconfigurations would make cyber attacks impossible.
Lack of incentives is why these problems persist
As an industry, we have tried many approaches to solving the problems of software bugs and misconfigurations. So far, we haven’t arrived at a viable solution. The reason is simple: lack of incentives.
Lack of incentives for tech vendors
In order for technology companies to invest in security, there needs to be demand on the market for secure software. In other words, the market has to reward security. So far, it doesn’t appear that it is happening. If it’s any consolation, it’s not just security but software quality in general that doesn’t get rewarded as much as we would hope. Performance, maintainability, accessibility, and other attributes of quality code do not directly compel customers to pay more.
It gets even more interesting when we realize that rather than investing enormous resources into building software securely, companies can just add security products to their portfolio and get their customers to pay more. In extreme examples, not only does the market not punish tech companies for creating insecure code, but it actually rewards them for selling the solutions to problems they themselves have created. And, since the creators of the problems usually understand them better than anyone else, they are also best positioned to design products that solve them better than others.
Another factor worth mentioning is that there is simply never a right time to focus on security. When a startup is small, it needs to focus on rapid iteration and finding product-market fit (and there are no customers and no valuable intellectual property, so nothing to protect anyway). If it is successful, it needs to focus on scaling, which naturally demands full attention. Once the company goes public, it needs to defend its valuation, stay lean, and keep revenue per person high, so that isn’t the right time to add more security people either. In short, there never seems to be a right time to focus on security.
Lack of incentives for software engineers
Although the security industry has been talking about “shift left”, the reality is that software engineers didn’t have any incentives to focus on security a decade ago, and they still don’t have any reasons to do it now. The incentive misalignment is at the core of why the “shift left” movement failed: software engineers don’t get paid more for writing code more securely, and don’t get promoted for reducing the number of vulnerabilities.
Industry veterans like Jeremiah Grossman observed this reality over a decade ago, and little has changed since then.
Image Source: Jeremiah Grossman
Lack of incentives for security practitioners
Security practitioners also lack incentives to solve the problems at their core. Two to three decades ago, security enthusiasts were trying to do the right thing, focusing on problems from first principles. As what was once a space for hobbyists grew into one of the largest markets in technology, the reality changed. Today, there are thousands of vendors and hundreds of thousands of security practitioners whose very existence depends on security problems continuing in perpetuity. This is not to say that people are dishonest: individually, the vast majority of security practitioners are mission-driven and extremely passionate people. However, taken together as a workforce, they most certainly want the industry to exist (nobody wants to be made redundant and have decades of their hard work devalued in an instant).
Going into the future: finding scalable solutions
It may sound like everything is doom and gloom, but it isn’t. Similar to how we have many reasons to be optimistic about the industry at large, there are plenty of reasons to be optimistic about solving the problems of software bugs and misconfigurations. What matters is that we continue to test our assumptions, do more of what works, and stop doing what doesn’t.
Solving the problem of software bugs
Shift left isn’t going to work
Step one to solving the problem of software bugs is finally admitting to ourselves that shift left is not going to work (not without realigning incentives, and that isn’t happening anytime soon). When, earlier this year, OWASP Global, one of the largest and best-known application security conferences, had to cancel its DevDay, it became clear that there are no software engineers to “shift” anything to. To put it differently, if we can’t find enough software engineers interested in learning about security in the Bay Area, the world’s home of technology, it’s time to rethink some of our core assumptions.
We can and should focus on designing things secure from the ground up. We can and should do our best to address the vulnerabilities earlier in the software development lifecycle. And yet, we have to also be realistic: if today we were to get 100 software engineers and 100 security practitioners in a room and ask people who can build products securely to raise their hands, we would probably have at most 10 or 15 hands up. The rest either can’t code (not enough to build products) or don’t understand security.
If you are interested in reading more on this topic, check out "Shift Left" is Starting to Rust.
Building secure defaults and making them easy to adopt
We can’t hope that every developer will learn security and be able to implement XSS defenses, build an auth system, or verify JWTs. Instead, we should have security engineers build secure defaults and make them easy to adopt. As early as 2013, Jason Chan, for example, was talking about the concept of guardrails and paved roads, something we are now finally starting to discuss and adopt, even if slowly.
The good news is that the vendor market is slowly starting to catch up with companies such as Chainguard and Resourcely paving the way to secure defaults.
Only if security experts focus on building systems that provide secure defaults and are hard to misuse will we be able to move the needle in making software more secure.
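As a concrete sketch of what a “paved road” primitive can look like, consider a hypothetical helper that issues session cookies with hardened attributes baked in. The helper name and defaults are made up for illustration; the point is that developers get the secure behavior for free and would have to go out of their way to loosen it:

```python
from http import cookies

def session_cookie(name: str, value: str, max_age: int = 3600) -> str:
    """Build a Set-Cookie header with hardened attributes by default.

    Callers get Secure, HttpOnly, and SameSite without asking for them,
    which is the essence of the guardrails / paved-road pattern: the
    easy path is also the safe one.
    """
    c = cookies.SimpleCookie()
    c[name] = value
    c[name]["secure"] = True       # only sent over HTTPS
    c[name]["httponly"] = True     # not readable from JavaScript
    c[name]["samesite"] = "Strict" # not sent on cross-site requests
    c[name]["max-age"] = max_age
    return c[name].OutputString()
```

A team shipping this as the only sanctioned way to set session cookies removes a whole category of per-developer decisions, which is exactly what secure defaults are meant to do.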
Investing in product security
Ron Perris, a long-time software security engineer, has a great perspective on product security. In his view, application security is about finding individual bugs and driving their resolution, individually or at scale. Product security, on the other hand, is about eliminating entire classes of bugs, as when React got rid of XSS on many websites. This is what LocoMocoSec, a security conference he co-founded, was designed to do.
Image Source: Lukas Weichselbaum, LocoMocoSec keynote slides on "Google's Recipe for Scaling (Web) Security"
We need to come up with technologies, standards, and tools that make it possible to eliminate the entire class of bugs making our software insecure.
Solving the problem of misconfigurations
To solve the problem of misconfigurations, we need to build software that is secure by default. Chris Hughes has a great summary of how CISA defines that: “CISA emphasizes the role of configurations when discussing Secure-by-Default, where they say Secure-by-Default includes examples such as:
MFA
Eliminating Default Passwords
SSO
Secure Logging
CISA even introduces a new concept they call “Loosening Guides”, where rather than “Hardening Guides” for products/services/software, organizations would receive an inherently secure and hardened product and then need to loosen the product by changing default configurations which were baked-in to prevent malicious activity or putting end users at risk of exploitation and incidents.
We all know the experience of having products and then needing to go apply CIS Benchmarks, DISA STIG’s, Vendor Guidance and so on to harden the product/software to ensure we reduce its attack surface.
This paradigm shift would pivot from that model and have products arrive in a hardened state and require customers to roll them back, or loosen the hardened configurations to tailor it to their needs. This of course would cause friction when it comes to user experience, where users feel the need to tailor a product to do exactly what the business needs or wants it to do, even if some of that functionality may not be in the best interest of security.
One of the most notable examples of Secure-by-Default that comes to mind for me when thinking about cloud for example is the AWS Simple Storage Service (S3). We saw several high profile broadly impactful security incidents due to publicly exposed AWS S3 buckets, often exposing sensitive data.” - Source: Secure-by-Design vs. Secure-by-Default: What's the Difference?
We cannot hope to train millions of people and organizations around the globe to configure software securely; software vendors should take responsibility for shipping products hardened from day one.
A note about humans and social engineering
While there are indeed only two major sources of security vulnerabilities, there are three major attack vectors. Aside from software bugs and configuration mistakes, a big source of security incidents is social engineering. The topic is too broad to be discussed in this piece, but I’ve previously written about some of its aspects: Security awareness won’t save us, and people will continue clicking on links (as they should).