Ross...
You were probably in grade school when I had this exact same conversation for the first time, in 2008 or so. It isn't a hard problem, btw, but a large part of the issue is that the historical data, which is so valuable, is not structured and normalized properly for this kind of use. Much of the underwriting data, both qualitative and quantitative, is soft data sitting in application forms and claims files that is not easily extracted.
The data is actually not complex or difficult to model at all. The hard part is getting the industry to build a standard now that AC$$D is no longer trusted by many, including me. I can build a base model in a few days, and I have built three insurance standards. But where do we go? There has to be a caretaker organization.
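To make "base model" concrete, here is a minimal sketch of what a normalized core for cyber-insurance data might look like. Every entity and field name below is an illustrative assumption of mine, not drawn from ACORD or any published standard:

```python
# A minimal sketch of a normalized base model for cyber-insurance data.
# All entity and field names are illustrative assumptions, not taken
# from ACORD or any other published standard.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Insured:
    insured_id: str
    industry_code: str           # e.g. a NAICS code
    annual_revenue: float
    employee_count: int

@dataclass
class Application:
    application_id: str
    insured_id: str
    submitted_on: date
    # The "soft" qualitative answers that today live in free-form PDFs;
    # normalizing them into coded fields is the hard extraction work.
    controls: dict = field(default_factory=dict)  # e.g. {"mfa": True}

@dataclass
class Claim:
    claim_id: str
    insured_id: str
    incident_date: date
    incident_type: str           # coded: "ransomware", "bec", "data_breach", ...
    paid_loss: Optional[float] = None
```

The modeling really is the easy part; the point is that the coded `controls` and `incident_type` fields only become useful once everyone agrees on the same codes, which is exactly where a caretaker organization comes in.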
Mica
D3FEND is certainly a good starting point, and a way to measure security controls in the field.
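As a rough sketch of what "measuring controls in the field" could mean: map each deployed control to a D3FEND technique and count coverage per defensive tactic. The inventory below is invented, and the technique IDs should be verified against https://d3fend.mitre.org before relying on them:

```python
# Hypothetical control inventory mapped to D3FEND technique IDs.
# The deployments are invented for illustration; verify the technique
# IDs and tactics against https://d3fend.mitre.org.
from collections import Counter

DEPLOYED_CONTROLS = {
    "Okta MFA":         ("D3-MFA", "Harden"),   # Multi-factor Authentication
    "Zeek sensors":     ("D3-NTA", "Detect"),   # Network Traffic Analysis
    "EDR agents":       ("D3-PSA", "Detect"),   # Process Spawn Analysis
    "DMZ segmentation": ("D3-NI",  "Isolate"),  # Network Isolation
}

def tactic_coverage(controls: dict) -> Counter:
    """Count how many deployed controls fall under each D3FEND tactic."""
    return Counter(tactic for _technique, tactic in controls.values())

if __name__ == "__main__":
    for tactic, n in tactic_coverage(DEPLOYED_CONTROLS).items():
        print(f"{tactic}: {n} control(s)")
```

Even a tally this crude makes gaps visible: a tactic with zero mapped controls is a conversation starter.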
When we take tools, processes, and people into the equation, I'm a big fan of the SIM3 v2 Assessment by the Open CSIRT Foundation: https://sim3-check.opencsirt.org/#
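SIM3 scores each maturity parameter from 0 to 4 across four quadrants (Organisation, Human, Tools, Processes). A toy aggregation might look like this; the parameter scores are made up, and a real assessment uses the full SIM3 v2 parameter list from the Open CSIRT Foundation:

```python
# Toy aggregation of SIM3-style maturity scores (0-4 per parameter).
# The scores below are invented; a real assessment uses the full
# SIM3 v2 parameter list from the Open CSIRT Foundation.
from statistics import mean

scores = {
    "O": {"O-1": 3, "O-2": 2, "O-3": 4},   # Organisation
    "H": {"H-1": 2, "H-2": 3},             # Human
    "T": {"T-1": 1, "T-2": 2},             # Tools
    "P": {"P-1": 3, "P-2": 1},             # Processes
}

for quadrant, params in scores.items():
    vals = list(params.values())
    # The weakest parameter often matters more than the average.
    print(f"{quadrant}: mean={mean(vals):.1f}, min={min(vals)}")
```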
Where the medical metaphor shows weakness is that human bodies are standardized (more or less), which enables topographical analyses. I've seen many network diagrams that were nowhere near the topography they were intended to illustrate.
Networks are also private. Gathering data on their structure, locations, and ownership and placing it in databases could make those databases a target for unscrupulous actors. That could become a nightmare scenario.
Cybersecurity needs a better taxonomy AND an improved focus on making decisions based on data and findings, whether qualitative or quantitative. Some of the work on the taxonomy will help turn what is now perceived as qualitative into quantitative, and thereby enable faster and more objective decisions around security priorities, posture, and investments.
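A toy illustration of that qualitative-to-quantitative step: once free-text findings are tagged with shared taxonomy categories, priorities fall out of simple arithmetic. The categories and findings below are invented:

```python
# Toy example: tagging free-text findings with a shared taxonomy
# turns qualitative observations into countable, comparable data.
# The taxonomy categories and findings below are invented.
from collections import Counter

TAXONOMY = {"access_control", "patching", "logging", "awareness"}

findings = [
    ("Admin portal lacks MFA", "access_control"),
    ("Domain controllers 6 months behind on patches", "patching"),
    ("No central log retention", "logging"),
    ("Shared service accounts in finance", "access_control"),
]

tallies = Counter(category for _text, category in findings)
for category in sorted(TAXONOMY, key=lambda c: -tallies[c]):
    print(f"{category}: {tallies[category]} finding(s)")
```

The counting is trivial; the value is that two organizations using the same categories can finally compare their numbers.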
Coming from the computer and electronics industry, I'm still surprised how little of the abundant data is captured and then used. Maybe it's the fact that cybersecurity became a "mainstream" investment only in the last 10 years or so, while the computer/electronics industry is much older. The good news is that cybersecurity will eventually come around and take data-driven approaches to these things.
Overall, I sense that the quantitative, "data-driven" view comes mostly from alert and remediation counts, while the qualitative view comes from burnout and the burning topics within the organization. That seems to be a legacy of the industry's reactive stance, and of the vendors who exploit it by coining more "acronyms" that create a sense of patching those burning needs while holding security professionals hostage to them.
A common taxonomy for cybersecurity would help us see things more objectively. Having created and managed taxonomies in the past, I can tell you that the first step is to provide something simple that delivers value and thus gains adoption. Then come back for more.