In this post, I will outline the main reasons why you should stop using Risk Matrices, as explained in the book “How to Measure Anything in Cybersecurity Risk” by Douglas W. Hubbard and Richard Seiersen.
The Psychology of Scales
Ordinal scales are inherently ambiguous. Several studies have shown that expressions such as “Likely”, “Unlikely”, or “Very Likely” carry widely different probability meanings for different individuals. Richards J. Heuer showed, for example, that for a group of NATO officers the words “Very Likely” could mean any probability from 50% to 100%, and the words “Unlikely” any probability from 5% to 35%. Budescu found similar patterns in his research on the ambiguity of the terms used to describe climate change risks.
Some individuals also tend to let their aversion to a given risk influence their probability assessment, with a more or less conscious reasoning such as: “this risk is too serious, I will raise my likelihood assessment because we must avoid it!”
Too often, no time period is specified with a Risk Matrix: yet the probability of a risk occurring within the next year and within the next ten years are not the same!
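To give an order of magnitude with an illustrative number of my own: a risk with a 10% chance of occurring in any given year has roughly a 1 - 0.9^10 ≈ 65% chance of occurring at least once over ten years (assuming the years are independent), so a stated likelihood means little without a time horizon.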
Finally, the definition of the scale itself has been shown to have a large influence on how individuals assess risks: the number of levels, the direction (high to low, or low to high), and whether words are attached to the levels all matter. For example, individuals select the value 1 more often on a scale from 1 to 5 than on a scale from 1 to 10, even when the value 1 is given the exact same definition.
How Risk Matrices distort reality
When we combine two ordinal scales and represent risks in a Likelihood / Impact matrix, we introduce even more ambiguity.
The first problem is range compression: a wide range of continuous values is collapsed into a single ordinal level. For example, in a Risk Matrix the two following risks could be classified in the same cell, even though the expected loss from Risk B is 100 times larger (see the sketch below):
- Risk A: Likelihood is 2%, impact is $10 million
- Risk B: Likelihood is 20%, impact is $100 million
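To make the compression concrete, here is a minimal Python sketch, using only the figures from the example above, that computes the expected annual loss of each risk:

```python
# Minimal sketch: expected annual loss = annual likelihood x impact if the event occurs.
risks = {
    "Risk A": {"likelihood": 0.02, "impact": 10_000_000},
    "Risk B": {"likelihood": 0.20, "impact": 100_000_000},
}

for name, risk in risks.items():
    expected_loss = risk["likelihood"] * risk["impact"]
    print(f"{name}: expected annual loss = ${expected_loss:,.0f}")

# Output:
# Risk A: expected annual loss = $200,000
# Risk B: expected annual loss = $20,000,000  (100 times larger, yet same matrix cell)
```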
The definition of the matrix can also lead to completely distorted risk rankings. Consider:
- Risk A: Likelihood is 50%, impact is $9 million
- Risk B: Likelihood is 60%, impact is $2 million
Depending on how the levels are defined, Risk A could be classified as a “Medium” risk while Risk B is classified as a “High” risk. This is the exact opposite of the ranking by expected loss that an actuary would calculate.
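Working out the numbers: Risk A carries an expected annual loss of 0.50 × $9 million = $4.5 million, while Risk B carries 0.60 × $2 million = $1.2 million. The matrix ranks higher the risk whose expected loss is almost four times smaller.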
Risk Matrices are also silent about the relationships between events: they ignore vulnerabilities, threats, and the correlation of events.
As a result, Risk Matrices do not reduce uncertainty; in some cases their rankings are even worse than random!
Amplifying effects
Other effects amplify the ambiguity of Risk Matrices. The first one is clustering: research has shown that when evaluating risks in a 5x5 matrix, individuals tend to cluster their answers in the mid-range. What was initially designed as a 5x5 matrix therefore becomes, in practice, a 2x2 matrix!
A large study in the Oil & Gas industry showed that the definition of the matrix (the scales, their direction, etc.) has a large impact on the resulting risk rankings. The authors computed the “Lie Factor” (a coefficient introduced by E. R. Tufte to assess how much a given visual display distorts the underlying data) resulting from various Risk Matrix definitions. They found factors of up to 100, when a factor of 15 is already considered a gross lie!
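As a reminder, Tufte’s Lie Factor is the size of the effect shown in the graphic divided by the size of the effect in the data: a faithful display scores close to 1, so a factor of 100 means the matrix exaggerates (or shrinks) differences between risks by two orders of magnitude.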
Conclusion
There is ample solid research demonstrating that ordinal scales and Risk Matrices are ineffective and, worse, dangerous, as they add uncertainty instead of reducing it. Their widespread use in the Risk Management practices of many industries, and in many international standards alike, should therefore be urgently abandoned in favor of the quantitative, scientific methods found in Decision Science.
Sources:
How to Measure Anything in Cybersecurity Risk - Douglas W. Hubbard, Richard Seiersen
Improving Communication of Uncertainty in the Reports of the Intergovernmental Panel on Climate Change, Psychological Science 20, no. 3 (2009) - Budescu, Broomell, Por
Psychology of Intelligence Analysis - CIA - Richards J. Heuer
What’s Wrong with Risk Matrices?, Risk Analysis 28, no. 2 (2008) - Tony Cox
Problems with Scoring Methods and Ordinal Scales in Risk Assessment, IBM Journal of Research and Development 54, no. 3 (April 2010) - D. Hubbard and D. Evans
The Risk of Using Risk Matrices, Society of Petroleum Engineers Economics & Management 6, no. 2 (April 2014) - P. Thomas, R. Bratvold, J. E. Bickel
The Visual Display of Quantitative Information - E. R. Tufte, P. Graves-Morris