As we approach the 170th birthday of Thomas Edison this February, how can we not acknowledge him as the father of intelligent risk-taking? After all, Edison summed things up quite well after numerous failed attempts at the electric light bulb when he said: “I have not failed. I've just found 10,000 ways that won't work.”
But unfortunately the great majority of businesses today are built around a culture in which failure is not tolerated and is feared by employees. This severely limits the value that employees could deliver in new, breakthrough ideas. In an increasingly volatile, uncertain, complex, and ambiguous world, the need to assess and take intelligent risks could not be more pressing.
Assessing the probability of something happening is not easy. But a client of mine has a risk assessment process they are very proud of. It produces a set of colourful ‘heat maps’ that highlight which events are most likely to occur. It is all collated into a management dashboard so that senior leaders can see at a glance the business’s exposure to risk. And it has been validated by one of the U.K.’s leading auditors, so you can see why they are proud of it.
There is only one problem, and it’s quite a serious one: it’s no good. Well, to be fair, it might be good, but we cannot know, because its methodology is fundamentally flawed.
It has a common flaw in risk assessment that you might recognize. Key people in the business were asked to assess various risk events and rate them on a five-point scale, with one being not very likely to occur and five being almost certain to happen. A large group of people was surveyed, and the mean average of their responses was used to produce an aggregated view.
A mean average? Their assessments are ordinal data; respondents were simply ranking likelihood subjectively. Taking a mean of ordinal data is not statistically valid. But, it seems, no one who created the process (nor any of the consultants who validated it) took Statistics 101.
Imagine rating a film on a five-star scale. You give it four stars. I give it three, and our friend gives it two. The mean average is three. But what does it mean? I don’t know your criteria for a four-star movie, nor do you know mine for my three-star review, and neither of us knows our friend’s for a two-star movie. Her two stars could be the same as my three or your two. Confused? Me too, and so is any claim that our little system assessed films accurately. In crude terms, this is the same process that generates many of the heat maps that adorn PowerPoint presentations on risk.
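The statistical point can be sketched in a few lines. The ratings below are hypothetical survey responses, invented for illustration: two groups of raters produce the identical mean, yet one group is deeply divided while the other is unanimous. The mean erases exactly the information a risk assessor needs.

```python
from statistics import mean, median

# Two sets of hypothetical five-point ordinal ratings
ratings_a = [1, 1, 3, 5, 5]   # polarised: raters strongly disagree
ratings_b = [3, 3, 3, 3, 3]   # unanimous middle ranking

# The mean treats the rankings as if the gaps between points were
# equal and meaningful, and hides the disagreement entirely
print(mean(ratings_a), mean(ratings_b))   # both are 3

# Ordinal-appropriate summaries: the median plus the spread of
# responses reveal the difference the mean conceals
print(median(ratings_a), median(ratings_b))
print(ratings_a.count(1) + ratings_a.count(5), "extreme votes in group A")
```

The design point: for ranked data, report the median and the distribution of responses rather than a single averaged score.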
If the processes are a bit questionable, there is no question that our world is a volatile, uncertain, complex, and ambiguous place. Dealing with complexity is a daily challenge, and intelligent risk assessment tools rooted in robust methods can help us.
Take the rule of five, for example. Let’s imagine that you are interested in how long people take to commute to work. Ask five people at random and note the shortest and longest commutes they report. You now know, with 93.75 percent certainty, that the median commute time for the whole company falls within that range. This example, from Douglas Hubbard, shows that using statistics needn’t be difficult and can help us navigate a path through complex problems. Equally, we can reduce “noise” and assess risk better if we unpick some of the component parts of the complexity that we encounter.
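The 93.75 percent figure is simple arithmetic: the only way the population median escapes the range of five random samples is if all five fall below it or all five fall above it, and each of those outcomes has probability one in thirty-two. A minimal sketch, using an invented population of commute times to check the rule by simulation:

```python
import random
import statistics

# Rule of five: the population median lies outside the sample range
# only if all 5 samples land on the same side of it, so
# P(miss) = 2 * (1/2)**5 and P(hit) = 1 - 2 * (1/2)**5
exact = 1 - 2 * 0.5**5
print(exact)  # 0.9375

# Monte Carlo check against a hypothetical population of commute
# times in minutes (the lognormal shape is an assumption, chosen
# only because commutes are skewed; the rule holds for any shape)
random.seed(42)
population = [random.lognormvariate(3.3, 0.5) for _ in range(100_000)]
pop_median = statistics.median(population)

trials, hits = 20_000, 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= pop_median <= max(sample):
        hits += 1
print(hits / trials)  # close to 0.9375
```

Note that the guarantee is about the median, not about every individual: outliers can and will fall outside the range of five samples.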
We can start by identifying what kind of complexity we are addressing.
Problems of low complexity can be solved by piecemeal, backward-looking, and authoritarian means: deal with the problem logically. One thing at a time. What worked before will work now. “Listen to me. I know what I’m doing.” High-complexity problems, however, require systematic, emergent, and participatory approaches. The decision-maker needs to understand how the problem and its context shape each other; to be prepared, as issues reveal themselves, to alter course and act differently; to give ownership of the problem to as many relevant perspectives as possible; and to talk openly and listen reflectively. This isn’t just assessing and managing risk; it’s anticipating it.
It’s also good to know about your personal default setting when it comes to risk. We know those are influenced by a range of personality factors: Are we motivated mostly by achieving success or avoiding failure? How open are we to new experiences? How sensitive are we to how others see us? How tolerant are we of ambiguity?
We are also influenced by situational factors, such as familiarity: Have we done this before? How imaginable is failure? How much control do we believe we have over events? Intelligent risk-takers know themselves and their environment.
Even with that insight and some robust thinking, there is no guarantee they’ll make the right call every time. But the chances of success are far better. The fear of failure remains the single greatest roadblock to developing a dynamic workplace, one that unleashes the breakthrough thinking that gives you an edge over your competitors.
Had Thomas Edison not grasped intelligent risk-taking, we’d be sitting in the dark.
Whether you want to identify risk, appraise capabilities or evaluate a programme, the Kaplan Diagnostics range can address your business needs.
Dr Ian Stewart, Head of Leadership and Organisational Practice, has over 25 years’ experience of leadership development in the public and private sector. Prior to joining Kaplan, Ian ran the Behavioural Science department at the Royal Military Academy, Sandhurst.