Struggling with volatility, uncertainty, complexity and ambiguity (VUCA)

The future is hard to make sense of and likely to pose difficult challenges. It always has been and probably always will be. Throughout history, people have no doubt believed that the future they faced was the most challenging ever. This becomes a dangerous idea if it leads them to conclude that the future is so hard to predict that it cannot even be planned for: it is often the times of greatest unpredictability that demand the sharpest strategic response. So, how do we best plan for the future?

The VUCA framework suggests that the future is volatile, uncertain, complex and ambiguous. According to Michael Skapinker in the Financial Times, this is merely an empty buzzword and a vacuous concoction. According to Nathan Bennett in the Harvard Business Review (article and video), however, VUCA provides a practical, analytical framework for ‘identifying, getting ready for, and responding to’ four distinct types of future challenges.

For starters, let me be clear that I’m more with Nathan Bennett than Michael Skapinker, for two reasons: firstly, if we can categorise different types of future risk, we can make sense of them in different ways; secondly, we can ‘get ready for’ and ‘respond to’ them in different ways. This is far more powerful than bundling all future risks together and trying to deal with them identically.

The key to this, however, is ensuring that VUCA actually provides clean, crisp categories that can be used to distinguish robustly between a wide variety of future risks. And this is where I’m struggling.

Nathan Bennett defines volatility, uncertainty, complexity and ambiguity as follows in his Harvard Business Review article.

At first glance, this looks informative and intuitive. It has two axes (predictability and knowledge), each with a high and a low value, giving us a familiar two-by-two matrix. Unfortunately, we don’t have clear distinguishing criteria separating them all. Take the difference between volatility and complexity, for example. From their relative locations on the two axes, we would expect the difference between them to be that we have more knowledge about volatility and less about complexity: they don’t appear to differ in the extent to which we can predict the results of our actions. Volatility is characterised as having ‘knowledge … often available’. Complexity also has ‘information … available or … predictable’ – just in a ‘volume or nature’ that ‘can be overwhelming to process’. This is a weak and hard-to-apply distinction and, probably more importantly, it doesn’t capture the key difference between something that is complex and something that is volatile.

If something is volatile, we may know everything about it apart from when it will begin and how long it will last. Something complex, on the other hand, is certainly less well known, as the diagram suggests, but its complexity means it could have emergent properties: the same starting conditions could give rise to very different outcomes. This means that complexity should sit a lot lower than volatility on the ‘How well can you predict the outcome of your actions?’ axis.
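
To make that last point concrete, here is a minimal sketch (the code is my own illustration, not anything from Bennett’s article) using the logistic map, a standard toy example of how a fully known deterministic rule can still defy prediction. Two starting conditions that differ by one part in a billion soon produce completely different trajectories, which is exactly why complexity belongs lower on the predictability axis than volatility.

```python
# Minimal sketch: sensitivity to initial conditions in the logistic map.
# My own illustration, not part of the VUCA framework itself.

def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate x -> r * x * (1 - x) from starting value x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)   # one starting condition
b = logistic_trajectory(0.200000001)   # differs by one part in ~10^9

for step in (0, 10, 20, 30):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")

# After a few dozen steps the two trajectories bear no resemblance to
# each other, even though the rule is simple and fully known in advance.
```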

The same applies to ambiguity and uncertainty. Ambiguity is characterised by ‘causal relationships’ being ‘completely unclear’, whereas uncertainty means ‘the event’s basic cause and effect are known’. Surely this means that ambiguity should be a lot lower than uncertainty on the ‘How well can you predict the outcome of your actions?’ axis.

So, maybe volatility, uncertainty, complexity and ambiguity should be arranged in a less simple but more meaningful pattern.  Maybe something like this …

Even now, we still have issues of boundary definition. Is it, for example, possible to have something that is both uncertain and volatile? The image, as drawn above, suggests they are mutually exclusive.
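
One way to see the design problem: if the categories are positions in a matrix, each risk gets exactly one label. A sketch of the alternative (the Risk type and the example risk are my own hypothetical inventions) would treat the VUCA labels as a set of tags, so that combinations become representable:

```python
# Hypothetical sketch: VUCA labels as a set of tags rather than
# mutually exclusive quadrants. The Risk type and the example are
# my own invention, purely for illustration.

from dataclasses import dataclass, field
from enum import Enum, auto


class Vuca(Enum):
    VOLATILE = auto()
    UNCERTAIN = auto()
    COMPLEX = auto()
    AMBIGUOUS = auto()


@dataclass
class Risk:
    name: str
    tags: set[Vuca] = field(default_factory=set)


# A single risk can carry more than one label, which a matrix of
# mutually exclusive quadrants cannot express.
fuel_prices = Risk("fuel price shock", {Vuca.VOLATILE, Vuca.UNCERTAIN})

print(Vuca.VOLATILE in fuel_prices.tags)   # True
print(Vuca.UNCERTAIN in fuel_prices.tags)  # True
```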

Maybe we need a new approach altogether.  Here is one I’ve been thinking about.

In order to be able to make sense of and respond well to future challenges, we need the following sequence to happen:

  1. Clear signals from the environment
  2. Attentiveness to signals
  3. Challenge to be recognisable
  4. Challenge to be recognised
  5. Challenge to be manageable
  6. Challenge to be managed

This nicely separates organisational / systems issues from human / behavioural issues. From an organisational / systems point of view, we need clear signals from the environment; these signals need to make the challenge recognisable; and, once the challenge is recognised, the organisation needs to be able to make it manageable. In addition, human / behavioural responses are needed to attend to environmental signals, to actually recognise the challenge when the signals are present, and to take appropriate action to manage the challenge once it is recognised. Each of these then has distinct failure modes, as outlined below.
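
As a way of testing whether the sequence hangs together, here is a minimal sketch that walks a challenge through the six steps and names the first step that fails. The failure-mode names are my own assumptions for illustration, not taken from the outline; the even-numbered steps are the human / behavioural ones.

```python
# Hypothetical sketch of the six-step sequence. Each step either passes
# or fails; the failure-mode names below are my own assumptions.

SEQUENCE = [
    # (step,                           kind,                     failure mode)
    ("clear signals from environment", "organisational/systems", "no signal"),
    ("attentiveness to signals",       "human/behavioural",      "inattention"),
    ("challenge is recognisable",      "organisational/systems", "unrecognisable challenge"),
    ("challenge is recognised",        "human/behavioural",      "misrecognition"),
    ("challenge is manageable",        "organisational/systems", "unmanageable challenge"),
    ("challenge is managed",           "human/behavioural",      "inaction"),
]


def diagnose(outcomes: list[bool]) -> str:
    """Given pass/fail outcomes for the six steps in order, report the
    first failure mode, since each step depends on all earlier ones."""
    for (step, kind, failure), ok in zip(SEQUENCE, outcomes):
        if not ok:
            return f"failed at '{step}' ({kind}): {failure}"
    return "challenge managed"


# Example: signals were clear and attended to, and the challenge was
# recognisable, but nobody actually recognised it.
print(diagnose([True, True, True, False, True, True]))
# -> failed at 'challenge is recognised' (human/behavioural): misrecognition
```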

Clearly, this is still a work in progress and one I will be returning to at some point. To be continued …
