Positional scarcity

Great post from Alex Danco on Positional Scarcity, which I summarise as follows.

In conditions of abundance, relative position matters a great deal. Success in conditions of abundance means standing out from the crowd – finding those scarce positions that enable you to be noticed, valued and chosen over and above your competitors.

There are three ways of achieving positional scarcity: access, prestige and curation.

Access means you have a better route to your customer’s attention and ultimately to their wallet than your competitors. Abundance inevitably causes congestion and access is the form of positional scarcity that enables you to navigate this congestion to your competitive advantage.

Prestige means you stand out from the abundance because of your perceived quality – you appear to be better than your competitors.  This could be because of genuine product quality or branding or both.

Curation means that you are selected and featured by others, giving you the positional scarcity you wouldn’t have had without this curation effect.

Interesting stuff! My gut feel suggests positional scarcity may come from more than these three sources, but that’s a contemplation for another day.

And just a final thought: In 1675 Isaac Newton wrote, in a letter to Robert Hooke “if I have seen further, it is by standing on the shoulders of giants”.  The positional scarcity version of this is “if I reach further, it is by standing on the shoulders of giants”.

Great strategists: Sun Tzu

Sun Tzu was a Chinese general, military strategist, writer and minister to one of the ancient kings of China in the 5th century BC. He is credited as the author of the Art of War, probably the oldest and best-known book on military strategy, although the actual authorship of that book is the subject of considerable historical debate. He has been described as a master of “soft power” and the father of “agile warfare”.

James Clear summarises the Art of War in just three sentences:

  1. Know when to fight and when not to fight: avoid what is strong and strike at what is weak.
  2. Know how to deceive the enemy: appear weak when you are strong, and strong when you are weak.
  3. Know your strengths and weaknesses: if you know the enemy and know yourself, you need not fear the result of a hundred battles.

Great Strategists: Carl von Clausewitz

Carl von Clausewitz was a Prussian General during the Napoleonic Wars, whose book On War is acclaimed, along with Sun Tzu’s Art of War, as one of the original, defining texts on military strategy.

William Pietersen from Columbia Business School has written an excellent short (~1500 words) summary of von Clausewitz’s relevance to contemporary strategists.  Here are the five key points he highlights:

  1. Why do we need strategy?  According to von Clausewitz “The talent of the strategist is to identify the decisive point and to concentrate everything on it, removing forces from secondary fronts and ignoring lesser objectives”.  In other words strategy is the necessary response to the inescapable reality of limited resources.
  2. The strength of any strategy lies in its simplicity. Quoting von Clausewitz, “Simplicity in planning fosters energy in execution. Strong determination in carrying through a simple idea is the surest route to success. The winning simplicity we seek, the simplicity of genius, is the result of intense mental engagement.” Pietersen suggests from this that “No strategy document should ever be longer than 10 pages.”
  3. Strategy needs to be dynamic. One of von Clausewitz’s most famous quotes is “no strategy ever survives the first engagement with the enemy.”  He also said “strategy [must] contain the seeds of its constant rejuvenation — a way to chart strategy in an unstable environment.”  Pietersen adds that strategy should never make the mistake of thinking that competitors are standing still.
  4. Strategy is all about adoption. According to von Clausewitz, “it is at moral, not physical strength that all military action is directed … Moral factors, then, are the ultimate determinants in war”. Pietersen adds a quote from Henri Amiel: “Without passion man is a latent force, like the flint, which awaits the shock of the iron before it can give forth its spark.”
  5. Strategy and planning are not the same thing. Strategy is about picking the right battles. Tactics are about successfully executing those battles.  Strategy’s key role is to define a winning proposition, a rallying call from which all decisions and activities can then be planned. Strategy first, planning afterwards.

Dimensionless KPIs for strategy measurement

We are all used to KPIs that are measured in real things, such as clicks, orders or £s.  Such KPIs are specific, tangible and usually highly measurable.  They are great for measuring the fine-grain detail of business performance but often not so great for measuring strategic change.

The broad sweep of strategy often involves change to organisational culture, customer centricity, innovation or evidence-based decision-making – things that don’t have handy SI units for measuring them.

Dimensionless quantities are used widely in maths, physics, chemistry and economics and are also known as bare, pure or scalar quantities.  Well known examples include π (Pi – the ratio of a circle’s circumference to its diameter) and e (Euler’s number – the base of the natural logarithm) as well as ratios, proportions and angles.

Dimensionless KPIs are typically proportions indicating how close a set of goals is to its targets. Let’s consider a hypothetical example of innovation. In order to be more innovative, we need to develop ways to:

  1. capture more innovative ideas,
  2. screen those ideas and select the most viable,
  3. rapidly prototype these viable ideas,
  4. test those prototyped ideas,
  5. evaluate the tests and
  6. invest in the commercial development of those that tested well.

Each of these 6 goals can have targets (e.g. a target number of innovative ideas per month, a minimum proportion of captured ideas that are deemed to be viable ideas etc). At any point in time, progress towards these targets can be measured. Given a period of time over which performance is being measured (e.g. a 3 month or 12 month period), the level of progress achieved can be translated into ‘proportional progress’ – what proportion of the change expected over the specified time period has been achieved so far? And if we have proportional progress for each of the 6 goals leading to greater innovation, we can aggregate them to give a single measure of progress towards innovation.
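The aggregation described above can be sketched in a few lines of code. This is a hypothetical illustration, not a prescribed method: the goal names, the figures and the simple unweighted mean are all assumptions.

```python
# Hypothetical sketch of a dimensionless innovation KPI.
# Each goal carries (baseline, current, target) for the measurement period;
# 'proportional progress' is the fraction of the planned change achieved so far.

def proportional_progress(baseline, current, target):
    """Fraction of the planned change achieved so far, clamped to [0, 1]."""
    planned_change = target - baseline
    if planned_change == 0:
        return 1.0  # no change planned, so the goal is trivially met
    achieved = (current - baseline) / planned_change
    return max(0.0, min(1.0, achieved))

# Illustrative goals and figures only.
goals = {
    "ideas captured per month":     (10, 18, 30),
    "viable ideas (proportion)":    (0.05, 0.08, 0.15),
    "prototypes built per quarter": (1, 2, 4),
}

per_goal = {name: proportional_progress(*vals) for name, vals in goals.items()}

# Aggregate into a single dimensionless measure – here a simple unweighted
# mean, though weights could reflect the relative importance of each goal.
innovation_kpi = sum(per_goal.values()) / len(per_goal)
```

Because each per-goal figure is a pure proportion, goals measured in entirely different units (ideas per month, proportions, prototypes per quarter) can be aggregated without any unit conversion.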

Making sense of the Business Model Canvas

The Business Model Canvas, devised by Alexander Osterwalder and Yves Pigneur, is a huge success, with over 5 million downloads of the canvas graphic and over a million book sales in 36 languages. Part of its success is its simplicity and the way it makes intuitive sense.

What is less widely commented upon is the rigour of the thinking behind it and the way the canvas is constructed.   Alexander Osterwalder referred to the canvas as having three zones: desirability, feasibility and viability.

I’ve never been too keen on his labels but I think the three zones are genius. Here’s my take on them. The business model canvas actually comprises models of three distinct aspects of business performance: marketing, operations and finance.

The market model shows how a value proposition matches the needs of customers, how customers get to know about the value on offer (via customer relationships) and how that value is offered and delivered (via channels). The operating model shows which activities are key to meeting the needs of the market model and how these activities come about, through key partnerships and use of key resources. The financial model shows how delivering value to customers in this way is financially sustainable, given the costs and revenues involved.

This rigour and logic is probably what makes the business model canvas so intuitive and so powerful. Indeed, it is probably this power that has led to it having been adapted and modified so often to suit different needs and situations. Some of these adaptations fit well with the design and logic of the original version, some less so.  Here are three examples.

Firstly, Steve Blank worked with Alexander Osterwalder to develop a variant of the business model canvas for military / intelligence applications, where the objective was not to generate revenue but to complete a mission. This was achieved by simply re-labelling some of the original components (shown in red below).

Secondly, the CASE Knowledge Alliance developed the Sustainable Business Model Canvas to integrate sustainability into the core of a business and optimise its environmental and social impact.

This is achieved by expanding the bottom of the business model canvas to evaluate environmental and social costs and value alongside economic costs and revenue.

The third example of an adapted business model canvas is one that I struggle with. It is a canvas for designing and evaluating charities, something that could do great good for the world if charities ended up being run more efficiently and effectively. My struggle is that whilst its graphical presentation clearly resembles the original business model canvas, its design and content are so radically different that it loses most of the value coming from the business model canvas principles.

Here are my main issues with it:

  1. It is a lot more complicated – this has 18 elements compared to only 9 in the original business model canvas;
  2. The notion of value proposition, that is so central to the original, is missing.  What value is the charity providing?
  3. It features ‘key cause differentiators’, which form part of value proposition thinking – how do I make sure my particular value proposition remains distinctive? This element sits on the left of the canvas, an area that, in the original canvas, was focused on the operating model. Cause differentiators are, however, clearly part of the marketing model, so that clustering is lost in the charity model canvas.
  4. ‘Customers’ in the original business model canvas are replaced with ‘audience’. That’s fine because a key issue for many charities is that the audience for their messaging is different from their beneficiaries. The charity model canvas, however, makes no mention of beneficiaries – surely something that must be at the heart of any charity model.
  5. In the original business model canvas, costs were on the left of the model and revenue was on the right. This is logical because costs relate more to operations (also on the left of the canvas) and revenue more to the market (also on the right of the canvas). In the charity model canvas, however, the positions of costs and revenue have been reversed, for no apparent reason.
  6. And finally, income streams are broken down in the charity model canvas into values for year 1, year 2 and year 3. This appears to be out of place in a business model canvas. The purpose of producing a canvas is to ensure the key elements of a business (or charity) are aligned and coherent. If they are, the business (or charity) would appear (on paper at least) to look viable and the organisation can then move on to more detailed business planning, such as revenue growth, cash flow and investment requirements, all of which are much better modelled in a spreadsheet than squeezed into a charity model canvas.

My point here is not to ‘have a go’ at Manifesto, the team who produced the charity model canvas. It is to make a much more general point that frameworks such as the business model canvas have a lot of knowledge or wisdom ‘baked in’ and it is often not at all clear why elements of the framework have been designed the way they have. More relevantly to this particular discussion, it is also not clear which bits of business model canvas are safe to alter and customise to meet particular needs and which ought to be left as they are. The business model canvas was published in the book Business Model Generation in 2010, yet, as far as I can see, it wasn’t until 2017 that Alexander Osterwalder pointed out that the elements of the canvas clustered into three zones (desirability, feasibility and viability), as mentioned above.

Business model canvas is a wonderful tool but it needs to be used wisely and modified cautiously.

Strategy mapping avoids the peril of evidence overload

Evidence overload? Can too much evidence be a bad thing?

Well, thanks to David Perell, whose newsletter linked to a post by Matt Mullenweg featuring a long quote from Adam Robinson (published in Tim Ferriss’s book Tribe of Mentors) about an experiment undertaken by Paul Slovic in the 1970s, it turns out that too much evidence can indeed be a bad thing. Don’t you just love how these connections pop into your life?

Here’s the experiment.

Eight professional horse handicappers were recruited to predict which horses would win races when different amounts of information on the horses were available to them: 5, 10, 20 or 40 facts on each horse from past performance charts.  All of the races had 10 horses competing, so the probability of picking a winner by random selection was 0.1. The results showed that the professional horse handicappers picked winners with a probability of 0.17, a 70% increase over random selection.  This didn’t change with more or less information available on past horse performance.  What did change, however, was the confidence with which they judged their predictions, as shown in Paul Slovic’s original graph (p24 of this pdf), copied below.

With more evidence, people get over-confident about the accuracy of their predictions!

What can we do about it? One possible answer is to diminish the ‘illusion of explanatory depth’. Rozenblit and Keil first demonstrated this illusion in a series of experiments. Subjects were asked to rate, on a 7-point scale, how well they understood everyday objects, such as a zipper, a flush toilet or a sewing machine. The average level of understanding was a score of 4. When asked to explain the object and how it worked, their self-assessment fell to just above 3, and when given a diagnostic question about the device, it fell again to below 3. Here is the data from p9 of this pdf.

The illusion of explanatory depth is that we think we know more than we do. Since Rozenblit and Keil’s original work, this has been found to be both a powerful and pervasive influence on the way we think. Sloman and Fernbach (in their book The Knowledge Illusion) took understanding of this illusion a layer deeper. They presented hot political topics of the day (e.g. the US imposing unilateral sanctions against Iran) and again asked people to judge how well they understood that topic. They were then asked the following: “Please describe all the details you know about the impact of instituting unilateral sanctions against Iran, going from the first step to the last step and providing causal connections between the steps”. Again, they showed the illusion of explanatory depth: they thought they knew more about the issue until asked to explain it, at which point they realised their understanding was less deep than they’d previously thought. Sloman and Fernbach then asked another question: how strongly were they for or against the issue? The more people realised they didn’t really understand the issue, the more their strength of opinion on the subject moderated. This effect applied equally to both sides of the political spectrum. People with strongly held views both in favour of sanctions and against sanctions felt less strong in their opinions once their illusion of explanatory depth had been revealed to them.

And how do we help people get a better understanding of the mechanics underpinning a device, a political issue or even a strategic issue in a business setting? With strategy mapping, of course!   It will help them avoid over-confidence in their decisions / judgements and will help them moderate extreme positions for which they don’t have the necessary justification.

Struggling with volatility, uncertainty, complexity and ambiguity (VUCA)

The future is hard to make sense of and likely to pose difficult challenges. It always has and probably always will. Throughout history, people probably always thought that the future they faced was the most challenging ever. This can be a dangerous idea if it makes them conclude that the future is so hard to predict that it cannot even be planned for. It is often times of greatest unpredictability that need the sharpest strategic response.  So, how do we best plan for the future?

The VUCA framework suggests that the future is volatile, uncertain, complex and ambiguous. This, according to Michael Skapinker in the Financial Times, is merely an empty buzzword and a vacuous concoction. According to Nathan Bennett in the Harvard Business Review (article and video), however, VUCA provides a practical, analytical framework for ‘identifying, getting ready for, and responding to’ four distinct types of future challenges.

For starters, let me be clear that I’m more with Nathan Bennett than Michael Skapinker, for two reasons: firstly, if we can categorise different types of future risks, we can make sense of them in different ways and secondly, we can ‘get ready for’ and ‘respond’ to them in different ways.  This is really powerful, compared to bundling all future risks together and trying to deal with them in identical ways.

The key to this, however, is ensuring VUCA does actually provide clean, crisp categories, that can be used to distinguish robustly between a wide variety of future risks.  And this is where I’m struggling.

Nathan Bennett defines volatility, uncertainty, complexity and ambiguity as follows in his Harvard Business Review Article.

At first glance, this looks informative and intuitive. It has two axes (predictability and knowledge), each with a high and low value, giving us a familiar two-by-two matrix. Unfortunately, we don’t have clear distinguishing criteria separating them all. Take the difference between volatility and complexity, for example. From their relative locations on the two axes, we expect the difference between the two to be that we have more knowledge about volatility and less about complexity: they don’t appear to differ in the extent to which we can predict the results of our actions. Volatility is characterised as having ‘knowledge … often available’. Complexity also has ‘information … available or … predictable’ – just in a ‘volume or nature’ that ‘can be overwhelming to process’. This is a weak and hard-to-apply distinction and, probably more importantly, it doesn’t capture the key difference between something that is complex and something that is volatile. If something is volatile, we may know everything about it apart from when it will begin and how long it will last. Something complex, on the other hand, is certainly less well known, as the diagram suggests, but its complexity means it could have emergent properties: the same starting conditions could give rise to very different outcomes. This means that complexity should be a lot lower than volatility on the ‘How well can you predict the outcome of your actions?’ axis.

The same applies to ambiguity and uncertainty. Ambiguity is characterised by ‘causal relationships’ being ‘completely unclear’, whereas uncertainty means ‘the event’s basic cause and effect are known’. Surely this means that ambiguity should be a lot lower than uncertainty on the ‘How well can you predict the outcome of your actions?’ axis.

So, maybe volatility, uncertainty, complexity and ambiguity should be arranged in a less simple but more meaningful pattern.  Maybe something like this …

Even now, we still have issues of boundary definition. Is it, for example, possible for something to be both uncertain and volatile? The image, as drawn above, suggests they are mutually exclusive.

Maybe we need a new approach altogether.  Here is one I’ve been thinking about.

In order to be able to make sense of and respond well to future challenges, we need the following sequence to happen:

  1. Clear signals from the environment
  2. Attentiveness to signals
  3. Challenge to be recognisable
  4. Challenge to be recognised
  5. Challenge to be manageable
  6. Challenge to be managed

This nicely separates organisational / systems issues from human / behavioural issues. From an organisational / systems point of view, we need clear signals from the environment, these signals need to make the challenge recognisable and, once recognised, the organisation needs to be able to make that challenge manageable. In addition, human / behavioural responses are needed to attend to environmental signals, to actually recognise the challenge when signals are presented and to take appropriate action to manage the challenge, once recognised. Each of these then has distinct failure modes, as outlined below.
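As a thought experiment, the six-step sequence can be modelled as a chain of checks, where the first step to fail identifies the failure mode and whether it is organisational or behavioural. A minimal sketch, in which the step outcomes are purely illustrative:

```python
# The six steps of the sense-and-respond sequence, each tagged with the type
# of issue it represents. Names are taken from the list above.
STEPS = [
    ("clear signals from the environment", "organisational / systems"),
    ("attentiveness to signals",           "human / behavioural"),
    ("challenge is recognisable",          "organisational / systems"),
    ("challenge is recognised",            "human / behavioural"),
    ("challenge is manageable",            "organisational / systems"),
    ("challenge is managed",               "human / behavioural"),
]

def first_failure(outcomes):
    """Return (step, issue type) for the first failed step, or None if all pass."""
    for (step, issue_type), ok in zip(STEPS, outcomes):
        if not ok:
            return step, issue_type
    return None

# Example: signals were clear and attended to, and the challenge was
# recognisable, but it went unrecognised – a human / behavioural failure.
print(first_failure([True, True, True, False, True, True]))
```

The point of the sketch is simply that the sequence is ordered: a failure early in the chain masks everything downstream, so diagnosing the *first* broken step tells you where to intervene.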

Clearly, this is still a work-in-progress and one I will be returning to at some point in time.  To be continued …

Strategy mapping for strategy adoption: contract boundaries

One of the key ways strategy mapping can help with strategy adoption across the organisation is by providing a rich and practical definition of the contract boundary between leadership (owners of the strategy) and front-line teams (the strategic change-makers). Contract boundaries are an important concept in software engineering: they define the way in which software modules can interact in compliant ways.
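By way of analogy, here is a minimal sketch of what a contract boundary looks like in software, using Python’s structural typing. All names here are hypothetical illustrations of the leadership / front-line split, not part of any strategy-mapping tool:

```python
# A 'contract boundary' sketch: each side commits only to the interface,
# never to the other side's internals.
from typing import Protocol


class StrategyContract(Protocol):
    """What leadership supplies and what front-line teams feed back."""

    def strategic_goal(self) -> str: ...            # leadership's commitment
    def report_progress(self, metric: float) -> None: ...  # teams' commitment


class QuarterlyStrategy:
    """One concrete implementation honouring the contract."""

    def __init__(self, goal: str):
        self._goal = goal
        self.reports: list[float] = []

    def strategic_goal(self) -> str:
        return self._goal

    def report_progress(self, metric: float) -> None:
        self.reports.append(metric)


def frontline_team(strategy: StrategyContract) -> None:
    """A team interacts only through the contract, never its internals."""
    _ = strategy.strategic_goal()   # consume the direction set by leadership
    strategy.report_progress(0.25)  # feed progress back across the boundary
```

The value of the boundary is that either side can change its internals freely, so long as the contract itself is honoured – which is exactly the kind of stable-but-flexible relationship between leadership and teams that strategy mapping aims to define.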

Reconciling Ethical Dilemmas

We face a great many ethical dilemmas in both our personal and professional lives yet I have struggled to find a practical step-by-step framework for reconciling them.  This is my first tentative step towards exploring such a framework.

The trolley problem

The trolley problem (pdf) is a well known and extensively studied problem in both philosophy and psychology. We, therefore, have considerable insights into its ethical foundations.

A runaway railway trolley is accelerating down a track towards five workers who will almost certainly be killed unless the trolley is stopped. You, the observer of this runaway trolley, have the option to stop the trolley and save the five lives, but only by sacrificing someone else’s life. Do you save five lives at the expense of one? The answer, it turns out, depends on the type of action you need to take. If you have to pull a lever to divert the trolley into a side-track, killing the one person working there, most people say they would be prepared to do so. If, however, you had to throw a person from a bridge onto the track to stop the trolley, most people say they would not be prepared to do so. Research has revealed that the key difference is how directly your action is connected to the harm that results from it. Pulling the lever to redirect the trolley has the side-effect of killing the single worker. Throwing a person off a bridge, on the other hand, connects your action to the harm it causes very directly. It is difficult to argue you didn’t mean to kill the person you just pushed off a bridge. And acting-with-the-intention-of-killing is one of those things we are just not meant to do.

The male chick problem

Male chicks are an unfortunate by-product of the egg-industry. In order to replenish our global stock of egg-laying hens, we need to rear around 7 billion female chicks every year. This means 7 billion male chicks are also produced but cannot be used for egg-laying. Since these male chicks are from specialised egg-laying breeds they cannot simply be reared for meat and are typically killed soon after they have hatched and been sexed.

A start-up company from Jerusalem called eggXYt (pronounced “exit”) has invented a solution to the male chick problem. By gene-editing the hen’s sex chromosomes, male eggs can be made to glow under fluorescent light and can then be destroyed much earlier and (arguably) more humanely.

Here, we have two ethical issues in opposition to each other. Do we undertake genetic modification to reduce an animal welfare problem?  Answers are likely to vary depending on the sensitivities and priorities of the individual giving the answer.

The brother and sister problem

Here is a story people were asked to respond to in a psychology experiment (pdf)

‘Julie and Mark are brother and sister. They are travelling together in France on summer vacation from college. One night they are staying alone in a cabin near the beach. They decided that it would be interesting and fun if they tried making love. At the very least it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom too, just to be safe. They both enjoy making love but decide not to do it again. They keep that night as a special secret between them, which makes them feel even closer to each other. What do you think about that, was it OK for them to make love?’

Most people were clear and firm in their judgement that this is wrong: siblings shouldn’t make love. But when asked why, they struggled. The risk of inbreeding was mitigated by contraception and the risk of psychological harm was mitigated by their decisions and actions after the event. So they ended up with a strongly held ethical judgement for which they could find no rational explanation. Clearly, this was a contrived story designed to test ethical judgements under very particular circumstances. This experiment, and many others like it, have, however, revealed two more insights into ethical dilemmas. Firstly, ethical judgements we feel strongly about need not necessarily have rational explanations. Secondly, even when we intuitively feel we have rational explanations for our morality, these explanations may not be the cause of our morals; they might simply be our retrospective rationalisation of an intuition that arose independently.

The drugs for art problem

In the world of museums, there are few philanthropists that have made more high-profile donations than the Sackler Family. Sackler is a name appended to museums the world over, including the V&A, the Serpentine, the Tate Modern and the Royal Academy in London, the Guggenheim, the American Museum of Natural History and the Metropolitan Museum of Art in New York, the Smithsonian in Washington DC, the Jewish Museum in Berlin, the Louvre in Paris and the Sackler Museum in Beijing. Yet over the course of 2019, the Sacklers have turned from heroes to villains. Three of the biggest museums in New York and two of the biggest in London have announced they will no longer accept donations from the Sacklers. The Louvre has begun a process of eradicating their name completely. The reason? The $13 billion Sackler fortune was made in the pharmaceutical industry, a substantial proportion of it from sales of OxyContin. This is the drug most associated with the opioid epidemic in the US that claims more lives per year than car crashes or gun crime.

Principles for reconciling ethical dilemmas

  1. Let’s begin with a general principle of utilitarianism. The principles of utilitarianism suggest we ought to do the greatest good to the most people whilst doing the least harm.
  2. Next, we need to recognise that utilitarianism will sometimes be constrained. The trolley problem shows that categorical judgements can sometimes interfere with simple utilitarian thinking: killing one person by pushing them off a bridge to save five others is deemed unacceptable by most people.
  3. We also need to take account of possible conflicts between opposing ethical judgements. The male chicks problem asks whether genetic modification should be undertaken for the sake of animal welfare.
  4. Then we need to acknowledge that the constraints on utilitarianism are not necessarily rational. The brother and sister problem shows one type of categorical judgement that is intuitive and not rationally justifiable.
  5. Finally, we need to be aware that the ‘court of public opinion’ may swing decisively towards one side of two opposing ethical judgements. The drugs for art problem shows that prevailing public opinion sees the opioid epidemic as so harmful and so closely associated with the Sackler family that it makes their philanthropy unacceptable.

Tom Arnold on Roadmaps as Strategic Tools

Two posts from Tom Arnold …

The first suggests that roadmaps are better than plans, especially for agile ways of working.

  • Something for everyone – it is an artefact that traditionalists recognise enough as a ‘plan’ and agilists recognise as ‘not a gantt chart’.
  • Focuses on outcomes not deliverables – it promotes the right sort of strategic conversations within teams and with stakeholders.
  • Provides stability but evolves – it sets a clear, stable sense of direction but I find teams and stakeholders feel more comfortable discussing change resulting from iterative delivery and learning.
  • Promotes buy-in – good people will feel out of control and disengaged when deliverables are dictated up-front and/or from above. Self-organising, multi-disciplinary teams love to own and be empowered to meet outcomes in creative ways.
  • More coherent – it allows you to knit all aspects of your programme together (e.g. software, infrastructure, ops, policy, security, estates, human resources, procurement) without freaking out any horses or doing up-front requirements specs or giving false certainty.
  • Better performance metrics – outcome-based planning allows programmes to more easily measure the evolution of a service through early-stage delivery into full-blown operation and iteration. Some metrics will be a constant throughout, others will only have relevance in later stages. This approach keeps you iterative and chasing incremental improvement. It also makes for joyful dashboards.
  • Better governance – roadmaps work well with time-boxed or target-based governance gates – you choose.

His second post goes on to suggest seven questions to ask in building a roadmap:

  1. What are we trying to learn or prove?
  2. Who are the users?
  3. What are we operating?
  4. What are we saying?
  5. What are our assumptions?
  6. What are our dependencies?
  7. What capabilities do we need?