Frank Ashe
Sydney, Australia

CHALLENGE - Don't say "proof" - you have correlation

The Harvard Business Review had a recent article, authored by McKinsey consultants, with the following headline: “Finally, Proof That Managing for the Long Term Pays Off”. I hope it was the sub-editors who put the headline in and not the authors, because it’s wrong. It’s the sort of claim that gives business academics a bad name.

Don’t get me wrong here; I’m all in favour of taking a long-term view for business decisions, investing, risk management and general life decisions. The benefit of having an ability to take a long-term view is why we evolved the big brain we have. All that added grey matter is there for the long-term thinking. But thinking like that is hard, and we easily switch it off.

Why do people throw around the word "proof" when no proof exists? Okay, that was a rhetorical question, of course. We all know the answer, and it isn’t pretty. Journalists have known the need for snappy headlines, what we now call clickbait, ever since newspapers began. And some things look like plausible proof when they’re not.

This wouldn’t be a problem except that we know from our own experience, backed up by many academic studies, that people are affected by headlines, and that people look for things that confirm their already-held views.

But if you want to make good business and investment decisions, then it’s best to have unbiased, objective views of what’s going on in the world.

To quote from the article:

Companies deliver superior results when executives manage for long-term value creation and resist pressure from analysts and investors to focus excessively on meeting Wall Street’s quarterly earnings expectations. This has long seemed intuitively true to us. We’ve seen companies such as Unilever, AT&T, and Amazon succeed by sticking resolutely to a long-term view. And yet we have not had the comprehensive data needed to quantify the payoff from managing for the long term — until now.

What are they doing in the study? They take 600+ US companies with 15 years of history, 2000-2014, and break them into two groups – those that they think have a long-term focus, and the others. They then look at various metrics of success over the same 15-year period. So far, so good.

What’s wrong?

Firstly, we're only looking at correlation, not causation. Maybe companies with faster-growing revenue and earnings (one of their success metrics) simply do more capex (one of their long-term-focus metrics)?
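To see why this matters, here's a toy simulation (invented numbers, nothing to do with the MGI data): a hidden common factor, say industry growth, drives both capex and revenue growth, so the two correlate strongly even though neither causes the other.

```python
import random

random.seed(0)

# Toy model: a hidden "industry growth" factor drives BOTH capex (a
# long-term-focus metric) and revenue growth (a success metric).
# Neither causes the other, yet they correlate.
n = 1000
growth = [random.gauss(0, 1) for _ in range(n)]
capex = [g + random.gauss(0, 1) for g in growth]
revenue_growth = [g + random.gauss(0, 1) for g in growth]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Theoretical correlation here is 0.5 - entirely from the common factor.
print(round(corr(capex, revenue_growth), 2))
```

Correlation near one half, with zero causal link between the two measured variables.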

By the way, it was the HBR that threw around the term "proof", even though the authors of the article also produced the original report, where MGI correctly say that "it does not enable us to assert causality".

They also say it's not an econometric analysis. Sorry, but if you're throwing around charts and numbers like they do, and drawing conclusions from them, then it is an econometric analysis – just a badly written one, unless it carries a lot more explicit caveats.

Secondly, no comment was made on the uncertainty surrounding these results. How confident are we that there is a real difference between the two groups of companies?

A simple t-test shouldn't be beyond somebody at McKinsey! There is a difference in absolute returns over the period, but what's the degree of uncertainty? Averages can easily be distorted by extreme values; what's the breakdown of returns within the two classes of companies? These are all trivially easy questions to answer with the data they have, and any paper which purports to look at these questions should give the basic stats.
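As a sketch of how cheap the check is, here is Welch's two-sample t statistic computed on made-up return figures (the numbers below are invented for illustration, not the MGI data):

```python
from statistics import mean, stdev

# Invented annualised returns (%) for the two groups - illustrative only.
long_term = [9.1, 7.4, 12.0, 8.3, 10.5, 6.8, 11.2, 9.9]
other = [7.0, 8.1, 5.5, 9.2, 6.4, 7.7, 8.8, 5.9]

def welch_t(a, b):
    """Welch's t: difference in means scaled by its standard error,
    without assuming equal variances in the two groups."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

print(round(welch_t(long_term, other), 2))  # roughly 2.6 on these invented numbers
```

With real data one would also report the p-value and the spread (or median) of returns in each group, since a couple of extreme performers can drag an average a long way.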

For those who say that this was an article for quick consumption, and so should be dumbed down: the original report, available to the public, has a page on methodology but no quantitative description of the data.

Also, there’s “dumbing down” and then there’s misleading. The article conveys much more confidence in the results than the underlying analysis warrants. The innocent reader is going to come away with the idea that the relationship has been “proved” – not so.

Thirdly (a bit nerdish), they include companies as long-term focussed that "clearly" moved to being long-term only in the second half of the sample. This is a big no-no in any statistical analysis. Maybe strong performance on the success metrics in the first half of the period is what enabled the companies to adopt a long-term focus (as defined by the authors) in the second half.

This makes the "long-term" group markedly heterogeneous, and, without further analysis, any correlations observed are suspect.
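A small fabricated example of why that heterogeneity matters: pool two subgroups in which the focus-return relationship is negative within each, and the pooled correlation can come out strongly positive (Simpson's paradox). Every number below is invented purely for illustration.

```python
# Within each subgroup the relationship between a "focus score" and
# return is negative; pooling the subgroups flips the sign.
always_lt = [(1, 5.0), (2, 4.5), (3, 4.0)]  # (focus score, return %)
switched = [(6, 9.0), (7, 8.5), (8, 8.0)]   # reclassified mid-sample
pooled = always_lt + switched

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(round(corr(always_lt), 2), round(corr(switched), 2))  # -1.0 -1.0
print(round(corr(pooled), 2))  # 0.87
```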

How could we simply improve this?

One simple test they could do with their data (right now!) is to look at the subsequent performance of companies judged to be long-term in the first half of the sample. What are the chances of changing focus, i.e. how many are long-term focussed in the first half and stay long-term, and so on? What are their relative returns? This is proper robustness checking!
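That check is easy to sketch. All the data below is hypothetical; the point is the shape of the analysis, not the values:

```python
from collections import Counter

# (long-term in 1st half?, long-term in 2nd half?, 2nd-half return %)
# - invented classifications and returns for illustration only.
companies = [
    (True, True, 9.0),
    (True, False, 6.5),
    (True, True, 8.2),
    (False, False, 7.1),
    (False, True, 7.8),
    (False, False, 5.9),
]

# How sticky is the classification? Count the transitions.
transitions = Counter((first, second) for first, second, _ in companies)
print(dict(transitions))

# Subsequent returns conditional on first-half classification alone -
# a comparison with an out-of-sample flavour.
lt_first = [r for first, _, r in companies if first]
rest = [r for first, _, r in companies if not first]
print(round(sum(lt_first) / len(lt_first), 2), round(sum(rest) / len(rest), 2))
```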

Do this and then it could be a good analysis.

(PS They can give me the data and I’ll do the analysis for free.)

About the author

Frank Ashe, Sydney, Australia

Independent Consultant

Behavioural risk management