The Drunkard’s Walk: How Randomness Rules Our Lives

By Leonard Mlodinow

A book review by Duncan Brett

This is a book that works on two levels. The first is an entertaining, anecdote-laden tour of the history of probability theory. The second is a reminder of just how much success and failure come down to random chance. The author tells us we must keep trying, since the number of times we try is the one thing we can control.

The most universal characteristic of successful people is that they never give up.

The book is written with the statistical layman in mind, but you probably still need an interest in the workings of statistics to appreciate it. If you work in statistics, or any quantitative field for that matter, you should enjoy learning about the origins of your everyday statistical tools.

Mlodinow starts by outlining the principles of randomness and how chance events can separate the winners from the also-rans, despite the winner’s penchant for taking the credit. It can be very difficult to tell what part came from skill and what from luck.

He proceeds to outline the basic laws of probability, with their Roman origins. Since the Greeks believed in the will of the gods and in absolute proof, they never developed a theory of probability. It fell to the more practical Romans to espouse it. Cicero wrote that probability is the guide of life, and it was incorporated into the laws of evidence. Its application was somewhat problematic, though. For the man in the street, two half proofs constituted a whole proof and a conviction. Things are a little fuzzy on what three half proofs mean. Priests had an out, thanks to the requirement of 72 witnesses before they could be convicted of a crime.

Mind you, the misapplication of probability and statistics continues to have legal ramifications, as people are still being convicted on badly calculated probabilities.

The early drivers of probability theory were gamblers. The church was fixated on the will of God and saw no need for thinking about random occurrences and the impact of chance. Gerolamo Cardano paid for his 16th-century medical education by gambling, and in the process introduced the concept of a sample space. Galileo wrote a treatise on dice games for the Grand Duke of Tuscany, who wanted to know why, when you throw 3 dice, a total of 10 comes up more often than 9. In fact it comes up 8% more often, because there are 27 ways to roll a 10 and only 25 ways to roll a 9.
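
If you want to check that arithmetic, a few lines of Python (my addition, not the book’s) enumerate all 216 ways three dice can land:

```python
from itertools import product

# Enumerate all 6^3 = 216 equally likely outcomes of three dice.
rolls = list(product(range(1, 7), repeat=3))
ways_10 = sum(1 for r in rolls if sum(r) == 10)
ways_9 = sum(1 for r in rolls if sum(r) == 9)

print(ways_10, ways_9)       # 27 and 25
print(ways_10 / ways_9 - 1)  # 0.08, i.e. a total of 10 comes up about 8% more often
```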

In a classic tale Mlodinow illustrates how widespread the lack of understanding of probability is, even today. Marilyn vos Savant is one of the cleverest people ever to have lived, with an IQ of 228. She posed a question based on the TV show “Let’s Make a Deal”, which worked as follows. A contestant is asked to choose one of three doors. Behind one door is a Maserati; behind each of the other two is a goat, or possibly the complete works of Shakespeare in Siberian. After the contestant picks a door, the host opens one of the other doors to reveal a Siberian goat reading Hamlet.

The host then offers the contestant the chance to switch doors or stick with their original choice. What should the contestant do?

Assuming they want the Maserati, Marilyn said they should switch. Boy, did that cause a firestorm. Over 10,000 people, including some 1,000 PhDs, wrote in to tell her she was an idiot. Except she wasn’t the idiot… here’s why.

The role of the host matters, and that’s what they all forgot: the host knows what is behind each door and will always open one hiding a goat. There are 3 doors; 1 has the car and 2 do not, so you have a 1/3 chance of being right and a 2/3 chance of being wrong. When the host takes away one door, there is still a 1/3 chance that your original choice was the ‘correct’ guess and a 2/3 chance that it was a ‘wrong’ guess. But the host has removed a door from the ‘wrong’ guess scenario, leaving the one remaining unchosen door with a 2/3 chance of housing the car, against your original choice’s 1/3.
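
If the argument still feels slippery, a quick simulation settles it. This sketch is mine rather than Mlodinow’s, and it assumes the standard rules: the host knows where the car is and always opens a goat door that you didn’t pick.

```python
import random

def play(switch: bool) -> bool:
    """Return True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~0.667 when switching
print(sum(play(False) for _ in range(trials)) / trials)  # ~0.333 when sticking
```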

Blaise Pascal solved the gambler de Méré’s problem of how to split the pot if a game of chance ended prematurely. He developed Pascal’s triangle, which helped calculate the number of ways outcomes can occur. It is still widely used in game theory to show the chance of a weaker team winning. It is a bit surprising to note that even if a stronger team beats a weaker team 2 out of 3 times, the weaker team will still win about 20% of 7-game series. Which just goes to show that those sporting upsets are really not such a surprise.
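
The series arithmetic is easy to check. Assuming the stronger team wins each game independently with probability 2/3, a sketch of mine (not the book’s) gives the weaker team’s chance of taking at least four games out of seven:

```python
from math import comb

p_weak = 1 / 3  # weaker team's chance of winning any single game
p_series = sum(comb(7, k) * p_weak**k * (1 - p_weak)**(7 - k) for k in range(4, 8))
print(p_series)  # ~0.173: roughly one series in five or six, in the ballpark of the 20% above
```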

We are also given a more prosaic market research example. Suppose you want to quantify the results from a 6-person focus group as to whether they are for or against your new idea. With 2 options (for/against) and 6 people, there are 64 possible patterns of answers. Let’s pretend that the “real answer” is 3 for and 3 against. There are 20 patterns where this can occur and 44 where it can’t, which means there is roughly a 2/3 chance the result will be “wrong” and mislead you. That’s not to say the answers are random, just that you shouldn’t assume the result is significant.
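
The focus-group numbers come straight from the binomial coefficient; here is a quick check of mine, not from the book:

```python
from math import comb

total = 2 ** 6            # 64 possible patterns of for/against across 6 people
exact_split = comb(6, 3)  # 20 patterns that show the "true" 3-3 split
print(exact_split, total - exact_split)  # 20 and 44
print((total - exact_split) / total)     # ~0.69: roughly a 2/3 chance of a misleading split
```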

Not everyone understands the concept of a sample space, and if you don’t you can get caught out. Like the Australian lottery that offered $27 million in prize money for the ticket that had 6 correct numbers out of a possible 44. There are only about 7 million combinations and tickets cost $1. All you need to do is buy every combination. A syndicate figured this out… and won.
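
The number of tickets involved is just a combinations count; one line of Python (my check, not the book’s) confirms it:

```python
from math import comb

# Choosing 6 numbers out of 44: buy every ticket for about $7 million, collect $27 million.
print(comb(44, 6))  # 7,059,052
```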

Pascal later became deeply religious and made one last contribution, known as Pascal’s wager. There either is or isn’t a God, and you either should or shouldn’t live a pious life. The infinite gain from a life of piety if there is a God outweighs the small loss of living piously only to find no deity is around.

Mlodinow then takes us through the history of sampling, from the American philosopher Charles Sanders Peirce, who found that a random sample drawn over and over indefinitely would draw the same set of instances as often as any other, to Jakob Bernoulli, whose golden theorem showed how confident you could be that a sample reflected the underlying population.

Thomas Bayes contributed the theory of conditional probability, which extended probability theory to events that are connected. The simplest way to think of this is to write down all the possibilities and then cross out the ones the condition eliminates, pruning the sample space as you go.

Don’t forget about false positives though. As Mlodinow notes, it is easy to design a test that picks up 100% of drug use; the true test of accuracy is how often it identifies drug use falsely. Because of this, the chance that a sportsman who fails a doping test is actually guilty is not as high as one might think. The probability that A will occur if B has occurred is very different to the probability that B will occur if A has occurred.
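
Bayes’ theorem makes the doping point concrete. The numbers below are purely illustrative assumptions of mine, not the book’s: suppose 1% of athletes dope, the test catches every user, and it falsely flags 1% of clean athletes.

```python
prevalence = 0.01   # assumed fraction of athletes who actually dope
sensitivity = 1.00  # assumed: the test catches 100% of users
false_pos = 0.01    # assumed: 1% of clean athletes test positive anyway

# P(doper | positive test) via Bayes' theorem
p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
p_guilty_given_positive = sensitivity * prevalence / p_positive
print(p_guilty_given_positive)  # ~0.50: a positive test here is only a coin flip's worth of evidence
```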

Mlodinow then outlines the law of errors. He reminds us that a rating is not a description of quality, but rather a measurement of it. This subtle distinction is important because measurement always carries uncertainty and this is rarely discussed when quoting measurements. Remember this when quoting rating scales in your next presentation.

You really do need to understand the nature of variation caused by random error. The distribution of error is often expressed as a bell curve, also known as a normal or Gaussian distribution. Its shape depends on 2 parameters: the mean, which locates the peak, and the standard deviation, which sets the spread. With a normal distribution, about 68% of data points fall within 1 standard deviation of the mean, 95% within 2, and 99.7% within 3.
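
Those 68/95/99.7 figures can be recovered from the error function in Python’s standard library; a quick check of mine, not the book’s:

```python
from math import erf, sqrt

# P(|X - mean| <= k standard deviations) for a normal distribution is erf(k / sqrt(2)).
for k in (1, 2, 3):
    print(k, erf(k / sqrt(2)))  # ~0.683, ~0.954, ~0.997
```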

Although Mlodinow doesn’t cover it, one needs to remember that you can’t follow it blindly. Not everything fits into the Gaussian world, and some issues are susceptible to what Nassim Taleb calls Black Swans, which can mess with your curve. Take a financial crisis no one saw coming. Or a tsunami. Stick to something like the distribution of heights in your neighbourhood, though, and you should be on safe ground.

In market research we use the term margin of error to describe the uncertainty that random error may produce. So when a researcher tells you the survey’s margin of error is plus or minus 5% with 95% confidence, they mean that 19 times out of 20 the ‘correct’ answer will be within 5% of the result. Any variation within that margin should be ignored. You also take your chances with the remaining 1 time in 20.
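
For what it’s worth, the standard back-of-envelope formula behind that plus or minus 5% (not spelled out in the book) is margin of error = z * sqrt(p(1-p)/n). Here is a hypothetical sample-size check of mine:

```python
from math import sqrt, ceil

z = 1.96          # z-score for 95% confidence
p = 0.5           # worst-case proportion, which maximises the margin of error
target_moe = 0.05

n = ceil(z**2 * p * (1 - p) / target_moe**2)
print(n)                          # 385 respondents needed
print(z * sqrt(p * (1 - p) / n))  # ~0.0499, i.e. just under plus or minus 5%
```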

And if you’ve ever wondered what the central limit theorem has to do with this, it tells us that the probability that the sum of a large number of independent random variables will take on any given value is distributed according to the normal distribution.
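
A quick simulation (mine, not the book’s) shows the theorem at work: sums of something decidedly non-normal, individual die rolls, pile up into a bell curve.

```python
import random
from statistics import mean, stdev

random.seed(1)
# Each data point is the sum of 50 die rolls; the sums are approximately normal.
sums = [sum(random.randint(1, 6) for _ in range(50)) for _ in range(20_000)]

m, s = mean(sums), stdev(sums)
within_1sd = sum(1 for x in sums if abs(x - m) <= s) / len(sums)
print(round(within_1sd, 3))  # ~0.68, just as the normal distribution predicts
```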

Before you form the opinion that randomness renders measuring things pointless, we are shown how, although an individual data point can be random, the aggregate is often highly predictable.

John Graunt and William Petty realised that inferences could be made from a limited sample about a whole population. This insight led to a chain of clever people developing the modern field of statistics, including Charles Darwin’s cousin, Francis Galton. He introduced statistical thinking to biology and, in doing so, launched the field of eugenics. He contributed two mathematical concepts central to statistics: regression to the mean (where things tend to revert to the average) and the coefficient of correlation (which measures the relationship between two variables).

Karl Pearson, a disciple of Galton, contributed the chi-square test, still widely used in market research to tell us whether an uneven distribution of results is due to preference or pure chance.
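
For readers who haven’t met it, here is a toy chi-square test on made-up numbers (both the data and the code are mine, not the book’s): 90 respondents choose between three pack designs, and we ask whether the uneven split could plausibly be pure chance.

```python
# Hypothetical data: 90 people choose between three pack designs.
observed = [42, 30, 18]
expected = [30, 30, 30]  # what pure chance (no preference) would predict on average

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_sq)  # 9.6, above the 5.99 critical value for 2 degrees of freedom at the 5% level,
               # so the uneven split looks like genuine preference rather than chance
```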

There were many others, but it took Einstein to really shake things up.

According to Mlodinow, his 1905 paper on statistical physics is his most cited work in the scientific literature. Einstein applied a statistical approach to physics to describe Brownian motion and the idea that matter is made of atoms and molecules. The idea that molecules fly about and change direction at random points is known as the drunkard’s walk. Teetotalling statisticians call it the random walk. And so this book got its title.
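
The drunkard’s walk itself is easy to simulate. A classic result, not spelled out in the review above, is that after n random steps the typical distance from the starting point grows like the square root of n; a short sketch of mine:

```python
import random
from math import sqrt

def walk(steps: int) -> int:
    """Final position after `steps` random +1/-1 moves along a line."""
    return sum(random.choice((-1, 1)) for _ in range(steps))

n, trials = 2_500, 2_000
rms = sqrt(sum(walk(n) ** 2 for _ in range(trials)) / trials)
print(rms, sqrt(n))  # both around 50: typical displacement scales like sqrt(steps)
```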

This leads us to a big issue in looking at data, and a common problem in research reports. Even if something is random, patterns still do appear. Avoiding the illusion of meaning is difficult. As humans we tend to make judgments on incomplete information and then declare the picture clear.

The mathematician George Spencer-Brown pointed out that there is a difference between a process being random and the product of that process appearing to be random. So in a sufficiently long random sequence of coin tosses (far, far longer than a few dozen billion) you would expect stretches of up to a million heads in a row.
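
You can watch this happen on a smaller scale. For a fair coin, the longest run of heads you should expect in n tosses grows roughly like log2(n); a simulation of mine (not Spencer-Brown’s) makes the point:

```python
import random
from math import log2

def longest_head_run(tosses: int) -> int:
    """Longest streak of heads in a sequence of fair coin tosses."""
    longest = current = 0
    for _ in range(tosses):
        current = current + 1 if random.random() < 0.5 else 0
        longest = max(longest, current)
    return longest

n = 1_000_000
print(longest_head_run(n), log2(n))  # typically around 20 heads in a row, versus log2(n) ~ 19.9
```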

Yet we love to analyse data on sports, markets, and business performance, and make judgments on very little data. Most of it is just random. The financial services industry would have a problem if we realized this.

Take the stock market. Leonard Koppett predicted the direction of the market correctly 19 years out of 20. Great analysis? Not really: he based the call on who won the Super Bowl. What about Bill Miller? He made a fortune beating the S&P 500 14 years in a row. Everyone thought he was great.

CNN put the odds on doing that at 372,529 to 1. Mlodinow shows that we are looking at this from the wrong angle. The odds of Bill Miller in particular doing it are low, but the odds of someone beating the market 14 years running over a 40-year period, given the thousands of fund managers trying, are actually around 75%. Bill just happened to be the lucky guy.
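
Mlodinow’s reframing is easy to reproduce in outline. Treat each manager’s year as a coin flip and ask how likely it is that someone, somewhere, strings together a 14-year streak within 40 years. The 1,000-manager figure below is an assumption of mine for illustration; the exact answer moves with it, but it lands far above 1 in 372,529 either way.

```python
from functools import lru_cache

def p_streak(years: int, streak: int) -> float:
    """Probability that a 50/50 coin shows `streak` consecutive wins within `years` flips."""
    @lru_cache(maxsize=None)
    def f(remaining: int, run: int) -> float:
        if run >= streak:
            return 1.0
        if remaining == 0:
            return 0.0
        # Win this year with probability 1/2 (run grows), otherwise the run resets.
        return 0.5 * f(remaining - 1, run + 1) + 0.5 * f(remaining - 1, 0)
    return f(years, 0)

single = p_streak(40, 14)
managers = 1_000  # assumed number of fund managers trying; the book's own assumptions differ
print(single)                        # ~0.00085: tiny odds for any one named manager
print(1 - (1 - single) ** managers)  # better than even odds that *somebody* posts the streak
```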

This sort of issue led to R.A. Fisher developing a technique called statistical significance in the 1920s. It is a formal procedure for calculating the probability of our having observed what we observed if the hypothesis we are testing is true. Now isn’t that a mouthful that quantitative researchers love to throw about.

The thing is, once we see a pattern we don’t let go. Instead of searching for ways to prove our ideas wrong, we usually attempt to prove them correct. Psychologists call this confirmation bias, and it is a major impediment to our ability to break free from the misinterpretation of randomness. It also explains why research reports go down like a lead balloon if they don’t confirm the client’s view.

Even if data is significant at the 95% level, there is still a 5% chance it has led us astray, and in real life we actually make decisions at much lower levels of significance. It is human nature to look for patterns and assign them meaning when we find them. Kahneman and Tversky called the shortcuts we employ in assessing patterns in data and making judgments heuristics. Generally heuristics are useful, but like optical illusions, they can lead to systematic errors, which Kahneman and Tversky called biases.

In his final chapter Mlodinow steps away from the statistical side of things and provides a commentary on how randomness affects us. How much you believe in randomness has some rather big implications for market researchers and their research philosophy. Laplace expressed the idea of determinism, which holds that the state of the present determines precisely the manner in which the future will unfold. Believing success can be achieved by anticipating consumer preferences is a deterministic view of the marketplace: the intrinsic qualities of the person or product determine success.

There is another view: that success (or any outcome) is the product of a lot of factors, some of them random, such as luck, and some of them minor but amplified by what meteorologist Edward Lorenz called the butterfly effect. The evidence tends to support the latter view. Which makes one wonder about the clever pundits…

We place too much confidence in the overly precise predictions of people who claim a track record. Mlodinow notes that the path of a molecule is virtually impossible to predict before the fact, even though it is relatively easy to understand afterward. This is true of everyday life. We love a good story that fudges the random motion and makes sense of the path.

The chapter leaves those who have not quite reached the pinnacle of success with the hope that their chance may be just around the corner if they keep trying. And the world is full of examples to show that this is true. I wish you the best of luck.

 
