Saturday, December 17, 2016

The Ideas of Nassim Nicholas Taleb

If you prefer to listen rather than read:



Or download the MP3



At the moment I find myself in the middle of two books by Steven Pinker. The first, Better Angels of Our Nature, has been mentioned a couple of times in this space and I thought it a good idea to read it if I was going to continue referencing it. I’m nearly done, and I expect that next week I’ll post a summary/review/critique. The second Pinker book I’m reading is his book on writing, The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century. When the time comes you’ll see that my review of Better Angels of Our Nature is full of criticisms, but a criticism of Pinker’s writing will not be among them. The Sense of Style is his book of writing advice, and I am thoroughly enjoying it. It contains a wealth of quality advice on non-fiction writing.


One piece of advice in particular jumped out at me. Pinker cautions writers to avoid the curse of knowledge. This particular example of bad writing happens because authors are generally so immersed in the topics they write about that they assume everyone must be familiar with the same ideas, terms and abbreviations they are. You see this often in academia and among professionals like doctors and attorneys. They spend so much of their time talking about a common set of ideas and situations that they develop a professional jargon, within which acronyms and specialized terms proliferate, leading to what could almost be classified as a different language, or at a minimum a dialect that is very difficult to understand. This may be okay, if not ideal, when academics are talking to other academics and doctors are talking to other doctors, but it becomes problematic when you make any attempt to share those ideas with a broader audience.


Pinker illustrates the problems with jargon using the following example:


The slow and integrative nature of conscious perception is confirmed behaviorally by observations such as the “rabbit illusion” and its variants, where the way in which a stimulus is ultimately perceived is influenced by poststimulus events arising several hundreds of milliseconds after the original stimulus.
Pinker points out that the entire paragraph is hard to understand and full of jargon, but that the key problem is the author’s assumption that everyone automatically knows what the “rabbit illusion” is. Perhaps within the author’s narrow field of expertise it is common knowledge, but that field is almost certainly a very tiny community, a community to which most of his readers do not belong. Pinker himself did not belong to it, despite the fact that the quote was taken from a paper written by two neuroscientists and Pinker specializes in cognitive neuroscience as a professor at Harvard.


As an aside for those who are curious, the rabbit illusion refers to the effect produced when you have someone close their eyes and then tap their wrist a few times, followed by their elbow and their shoulder. They will feel a series of taps running up the length of their arm, similar to a rabbit hopping. And the point of the quoted paragraph is that the body interprets a tap on the wrist differently if it’s followed by taps farther up the arm than if it’s not.


This extended preface is all an effort to say that in past posts I may have fallen prey to the curse of knowledge. I may have let my own knowledge (meager and misguided though it may be) blind me to things that are not widely known to the public at large and which I tossed out without sufficient explanation. I feel like I have been particularly guilty of this when it comes to the ideas of Nassim Nicholas Taleb, thus this post will be an attempt to rectify that oversight. It is hoped that this, along with a general resolve to do better about avoiding the curse of knowledge in the future, will exculpate me from future guilt. (Though apparently not from the desire to use words like “exculpate”.)


In undertaking a survey of Taleb’s thinking in the space of a few thousand words, I may have bitten off more than I can chew, but I’m optimistic that I can at least give you the 10,000-foot view of his ideas.


Conceptually, Taleb’s thinking all begins with the idea of understanding randomness. His first book was titled Fooled by Randomness, because frequently what we assume is a trend, or a cause-and-effect relationship, is actually just random noise. Perhaps the best example of this is the narrative fallacy, which Taleb explains as follows:


The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.


Upon initially hearing that explanation you may be reminded of the earlier jargon-heavy paragraph about the “rabbit illusion”. I think Taleb’s writing is easier to understand, but the paragraph is a little dense, so I’ll try to unpack it. But first, what’s interesting is that there is a connection between the “rabbit illusion” and the narrative fallacy. As I mentioned, the “rabbit illusion” arises because the body connects taps on the wrist, elbow and shoulder into a narrative of movement, in this case a rabbit hopping up the arm. In the same way, the narrative fallacy comes into play when we try to collect isolated events into a single story that explains everything, even if those isolated events are completely random. This is what Taleb is saying: it’s almost impossible for us not to pull events and facts together into a single story that explains everything. But in doing so we may think we understand something when really we don’t.


To illustrate the point I’ll borrow an example from Better Angels, since I just read it. The famous biologist Stephen Jay Gould was touring the Waitomo glowworm caves in New Zealand, and when he looked up he realized that the glowworms made the ceiling look like the night sky, except there were no constellations. Gould realized that this was because the patterns required for constellations only happen in a random distribution (which is how the stars are distributed), but the glowworms actually weren’t randomly distributed. For reasons of biology (glowworms eat other glowworms) each worm keeps a minimum distance from its neighbors. This leads to a distribution that looks random but actually isn’t. And yet, counterintuitively, we’re able to find patterns in the randomness of the stars, but not in the less random spacing of the glowworms.
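
To see why, it helps to make the contrast concrete. The following is a minimal Python sketch of my own (not from Taleb or Pinker); the 150 points and the 0.05 minimum spacing are arbitrary assumptions. It scatters one set of points completely at random and a second set that must keep its distance, then checks how close the closest pair gets in each.

    import math
    import random

    def nearest_neighbor_distances(points):
        # For each point, the distance to its closest neighbor.
        return [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
                for i, p in enumerate(points)]

    random.seed(0)

    # "Stars": 150 points scattered uniformly at random. Clumps and near-pairs
    # appear purely by chance, which is the raw material for "constellations".
    stars = [(random.random(), random.random()) for _ in range(150)]

    # "Glowworms": a candidate point is only kept if it stays a minimum distance
    # from every point already placed, mimicking the worms' territorial spacing.
    glowworms = []
    while len(glowworms) < 150:
        candidate = (random.random(), random.random())
        if all(math.dist(candidate, q) > 0.05 for q in glowworms):
            glowworms.append(candidate)

    for name, pts in [("stars", stars), ("glowworms", glowworms)]:
        closest = min(nearest_neighbor_distances(pts))
        print(f"{name}: closest pair is {closest:.3f} apart")

In the random set the closest pair is typically far closer than 0.05 apart, chance clusters that our eyes happily turn into constellations, while the spaced set never dips below that floor. The “more random” looking sky is the one with the patterns.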


It’s important to understand this way in which our mind builds stories out of unconnected events, because it leads us to assume underlying causes and trends when there aren’t any. The explanations going around about the election are great examples of this. If 140,000 people had voted differently (125k in Florida and 15k in Michigan) then the current narrative would be completely different. This is, after all, the same country that elected Obama twice, and by much bigger margins. Did the country really change that much, or did the narrative change in an attempt to match the events of the election? Events which probably had a fair degree of randomness. Every person needs to answer that question for themselves, but I, for one, am confident that the country hasn’t actually moved that much, but that how we explain the country and its citizens has moved by a lot.


This is why understanding the narrative fallacy is so important. Without that understanding it’s easy to get caught up in the story you’ve constructed and believe that you understand something about the world, or even worse, that based on that understanding you can predict the future. As a final example, I offer up the 2003 invasion of Iraq, which resulted in the deaths of at least 100,000 people (and probably a lot more). And all because of the narrative: Islamic bad guys caused 9/11, Saddam is only vaguely Islamic, but definitely a bad guy. Get him! (This is by no means the worst example of deaths caused by the narrative fallacy, see my discussion of the Great Leap Forward.)


Does all of this mean that the world is simply random and any attempts to understand it are futile? No, but it does mean that it’s more important to understand what can happen than to attempt to predict what will happen. And this takes us to the next concept I want to discuss, the difference between the worlds of Mediocristan and Extremistan.


Let’s start with Mediocristan. Mediocristan is the world of natural processes. It includes things like height and weight, intelligence, how much someone can lift, how fast they can run, etc. If you’ve ever seen the graph of a bell curve, this is a good description of what to expect in Mediocristan. You should expect most things to cluster around the middle, or the top of the bell curve, and expect very few things to be on the tail ends of the bell curve. In particular you don’t expect to see anything way off to the right or left of the curve. To put it in numbers, for anything in Mediocristan 68% will be within one standard deviation of the average, 95% will be within two standard deviations and 99.7% will be within three standard deviations. For a concrete example of this let’s look at the height of US males.


68% of males will be between 5’6” and 6’ tall (I’m rounding a little). 95% of males will be between 5’3” and 6’3”, and only one in 1.7 million males will be over 7’ or under 4’7”. Some of you may be nodding your heads and some of you may be bored, but it’s important that you understand how the world of Mediocristan works. The first key point is that the average and the median are very similar. That is, if you took a classroom full of students and lined them up by height, the person standing in the middle of the line would be very close to the average height. The second key point is that there are no extremes; there are no men who are 10 feet tall or 16 inches tall. This is Mediocristan. And recall that I said it’s more important to understand what can happen than to attempt to predict what will happen: in Mediocristan lots of extreme events simply cannot happen. You’ll never see a 50 foot tall woman, and the vast majority of men you meet will be between 5’3” and 6’3”.
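
To make those figures concrete, here is a minimal Python sketch of the same arithmetic. The mean of 5’9” (69 inches) and the standard deviation of 3 inches are my own assumptions, chosen only because they reproduce the rounded ranges above; they are not official statistics.

    from statistics import NormalDist

    # Toy model of US male height in inches: mean 5'9", standard deviation 3".
    # These parameters are assumptions picked to match the rounded figures above.
    height = NormalDist(mu=69, sigma=3)

    def share_between(low, high):
        # Fraction of the population falling between two heights (in inches).
        return height.cdf(high) - height.cdf(low)

    print(f"5'6\" to 6'0\" : {share_between(66, 72):.1%}")   # about 68%, one standard deviation
    print(f"5'3\" to 6'3\" : {share_between(63, 75):.1%}")   # about 95%, two standard deviations
    print(f"over 7'0\"    : {1 - height.cdf(84):.2e}")       # vanishingly rare in Mediocristan

The exact numbers matter less than the shape: nearly everyone sits near the middle, and by the time you are five standard deviations out you are already in one-in-millions territory.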


If the whole world were Mediocristan, then things would be fairly straightforward, but there is another world in which we live. It takes up the same space and involves the same people as the first world, but the rules are vastly different. This is Extremistan. And Extremistan is primarily the world of man-made systems. A good example is wealth. The average person is 5’4” tall, and the tallest person ever was 8’11”. But the average person in the world has a net worth of $26,202, while the richest person in the world (currently Bill Gates) has a net worth of $75 billion, which is 2.8 million times the worth of the average person. Imagine that the tallest person in the world was actually 2,800 miles tall, and you get a sense of the difference between Mediocristan and Extremistan.
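
The height analogy is just that wealth ratio carried over, and the arithmetic is easy to check. A quick sketch using the post’s figures (which were approximate even in 2016):

    average_net_worth = 26_202          # dollars, the figure used above for the average person
    richest_net_worth = 75_000_000_000  # dollars, the figure used above for the richest person

    ratio = richest_net_worth / average_net_worth
    print(f"wealth ratio: {ratio:,.0f}")   # roughly 2.9 million to one

    # Apply the same ratio to height: if the average person is 5'4" (64 inches),
    # an equally extreme height outlier would be...
    average_height_inches = 64
    outlier_miles = average_height_inches * ratio / (12 * 5280)
    print(f"equivalent height: about {outlier_miles:,.0f} miles tall")

That lands within rounding distance of the 2,800-mile figure above. The precise number matters far less than the fact that no natural, Mediocristan process produces ratios anywhere near that.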


The immediate consequence of this disparity is that the exact opposite rules apply in Extremistan to those that apply in Mediocristan. The average and the median are not the same. And some examples will be very much on the extreme. In particular you start to understand that in a world with these sorts of extremes in what can happen, it becomes very difficult to predict what will happen.


Additionally, Extremistan is the world of black swans, which is the next concept I want to cover and the title of Taleb’s second book. Once again this is a term you might be familiar with, but it’s important to understand that black swans are a key component of understanding what can happen in Extremistan.


In short, a black swan is something that:


  1. Lies outside the realm of regular expectations
  2. Has an extreme impact
  3. People go to great lengths afterward to show how it should have been expected.


You’ll notice that two of those points are about the prediction of black swans. The first point is that they can’t be predicted, and the third is that people will retroactively attempt to show that it should have been possible to predict them. One of the key points I try to make in this blog is that you can’t predict the future. This is terrifying for people, and that’s why point 3 is so interesting. Everyone wants to think that they could have predicted the black swan, and that having seen it once they won’t miss it again, but in fact that’s not true; they will still end up being surprised the next time around.


But if we live in Extremistan, which is full of unpredictable black swans, what do we do? Knowing what the world is capable of is one thing, but unless we can take some steps to mitigate these black swans, what’s the point?


And here we arrive at the last idea I want to cover and the underlying idea behind Taleb’s most recent book, Antifragile. As I mentioned, the concept of antifragility is important enough that you should probably just read the book; in fact you should probably read all of Taleb’s books. But for the moment we’ll assume that you haven’t. (And if you have, why have you even gotten this far?)


Antifragility is how you deal with black swans and how you live in Extremistan. It’s also your lifestyle if you’re not fooled by randomness. This is why Taleb considered Antifragile his magnum opus: it pulls in all of the ideas from his previous books and puts them into a single framework. That’s great, you may be saying, but you’re still unclear on what antifragility is.


At its core antifragility is straightforward. To be antifragile is to get stronger in response to stress. (Up to a point.) The problem is that when people hear that idea it sounds magical, if not impossible. They imagine cars that get stronger the more accidents they’re in, or software that becomes more secure when someone attempts to hack it, or a government that gets more stable with every attempt to overthrow it. While none of this is impossible, I agree that when stated this way the idea of antifragility seems a little bit magical.


If instead you explain antifragility in terms of muscles, which get stronger the more you stress them, then people find it easier to understand, but at the same time they will have a hard time expanding it beyond natural systems. Having established that Extremistan and black swans are mostly present in artificial systems, antifragility is not going to be any good if you can’t extend it into that domain. In other words, if you explain antifragility to people in isolation their general response will be to call it a nice idea, but they may have difficulty understanding its real-world utility, and it’s possible that my previous discussions on the topic have left you in just this situation. Which is why I felt compelled to write this post.


Hopefully, by covering Taleb’s ideas in something of a chronological order, the idea of antifragility will be easier to understand. It comes by flipping much of conventional wisdom on its head. Rather than being fooled by randomness, if you’re antifragile you expect randomness. Rather than being surprised by black swans, you prepare for them, knowing that there are both positive and negative black swans. Armed with this knowledge you lessen your exposure to negative black swans while increasing your exposure to positive black swans. All of this allows you to live comfortably in Extremistan.


If this starts to look like we’ve wandered into the realm of magical thinking again, I don’t blame you, but at its essence being antifragile is straightforward: for our purposes antifragility is about making sure you have unlimited upside and limited downside. Does this mean that something which is fragile has limited upside and unlimited downside? Pretty much, and you may wonder, if we’re talking about man-made systems, why anyone would make something fragile. This is an excellent question. And the answer is that it all depends on the order in which things happen. In artificial systems fragility is marked by the practice of taking short-term, limited profits while accepting the chance of catastrophic losses. On the opposite side, antifragility is marked by incurring short-term, limited costs while keeping the chance of stratospheric profits. Fragility assumes the world is not random, assumes there are no black swans, and ekes out small profits in the space between extreme events. (If this sounds like the banking system then you’re starting to get the idea.) Antifragility assumes the world is random and that black swans are on the horizon, and pays small manageable costs to protect itself from those black swans (or gain access to them if they’re positive).
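
One way to see the asymmetry is to simulate the two payoff profiles over a run of years. This is a minimal sketch of my own, not anything from Taleb: the yearly amounts, the 2% chance of a black swan in any given year, and the size of the shock are all invented purely to show the shape of the two strategies.

    import random

    def lifetime_payoff(years, swan_probability, steady, shock, seed):
        # Accumulate a yearly payoff: a small steady amount in normal years,
        # plus a large shock in the rare years when a black swan actually hits.
        random.seed(seed)
        total = 0
        for _ in range(years):
            total += steady
            if random.random() < swan_probability:
                total += shock
        return total

    # Fragile: pocket a small profit every year, but a single rare event
    # wipes out far more than was ever earned.
    # Antifragile: pay a small cost every year, but a single rare event
    # pays off enormously.
    for seed in range(3):
        fragile = lifetime_payoff(50, 0.02, steady=+1, shock=-100, seed=seed)
        antifragile = lifetime_payoff(50, 0.02, steady=-1, shock=+100, seed=seed)
        print(f"run {seed}: fragile = {fragile:+d}, antifragile = {antifragile:+d}")

In runs where no black swan happens the fragile strategy looks smarter (+50 versus -50), which is exactly why people adopt it; the single bad run is what the small yearly cost was quietly buying protection against. Reusing the same seed for both strategies simply means the same swan years hit each one, so the comparison is apples to apples.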


In case it’s still unclear, here are some examples:


Insurance: If you’re fragile, you save the money you would have spent on insurance every month, a small limited profit, but risk the enormous cost of a black swan in the form of a car crash or a home fire. If you’re antifragile you pay the cost of insurance every month, a small limited cost, but avoid the enormous expense of the negative black swan, should it ever happen.


Investing: If you put away a small amount of money every month you gain access to a system with potential black swans. Trading a small, limited cost for the potential of a big payout. If you don’t invest, you get that money, a small limited profit, but miss out on any big payouts.


Government Debt: By running a deficit governments get the limited advantage of being able to spend more than they take in. But in doing so they create a potentially huge black swan, should an extreme event happen.


Religion: By following religious commandments you have to put up with the cost of not enjoying alcohol, or fornication, or Sunday morning, but in return you avoid the negative black swans of alcoholism, unwanted pregnancies, and not having a community of friends when times get tough. If you don’t follow the commandments you get your Sunday mornings, and I hear whiskey is pretty cool, but you open yourself up to all of the negative black swans mentioned above. And of course I haven’t even brought in the idea of an eternal reward (see Pascal’s Wager).


Hopefully we’ve finally reached the point where you can see why Taleb’s ideas are so integral to the concept of this blog.


The modern world is top-heavy with fragility, and the story of progress is the story of taking small limited profits while ignoring potential catastrophes. In contrast, antifragility requires sacrifice, it requires cost, it requires dedication and effort. And, as I have said again and again in this space, I fear that all of those are currently in short supply.



By donating you get a chance to be antifragile: a small fixed cost, which allows you to reap massive rewards, avoid a large catastrophe, or possibly neither, I'm not actually sure.

2 comments:

  1. So what do you do personally to make yourself/your family anti-fragile? That would be an interesting post.

    1. Much of what the church already recommends makes you anti-fragile. Avoiding debt, 72 hour kits, food storage, avoiding extramarital sex, abstaining from drugs. Beyond that anything that resembles insurance is a good idea: disability insurance, life insurance, etc.

      From an investment standpoint Taleb recommends 80-90% of your money should be in the safest thing you can find, and 10-20% should be in the riskiest (but potentially highest-return) stuff you can find.

      But I'll see about doing a full post on it.
