
Saturday, January 7, 2017

Predictions (Spoiler: No AI or Immortality)

If you prefer to listen rather than read, you can download the MP3.



Many people use the occasion of the New Year to make predictions about the coming year. And frankly, while these sorts of predictions are amusing, and maybe even interesting, they’re not particularly useful. To begin with, historically one of the biggest problems has been that there’s no accountability after the fact. If we’re going to pay attention to someone’s predictions for 2017, it would be helpful to know how well they did in predicting 2016. In fairness, this has recently started to change, driven to a significant degree by the work of Philip Tetlock. Perhaps you’ve heard of Tetlock’s book Superforecasting (another book I intend to read, but haven’t yet; I’m only one man). But if you haven’t heard of the book or of Tetlock, he has made something of a career out of holding prognosticators accountable, and his influence (and that of others) is starting to make itself felt.


Scott Alexander of SlateStarCodex makes yearly predictions and, following the example of Tetlock, scores them at the end of the year. He just released the scoring of his 2016 predictions. As part of the exercise, he not only makes predictions but attaches a confidence level to each one. In other words, is he 99% sure that X will (or won’t) happen, or is he only 60% sure? Of the predictions where his confidence level was 90% or higher, he missed only one: he predicted with 90% confidence that “No country currently in Euro or EU announces plan to leave,” and then of course there was Brexit. Last year he didn’t post his predictions until the 25th of January, but as I was finishing up this article he posted his 2017 predictions, and I’ll spend a few words at the end talking about them.
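
To make the scoring concrete: Alexander grades himself on calibration, asking whether roughly 90% of the things he tagged as 90% likely actually came true. Here’s a minimal sketch of that bookkeeping in Python, using invented predictions rather than his actual list:

```python
from collections import defaultdict

# Invented example data, NOT Alexander's actual predictions: each entry
# is (stated confidence, whether the prediction came true).
predictions = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.8, True), (0.8, True),
    (0.6, True), (0.6, False),
]

# Calibration check: group predictions by confidence level and compare
# the stated confidence with the fraction that actually came true.
buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    rate = sum(outcomes) / len(outcomes)
    print(f"said {confidence:.0%}: {sum(outcomes)}/{len(outcomes)} "
          f"came true ({rate:.0%})")
```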


As an aside, speaking of posting predictions on the 25th, waiting as long as you can get away with is one way to increase your odds. For example, last year Alexander made several predictions about what might happen in Asia. Taiwan held its elections on the 16th of January, and you could certainly imagine that knowing the results of that election would help with those predictions. I’m not saying this was an intentional strategy on Alexander’s part, but I think it’s safe to say that those first 24 days of January weren’t information free, and if we wanted to get picky we’d take that into account. Perhaps in response to this criticism, Alexander posted his predictions much earlier this year.


Returning to Alexander’s 2016 predictions, they’re reasonably mundane. In general he predicts that things will continue as they have. There’s a reason he does that. It turns out that if you want to get noticed, you predict something spectacular, but if you want to be right (at least more often than not) then you predict that things will basically look the same in a year as they look now. Alexander is definitely one of those people who wants to be right. And I am not disparaging that; we should all want to be more correct than not. But trying to maximize your correctness has one major weakness, and that weakness is why, despite Tetlock’s efforts, prediction is still more amusing than useful.


See, it’s not the things which stay the same that are going to cause you problems. If things continue as they have been, then it doesn’t take much foresight to reap the benefits and avoid the downside. It’s when the status quo breaks that prediction becomes both useful and, ironically, impossible.


In other words, someone like Alexander (who, by the way, I respect a lot; I’m just using him as an example) can have year after year of results like his 2016 results and then be completely unprepared the one year some major black swan occurs and wipes out half of his predictions.


Actually, forget about wiping out half his predictions; let’s just look at his largely successful world event predictions for 2016. There were 49 of them and he was wrong about only eight. I’m going to ignore one of the eight because he was only 50% confident about it (that’s the equivalent of flipping a coin, and he admits himself that being 50% confident is pretty meaningless). This gives us 41 correct predictions out of 48 total, or 85% correct. Which seems really good. The problem is that the stuff he was wrong about is far more consequential than the stuff he was right about. He was wrong about the aforementioned Brexit, and he made four wrong predictions about the election. (Alexander, like most people, was surprised by the election of Trump.) He was also wrong about the continued existence of ISIS and about oil prices. As someone living in America you may doubt the impact of oil prices, but if so I refer you to the failing nation of Venezuela.


Thus while you could say that he was 85% accurate, it’s the 15% he wasn’t accurate about that will be the most impactful. In other words, he was right about most things, but the consequences of his seven missed predictions will easily exceed the consequences of the 41 predictions he got right.


That is the weakness of trying to maximize being correct. While being more right than wrong is certainly desirable, in general the few things people end up being wrong about are far more consequential than all the things they’re right about. Obviously it’s a little crude to use the raw number of predictions as our standard, but I think in this case it’s nevertheless essentially accurate. You can be right 85% of the time and still end up in horrible situations, because the 15% of the time you’re wrong, you’re wrong about the truly consequential stuff.
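
To see how raw accuracy and consequence come apart, here’s a toy calculation. The weights are invented stand-ins for “how much being wrong matters”; nothing here is real data:

```python
# Toy numbers, invented for illustration: raw accuracy vs. a
# consequence-weighted score for the same set of predictions.
hits = 41         # routine status-quo predictions that came true
misses = 7        # black-swan-type predictions that didn't

hit_value = 1     # being right about the status quo gains you little
miss_cost = 10    # being wrong about a black swan costs you a lot

accuracy = hits / (hits + misses)
net_outcome = hits * hit_value - misses * miss_cost

print(f"accuracy:    {accuracy:.0%}")   # ~85%, looks impressive
print(f"net outcome: {net_outcome}")    # -29, the misses dominate
```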


I’ve already given the example of Alexander being wrong about Brexit and Trump, but there are of course other examples. The recent financial crisis is a big one. One of the big hinges of the investment boom leading up to the crisis was the idea that the US had never had a nationwide decline in housing prices. And that was a true and accurate position for decades, but the one year it wasn’t true made the dozens of years when it was true almost entirely inconsequential.


You may be thinking from all this that I have a low opinion of predictions, and that’s largely the case. Once again this goes back to the ideas of Taleb and antifragility. One of his key principles is to reduce your exposure to negative black swans and increase your exposure to positive black swans. But none of this exposure shifting involves accurately predicting the future. And to the extent that you think you can predict the future, you’re less likely to worry about the sort of exposure shifting Taleb advocates, which makes things more fragile. Also, in a classic cognitive bias, everything you correctly predicted you ascribe to skill, while every time you’re wrong you put it down to bad luck. Which, remember, is an easy trap to fall into, because if you expect the status quo to continue you’re going to be right a lot more often than you’re wrong.


Finally, because of the nature of black swans and negative events, if you’re prepared for a black swan it only has to happen once, but if you’re not prepared then it has to NEVER happen. For example, imagine I predicted a nuclear war, and I had moved to a remote place, built a fallout shelter, and stocked it with food. Every year I predict a nuclear war, and every year people point me out as someone who makes outlandish predictions to get attention, because year after year I’m wrong. Until one year, I’m not. Just as with the financial crisis, it doesn’t matter how many times I was the crazy guy from Wyoming and everyone else was the sane defender of the status quo: they got all the consequences of being wrong despite years and years of being right, and I got all the benefits of being right despite years and years of being wrong.
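
A back-of-the-envelope calculation makes the asymmetry explicit. Every number below is made up purely to show the shape of the payoff (a small annual cost against a rare but enormous stake), and it assumes the swan lands exactly once in the window:

```python
# Every number here is invented; the point is the shape of the payoff,
# not the specific values. Assumes the catastrophe happens exactly once.
years = 40
annual_cost = 1            # yearly cost of the shelter (and the ridicule)
catastrophe_stakes = 1000  # value of surviving the year the swan lands

# Prepared: pay the cost every year, collect the payoff the one time
# the catastrophe actually happens.
prepared = -annual_cost * years + catastrophe_stakes

# Unprepared: pocket the savings every year, then eat the full loss.
unprepared = annual_cost * years - catastrophe_stakes

print(f"prepared:   {prepared:+}")    # +960, despite 40 years of being 'wrong'
print(f"unprepared: {unprepared:+}")  # -960, despite 40 years of being 'right'
```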


All of this is not to say that you should move to Wyoming and build a fallout shelter. It’s only to illustrate the asymmetry: being right most of the time doesn’t help much if, when you’re wrong, you’re wrong about something really big.


In discussing the move towards tracking the accuracy of predictions, I neglected to discuss why people make outrageous and ultimately inaccurate predictions in the first place. Why do predictions need to be extreme in order to be noticed? Many people will chalk it up to a need for novelty, or to a crowded media environment, but once you realize that it’s the black swans, not the status quo, that cause all the problems (and, if you’re lucky, bring all the benefits) you begin to grasp that people pay attention to extreme predictions not out of some morbid curiosity or some faulty wiring in their brain, but because if there is some chance of an extreme prediction coming true, that is what they need to prepare for. Their whole life and all of society is already prepared for the continuation of the status quo; it’s the potential black swans you need to be on the lookout for.


Consequently, while I totally agree that if someone says X will happen in 2016 it’s useful to go back and record whether that prediction was correct, I don’t agree with the second, unstated assumption behind this tracking: that extreme predictions should be done away with because they so often turn out not to be true. If someone thinks ISIS might have a nuke, I’d like to know that. I may not change what I’m doing, but then again I just might.


To put it in more concrete terms, let’s assume that you heard rumblings in February of 2000 that tech stocks were horribly overvalued, and so you took the $100,000 you had invested in the NASDAQ and moved it into bonds, or cash. When the bottom rolled around in September of 2002, you would still have your $100k, whereas if you hadn’t taken it out you would have lost around 75% of your money. But let’s assume instead that you were wrong, that nothing happened, and that while the NASDAQ didn’t continue its meteoric rise, it grew at the long-term stock market average of 7%. In that case, by pulling out you would have missed around $20,000 in gains.


For the sake of convenience, let’s say that you didn’t quite time it perfectly and you only prevented the loss of $60k. The $20k you might have made if your instincts had proven false is one third of the $60k you might actually have lost. Consequently you could be less than 50% sure that the market was going to crash (in other words, you viewed a crash as improbable) and still have a positive expected value from taking all of your money out of the NASDAQ. Depending on the severity of the unlikely event, it may not matter that it’s unlikely, because it can still make sense to act as if it were going to happen, or at a minimum to hedge against it. In the long run you’ll still be better off.
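
Working through the arithmetic: exiting costs you $20k if you’re wrong and saves you $60k if you’re right, so the exit has positive expected value whenever you put the odds of a crash above 20/(20+60) = 25%. A quick sketch of that calculation:

```python
# Figures from the NASDAQ example above.
loss_avoided = 60_000   # what exiting saves you if the crash comes
forgone_gain = 20_000   # what exiting costs you if the market just grows

def exit_ev(p_crash: float) -> float:
    """Expected value of exiting the market, relative to staying in."""
    return p_crash * loss_avoided - (1 - p_crash) * forgone_gain

# Break-even: p * 60k = (1 - p) * 20k, i.e. p = 20k / 80k = 25%.
for p in (0.10, 0.25, 0.40):
    print(f"p(crash) = {p:.0%}: EV of exiting = {exit_ev(p):+,.0f}")
```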


Having said all this, you may think that the last thing I would do is offer up some predictions of my own, but that is precisely what I’m going to do. These predictions will differ in format from Alexander’s. First, as you may have guessed already, I am not going to limit myself to predicting what will happen in 2017. Second, I’m going to make predictions which, while they will be considered improbable, will have a significant enough impact if true that you should hedge against them anyway. This significant impact means that it won’t really matter if I’m right this year or in 50 years; it will amount to much the same regardless. Third, a lot of my predictions will be about things not happening, and with those predictions I will have to be right for all time, not just for 2017. Finally, for several of these predictions I hope I am wrong.


Here is my list of predictions. There are 15 of them, which means I won’t be able to give a lot of explanation for any individual prediction. If you see one that you’re particularly interested in a deeper explanation of, then let me know and I’ll see what I can do to flesh it out. Also, as I mentioned, I’m not going to put any kind of a deadline on these predictions, saying merely that they will happen at some point. For those of you who think that this is cheating, I will say that if 100 years have passed and a prediction hasn’t come true, then you can consider it to be false. And since many of my predictions are about things that will never happen, I am, in effect, saying that they won’t happen in the next 100 years, which is probably as far out as anyone could be expected to see. Despite this caveat I expect those predictions to hold true for even longer than that. With all of those caveats, here are the predictions, split into five categories:


Artificial Intelligence


  1. General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.


If there were a single AI able to do everything on this list, I would consider this a failed prediction. For a recent examination of some of the difficulties, see this recent presentation.


  2. A complete functional reconstruction of the brain will turn out to be impossible.


This includes slicing and scanning a brain, or constructing an artificial brain.


  3. Artificial consciousness will never be created.


This is of course tough to quantify, but I will offer up my own definition for a test of artificial consciousness: we will never have an AI that makes a credible argument for its own free will.


Transhumanism


  4. Immortality will never be achieved.


Here I am talking about the ability to suspend or reverse aging. I’m not assuming some new technology that lets me get hit by a bus and survive.


  5. We will never be able to upload our consciousness into a computer.


If I’m wrong about this I’m basically wrong about everything. And the part of me that enviously looks on as my son plays World of Warcraft hopes that I am wrong; it would be pretty cool.


  6. No one will ever successfully be returned from the dead using cryonics.


Obviously weaselly definitions which include someone being brought back from extreme cold after three hours don’t count. I’m talking about someone who’s been dead for at least a year.


Outer Space


  7. We will never establish a viable human colony outside the solar system.


Whether this is through robots constructing humans using DNA, or a ship full of 160 space pioneers, it’s not going to happen.


  8. We will never have an extraterrestrial colony (Mars or Europa or the Moon) of greater than 35,000 people.


I think I’m being generous to imagine it would even get close to this number, but if it did, it would still be smaller than the top 900 US cities, and smaller than Liechtenstein.


  9. We will never make contact with an intelligent extraterrestrial species.


I have already offered my own explanation for Fermi’s Paradox, so anything that fits into that explanation would not falsify this prediction.


War (I hope I’m wrong about all of these)


  10. Two or more nukes will be exploded in anger within 30 days of one another.


This means a single terrorist nuke that didn’t receive retaliation in kind would not count.


  11. There will be a war with more deaths than World War II (in absolute terms, not as a percentage of population).


Either an external or an internal conflict would count; a Chinese civil war, for example.


  12. The number of nations with nuclear weapons will never be less than it is right now.


The current number is nine. (US, Russia, Britain, France, China, North Korea, India, Pakistan and Israel.)


Miscellaneous


  13. There will be a natural disaster somewhere in the world that kills at least a million people.


This is actually a pretty safe bet, though one that people pay surprisingly little attention to, as demonstrated by the near-total ignorance of the 1976 Tangshan earthquake in China.


  14. The US government’s debt will eventually be the source of a gigantic global meltdown.


I realize that this one isn’t very specific as stated, so let’s just say that the meltdown has to be objectively worse on all (or nearly all) counts than the 2007-2008 financial crisis. And it has to be widely accepted that US government debt was the biggest cause of the meltdown.


  15. Five or more of the current OECD countries will cease to exist in their current form.


This one relies more on the implicit 100-year time horizon than the rest of the predictions. And I would count any foreign occupation, civil war, major dismemberment, or change in government (say from democracy to dictatorship) as fulfilling the criteria.


A few additional clarifications on the predictions:


  • I expect to revisit these predictions every year. I’m not sure I’ll have much to say about them, but I won’t forget about them. And if you feel that one of the predictions has been proven incorrect, feel free to let me know.


  • None of these predictions is designed to be a restriction on what God can do. I believe that we will achieve many of these things through divine help; I just don’t think we can do it ourselves. The theme of this blog is not that we can’t be saved, but rather that we can’t save ourselves with technology and progress, a theme you may have noticed in my predictions.


  • I have no problem with people who are attempting any of the above, or who are worried about the dangers of any of the above (in particular AI). I’m a firm believer in the prudent application of the precautionary principle. I think a general artificial intelligence is not going to happen, but for those who do, like Eliezer Yudkowsky and Nick Bostrom, it would be foolish not to take precautions. In fact, insofar as some of the transhumanists emphasize the elimination of existential risks, I think they’re doing a useful and worthwhile service, since it’s an area that’s definitely underserved. I have more problems with people who attempt to combine transhumanism with religion, as a bizarre turbo-charged millennialism, but I understand where they’re coming from.


Finally, as I mentioned above, Alexander has published his predictions for 2017. As in past years, he keeps all or most of the applicable predictions from the previous year (while updating the confidence levels) and then incrementally expands his scope. I don’t have the space to comment on all of his predictions, but here are a few that jumped out:


  1. Last year he had a specific prediction about Greece leaving the Euro (95% chance it wouldn’t); now he just has a general prediction that no one new will leave the EU or the Euro, and gives that an 80% chance. That’s probably smart, but less helpful if you live in Greece.
  2. He has three predictions about the EMDrive. That could be a big black swan, and I admire the fact that he’s willing to jump into that.
  3. He carried over a prediction from 2016 of no earthquakes in the US with greater than 100 deaths (99% chance). I think he’s overconfident on that one, but the prediction itself is probably sound.
  4. He predicts that Trump will still be president at the end of 2017 (90% sure) and that no serious impeachment proceedings will have been initiated (80% sure). These predictions seem to have generated the most comments, and they are definitely areas where I fear to make any predictions myself, so my hat’s off to him here. I would only say that the Trump presidency is going to be tumultuous.

And I guess with that prediction we’ll end.



Speaking of predictions with no end date: I predict that I’m going to keep doing this for as long as I can hold off the encroaching dementia, but I’ll be less grumpy about it if I get some donations. My children, who always bear the brunt of my grumpiness, thank you.

2 comments:

  1. When I listened to the Economist's predictions for 2017 I wished that they had included how they did on their predictions from 2016.

    Reply: Yeah, I was actually going to use their predictions as an example, but they weren’t actually all that useful. And of course everyone’s embarrassed by how badly they missed Trump and Brexit.
