
Saturday, January 6, 2018

Predictions Revisited for 2018

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:



Or download the MP3



It’s that time of the year when some people resolve to do better in the coming 12 months than they did in the last 12. Other people use the occasion of the New Year to review the year just passed by releasing best of (and worst of) lists. Still others might take the opportunity to honor those who have died in 2017 or to bemoan all the bad things which happened (most of which appear to involve Trump.)


If you’re of a slightly more analytical bent, you might make predictions for the coming year. And if you’re trying to be scientific, this is also the time when you review how well you did on last year’s predictions. I plan to use this post to do both, but as long time readers might know, my system for prediction is somewhat different from most people’s. The predictions I made last year were permanent predictions. Or at least I was predicting as far out as I thought I had any chance of seeing clearly, which means my predictions aren’t quite permanent; I actually cut things off at 100 years.


I should also mention that last year’s prediction post ended up being one of my favorites, and not because of the predictions, but because it was one of the posts I returned to over and over again when I needed to explain my view of the world. This mostly happened during in-person discussions, but you may have noticed that it made an appearance not that long ago when I was discussing the difference between Bayesian Rationality and Talebian antifragility.


The theme of the previous post that I kept returning to is important enough that it deserves to be repeated here, though I won’t go into quite as much detail as I did previously.


To start with, recently there has been a big push to make people more accountable for their predictions, to do a better job of checking the accuracy of those predictions after the fact, and finally to have predictors apply confidence levels to the predictions they’ve made. The biggest advocate for this is Philip Tetlock of the Good Judgement Project (and also the author of Superforecasting.) In my previous posts, I pointed out that while this effort is laudable, it has a tendency to miss Black Swans. Let me give you an example of what I mean, and instead of using an example of a negative Black Swan, as I so often do, this time I’ll use an example of a positive one.


The biggest financial story of the year has been the explosion in the value of cryptocurrencies. But for most people, more than its importance as a story was the fact that you could have made an enormous amount of money by investing in something like Ethereum, which went up a staggering 10,000% in 2017. How did the Good Judgement Project do on predicting this? As far as I can tell they didn’t even try. I couldn’t find any cryptocurrency related predictions on their site for 2017. (If you do find any please point them out to me.) This is not to say that no one made any cryptocurrency predictions. Scott Alexander of SlateStarCodex uses a similar prediction methodology, and he did make a cryptocurrency prediction. He predicted, with 60% confidence, that bitcoin would end the year higher than $1,000. At the time he made the prediction bitcoin was below $1,000, but just barely, so essentially what he was saying is that he was 60% sure that you wouldn’t lose money on bitcoin. Not exactly a ringing endorsement, and definitely not something that gave any hint as to what was actually going to happen in 2017. Which is not to fault Alexander in any way. But if you were using this sort of prediction as a guide for how 2017 would look, there were some glaring omissions.
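
To make the scale of that miss concrete, here’s a quick back-of-the-envelope sketch in Python. The 10,000% figure is the one quoted above; the $1,000 stake is just for illustration:

```python
# Back-of-the-envelope look at the asymmetry described above.
# A 10,000% gain means the position ends at 101x its starting value
# (the original stake plus 100x in gains).

def final_value(initial, pct_gain):
    """Value of a position after a given percentage gain."""
    return initial * (1 + pct_gain / 100)

# Ethereum in 2017: up roughly 10,000% (the figure quoted above).
print(final_value(1_000, 10_000))  # 101000.0 -- $1,000 becomes $101,000

# The hedged prediction only said bitcoin would stay above $1,000,
# i.e. that you probably wouldn't lose money.
print(final_value(1_000, 0))       # 1000.0 -- no gain at all
```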


On the other hand, you had people like the Winklevoss twins, who were saying back in 2013 that bitcoin was going to explode, and who spent from December 3rd of 2013 through February of 2017 looking kind of foolish as bitcoin’s value stayed basically flat. But now they’re the first bitcoin billionaires. They would have done terribly as entrants in the Good Judgement Project in 2014, 2015 and 2016, but in the end, even if their timing was off, they made the right bet.


The point of all this is that most of the things which really change the world (like bitcoin or the election of Trump) have a really low probability of happening. In fact, part of the reason they’re so impactful is precisely this low probability: the impact is greater because no one is prepared for it. But if, like the Good Judgement people, your primary goal is accuracy, then you’re not even going to make predictions about these events, or if you do, they’re going to be very conservative, like Scott Alexander’s 60% confidence prediction that bitcoin would end the year above $1,000.


I said I would keep this short, and the idea of exposing yourself to positive Black Swans (like bitcoin) and protecting yourself from negative Black Swans has been covered in many of my posts. But as it relates to my prediction system vs. the Good Judgement system, the point I want to get at is that, despite what people might think, the best way to deal with an uncertain future is not to maximize the accuracy of your predictions. It’s to have a plan for even those things that appear really rare, if the impact of those rare events is great enough. Not because they’re likely, but because the consequences are so great. In other words, sometimes it only takes being wrong once to undo all the times you were right. And sometimes it only takes being right once to undo all the times you were wrong. If the future proceeds how everyone expects, then having those same expectations is not going to get you very much, nor is much intelligence required. It’s when things happen that very few people expected that wisdom and preparation become important.
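
That arithmetic is worth making explicit. Here’s a minimal sketch in Python; the 1-in-100 probability and the payoff numbers are invented purely for illustration:

```python
# Two stylized bettors facing a rare, high-impact event.
# All numbers here are invented for illustration.

p_rare = 0.01     # chance the rare event happens in a given year
steady_gain = 1   # small payoff for being "right" in a normal year
ruin = -500       # loss if you're unprepared when the event hits
windfall = 500    # gain if you're positioned for the event when it hits

# Bettor A maximizes accuracy: right 99 years out of 100,
# wiped out in the hundredth.
ev_accurate = (1 - p_rare) * steady_gain + p_rare * ruin

# Bettor B pays a small cost every normal year to stay prepared,
# and collects when the rare event finally arrives.
ev_prepared = (1 - p_rare) * -steady_gain + p_rare * windfall

print(f"accurate but fragile:     {ev_accurate:+.2f} per year")  # -4.01
print(f"often wrong but prepared: {ev_prepared:+.2f} per year")  # +4.01
```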


With that introduction out of the way, let’s see how my permanent predictions have held up over the previous year. So far I’m not wrong about anything, which is good; it would be pretty embarrassing to make a permanent prediction and be wrong less than a year later. Though I will admit that the winds of 2017 have not always blown in my favor.


Artificial Intelligence


1- General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.
This is one of the areas where there were a lot of exciting developments during 2017. The biggest came when AlphaGo beat the world’s number one Go player in May. There was further excitement in December when a successor engine, AlphaZero, apparently mastered chess as well. Though there are a lot of quibbles with this result.


Last year I linked to a list and said that if a single AI could do everything on the list my prediction would have failed. Reviewing the list now, I think there are a few items where we’re really close, but nearly every item still requires its own specialized system. That’s why AlphaZero making the transition from Go to chess was such big news, but even that is still a long way from any kind of general artificial intelligence.


2- A completely functional reconstruction of the brain will turn out to be impossible.
I haven’t come across anything one way or the other on this one, though it remains a big area of interest for scientists. But to the best of my knowledge, there were no major breakthroughs in 2017. I know Robin Hanson is a big advocate for this technology, so I promise that by next year I will at least read his book The Age of Em.


3- Artificial consciousness will never be created.
I’ve seen no evidence that 2017 brought us any closer to understanding consciousness, let alone implementing it artificially. I would argue that we’re really far away from this.


One of the points I’d like to emphasize, as we review these predictions and sets of predictions, is to continually ask, “But what if Jeremiah is wrong?” The AI section is particularly interesting in this regard because it can go either way. Many people think that if we do manage to develop superintelligent AI it will be the end of all our problems. Other people think the exact opposite; they agree with the first group that a superintelligent AI would be the end of all our problems, but only because it will end all of us. On the positive side, if I’m wrong, then the consequences are almost entirely beneficial, and other than the hit to my ego, it will matter not at all. On the negative side, if I’m wrong then there are grave consequences, which is why, despite arguing it won’t happen, I am nevertheless completely in favor of efforts to ensure friendly AI, and have even donated money to that end.


I could see someone arguing that the antifragile thing to do is assume that we might create a malevolent AI, and that under the precautionary principle we should do everything in our power to stop AI research. I would also be fine with this, though I don’t think, given the way technology works, that such a ban would be very effective. Also, I understand that by arguing it’s impossible I might weaken attempts to control AI (oh, that I were that influential), making the negative Black Swan more likely, but in this case I think the bigger danger is putting all of our eggs into the salvation-through-AI basket and neglecting other, more worthwhile endeavors.


Transhumanism


1- Immortality will never be achieved.
In the AI section most of the news was not good for me and my predictions. In this section the news out of 2017 is more mixed, especially on this topic. You may have heard that life expectancy in the US has actually gone down two years in a row. One theory for achieving immortality is that we’ll soon reach a point where life expectancy increases by more than one year every year, which would promise a form of immortality. But if we can’t even keep the life expectancy gains we already have, then this gradual path to immortality appears unlikely. Which is not to say there couldn’t be some miraculous breakthrough that brings immortality in one fell swoop, but 2017 brought no sign of that either.
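
That “more than one year per year” idea is sometimes called longevity escape velocity, and it’s easy to sketch. A toy model, with all the numbers invented for illustration:

```python
# Toy model of "longevity escape velocity" (all numbers illustrative).
# If remaining life expectancy grows by more than one year per calendar
# year, it never runs out; if it grows by less, it eventually does.

def years_survived(remaining, annual_gain, cap=200):
    """Count years until remaining life expectancy hits zero (or the cap)."""
    years = 0
    while remaining > 0 and years < cap:
        remaining -= 1            # one year of life used up
        remaining += annual_gain  # medical progress adds some back
        years += 1
    return years

print(years_survived(40, 0.5))  # 80 -- progress helps, but life stays finite
print(years_survived(40, 1.1))  # 200 -- hits the cap: "escape velocity"
```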


2- We will never be able to upload our consciousness into a computer.
This is closely related to the last section, and most of what I said there applies here. Needless to say by any reasonable estimate we’re a long way away on this.


3- No one will ever successfully be returned from the dead using cryonics.
Another area where the technology is mostly still non-existent, meaning progress is difficult to measure. But one way to get an approximate sense of progress is to look at the number of people in cryonic suspension, and when one of the leading cryonics organizations added only three people in 2017 (down from six in 2016 and ten in 2015), I have to assume that no significant breakthroughs have been made over the last year.


To be honest this surprises me a little bit. If I were a transhumanist I think I’d definitely sign up for cryonics, but very few people have. And it’s not that people have signed up but just not died yet: the two largest organizations have a total of only 3,410 members of all types (this includes people who’ve submitted DNA, and pets!)


To turn once again to the question: but what if I’m wrong? Well, then we get immortal individuals, living their lives in a virtual paradise. And this paradise is available to anyone who coughs up $28,000 for cryonic suspension. (That seems cheap. You can see why I’m surprised more atheists/transhumanists/adventurous souls haven’t taken advantage of it.)


But if I’m right, then we have a bleaker future. We’re going to have to either get our act together or hope that there’s a higher power out there. If we prepare for a future where we will never escape our problems by uploading our consciousness, and it turns out we’re wrong, then we’ve lost nothing. But if we assume that we can escape in this fashion, behave accordingly, and then it doesn’t happen, we could be in a lot of trouble. And I realize that this caution does not apply to very many people, but I expect that number will go up over time.


Outer Space


1- We will never have an extraterrestrial colony (Mars or Europa or the Moon) of greater than 35,000 people.
Elon Musk continues to push ahead, and in 2017 he released an updated version of his Mars plan. Apparently the big change is that he’s made the rockets and the entire program more efficient. (There’s a good breakdown here.) It continues to be insanely ambitious, and while I hesitate to make short term predictions, I predict he won’t hit his goal of landing people on Mars in 2024.


Outside of Musk, there is still NASA, which does currently have a plan to make it to Mars sometime in the 2030s. Though my impression is that NASA has had a vague plan to go to Mars for as long as I’ve been alive, and it has always been a decade or two in the future, just like now. In other news, Trump recently signed a directive to go back to the Moon, but as of yet there are no details.


Thus, while we’re still a long way away from having any extraterrestrial colony of any size, if someone wanted to argue that we moved marginally closer in 2017, I wouldn’t fight them on it.


2- We will never establish a viable human colony outside the solar system.
Obviously the path to this runs through Mars, so everything I said above applies here; just multiply the difficulty by several orders of magnitude. Also, if you want a very in-depth look at this, I would once again recommend reading Aurora by Kim Stanley Robinson. I know it was published in 2015, but I think it gives a pretty up-to-date view of the difficulties.


3- We will never make contact with an intelligent extraterrestrial species.
In some respects this is potentially the easiest of the three predictions for me to be wrong about: it’s possible that aliens will do all the work. And we did see the first (confirmed) object from outside the solar system in 2017, which, interestingly, was shaped very strangely, and therefore may have been artificial. I guess that could be counted as progress towards proving me wrong, but on the other hand, I think that every year that goes by without any contact adds to the probability that there will never be any contact.


If you were paying attention, you’ll have noticed I re-ordered 1 and 2 from their original positions, since there’s no reasonable scenario under which extrasolar colonies aren’t strictly dependent on extraterrestrial colonies. Beyond that, we once again ask: but what if I’m wrong? And, once again, I think there is very little downside if I’m wrong, at least on the first two. (Being wrong about the third prediction creates a future I don’t think any of us can imagine.) As with many of these predictions I hope I am wrong, but when it comes to leaving the Earth things are a little bit different.


I think there are a significant number of people, especially among the more educated, who assume, on some level, that however bad it gets here, we will eventually be able to leave the Earth behind. And an even greater number of people who assume that progress in general will fix the problems eventually, even if they haven’t thought things all the way through to the necessity of eventually leaving the Earth. And the point I’ve made in several posts is that if leaving Earth is the eye of the needle, if that is going to eventually be the only way forward, then we probably need to take it a lot more seriously, because it’s going to be significantly more difficult than most people think. And as I pointed out in the last post, it’s possible, even likely, that taking it seriously enough to make it happen will involve some trade-offs.


War (I hope I’m wrong about all of these)


1- Two or more nukes will be exploded in anger within 30 days of one another.
Just the excitement over North Korea during 2017 has, unfortunately, brought this prediction much closer to being fulfilled.


2- There will be a war with more deaths than World War II (in absolute terms, not as a percentage of population.)
Another thing I desperately hope is not true. And while the news on the first point is bad, here I haven’t seen much sign that any of the major powers is interested in all-out war. Which is good.


3- The number of nations with nuclear weapons will never be less than it is right now.
When introducing this prediction last year I mentioned that the current number is nine: US, Russia, Britain, France, China, North Korea, India, Pakistan and Israel. I haven’t seen any hint that any of the current countries are about to give up their weapons, but I have seen talk about the idea of Japan and South Korea possibly acquiring them in response to North Korea. I think it would take a lot for Japan to decide to develop nukes, but the difficulty would be entirely emotional and psychological, the technical side of it would be easy for them.


More than any other section, this is the one where I hope I’m wrong. I even include it in the title. I very much hope Steven Pinker is right and that the world is getting less and less violent. And I am aware of the evidence for this. But, not to beat a dead horse, it bears repeating that the disappointment which comes from expecting a war and not having one is far better than the disappointment of not expecting a war and getting one.


Miscellaneous


1- There will be a natural disaster somewhere in the world that kills at least a million people.
Given the random nature of disasters I don’t think 2017 did much to change the probabilities here. Though for those who believe that global warming will cause more hurricanes and greater climate instability (I think the jury is still out), every year we don’t do something significant about carbon emissions brings us closer to that sort of climate disaster.


2- The US government’s debt will eventually be the source of a gigantic global meltdown.
It wasn’t that long ago that I covered this at some length so I won’t add much except to say that the deficit was $666 billion in 2017 (I’ll try not to read too much into that number.) And the debt grew by only $600 billion, which was slower than the Obama era average of over a trillion.


3- Five or more of the current OECD countries will cease to exist in their current form.
2017 brought some excitement here with the Catalonian independence vote. And of course there was talk of a Calexit after the election of Trump. Meaning this is another prediction I think is looking pretty good after 2017.


What all of these predictions have in common is that they’re Black Swans: improbable events that would have a major impact. Most of the items on the list are things people are insufficiently protected against, while others (primarily the Outer Space section) are things we’re doing a bad job of capitalizing on. Looking forward, I don’t think any of my predictions are going to be proven true or false in 2018, and probably not in 2019. But I do believe that sometime between now and January 1st, 2117, all of them will end up being accurate. And it doesn’t matter if it only happens that one year; if humanity has made the wrong call, in most cases that one time is all it will take.





One final prediction: I’ll continue to try to come up with vaguely humorous appeals for donations, and most of them will largely fail. Despite that, consider donating anyway.

4 comments:

  1. I think some measure should be taken with overlapping predictions. For example, if you predict some war type act will kill 1M+ people and then later predict there will be a major war in the upcoming year, you haven't really made two independent predictions but two overlapping ones. You could also predict some natural disaster will kill 1M people and then predict a natural disaster will kill at least 450,000 men. Since half the population is male, if you pull off the first you will almost certainly pull off the second. It's also very difficult to see how the 2nd could happen without at least getting very close to the first happening, unless you have some natural disaster that targets men (a comet landing on a new Million Man March perhaps?).

    At first I thought overlapping predictions were cheating but they are actually increasing your risk. Perhaps they are a negative intellectual black swan for bloggers and podcasters. By making an overlapping prediction, you can 'score two points' if the underlying thing happens but then you also double your loss if it doesn't. You're 'filling two slots' in your limited prediction basket with the same thing.
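
    (A quick sketch of that "two slots" intuition in Python; the +1/-1 scoring rule and the 50% hit rate are made up for illustration:)

```python
# Toy scoring model for overlapping vs. independent predictions.
# The +1/-1 scoring rule and the 50% hit rate are made up for illustration.
import random

random.seed(0)
P_HIT = 0.5
TRIALS = 10_000

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Overlapping: a single event decides both predictions at once,
# so you score +2 or -2, never anything in between.
overlap = [2 if random.random() < P_HIT else -2 for _ in range(TRIALS)]

# Independent: each prediction gets its own event.
indep = [sum(1 if random.random() < P_HIT else -1 for _ in range(2))
         for _ in range(TRIALS)]

# Same expected score, but overlapping doubles the swing (variance).
print(f"overlapping: mean {mean(overlap):+.3f}, variance {variance(overlap):.2f}")
print(f"independent: mean {mean(indep):+.3f}, variance {variance(indep):.2f}")
```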

    Anyway, I suspect that under AI, 1 and 3 are partially overlapping. If we make artificial consciousness, it would be hard to see how it wouldn't have at least some degree of general intelligence. Maybe not GI equal to any given human, but could you have a fully conscious AI that isn't at least holding GI equal to a somewhat dimwitted conscious human? I don't think so. The opposite, I think, can happen: general intelligence that isn't conscious... although I could see maybe the two are linked and at some level general intelligence requires consciousness.

    Quibble:
    "It wasn’t that long ago that I covered this at some length so I won’t add much except to say that the deficit was $666 billion in 2017 (I’ll try not to read too much into that number.) And the debt grew by only $600 billion, which was slower than the Obama era average of over a trillion."

    Fiscal year runs from Oct 1 to Sep 30th. So the 'only grew by $600B' is 3 months plus of Obama and the balance Trump. (BTW, how is $671B a 'bit' over $600B? Are we now rounding down a '7'?!) The only fair reading of deficits is decline from the start of Obama's first term with what appears to be a gradual increase starting now.

    BTW, still standing by my 'deficits don't matter' view and I suspect not only could we continue with deficits but we could even move to 100% deficit financing and simply set most taxes to zero.

      1. The overlapping predictions point is a good one. As I was reviewing things, I realized that it would take a pretty big conventional war to kill as many people as World War II, so the most likely way for it to happen would be nukes.

      I'm missing the part where I say a disaster will kill 450,000 men...

      As far as consciousness and GI, I agree that it'd be tough to get consciousness without GI, but as you say it's conceivable that you could have GI without consciousness. That's why I separated them. I could be wrong about the GI, but if I am, I don't think I'm wrong about the consciousness.

      And I didn't expect you to change your position on deficit... ;)

  2. Also in terms of outer space, your predictions seem to contradict your stance that the Fermi Paradox is a thing.

    Previously I offered the hypothetical that Earthlife has been around 4B years, let's say it has another 1B to go for a total of 5B. Say our galaxy has 5B planets that will follow earth's history but at any given moment they are randomly scattered around the galaxy and at random points in history. Even in the unlikely case that at this moment all 5B planets are at the '2018' point, our SETI efforts would be unlikely to pick any of them up. In order for Fermi to be a problem, you need to bet the next 1B years will look very different than the last 4B years.

    Just to be clear, if my hypothetical is right then at this moment 4/5ths of life bearing planets are somewhere between years -4B and 2018. Only 1/5th of planets are at 2019 through 1,000,002,018. In order for Fermi to be an issue, at some point history will have to consist of not only extra-solar colonies but a huge number of them, in order to fill up the galaxy from at least one lucky planet. But if your prediction is right then our galaxy is filled with plenty of life bearing planets, plenty of intelligent life bearing planets and even a good number of planets with more advanced intelligences, but it is perfectly reasonable that our SETI efforts have so far turned up naught.
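
    (The fractions in that hypothetical are easy to check with a quick simulation; the sketch below scales the planet count down to a million for speed:)

```python
# Checking the fractions in the hypothetical above: planets sit at
# uniformly random points in a 5B-year history running from year -4B
# to year +1B, so "now" for Earth is year 2018 of that span.
import random

random.seed(0)
START = -4_000_000_000
END = 1_000_000_000
N = 1_000_000  # scaled down from 5B planets for speed

planets = [random.uniform(START, END) for _ in range(N)]

ahead = sum(1 for year in planets if year > 2018) / N
print(f"fraction of planets further along than Earth: {ahead:.3f}")
# Prints ~0.200: only about 1 in 5 planets is past our current point.
```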

  3. Question for consideration from the link I sent you:

    Can corporations (and also, I suppose, long-run institutions like the Catholic Church) be considered AIs? If so, I suspect we could guess they might be AIs that nonetheless do not have consciousness.
