
Saturday, January 13, 2018

Commentary on Moloch (Now With More Fermi's Paradox)

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:



Or download the MP3



As I mentioned a few weeks ago, in addition to recording my own blog posts and turning them into a podcast, I started doing the same thing with posts from SlateStarCodex, Scott Alexander’s fantastic blog. I would again repeat that if you like SSC and prefer to listen to your content rather than reading it, you should check it out. When I announced the SSC podcast several people requested that I record some of the older SSC posts, the classics if you will, so this week I decided to record his Meditations on Moloch post. And I figured as long as I was doing that, I might as well provide some commentary on it. Because while I think Alexander is largely right on the money, I don’t think he goes far enough, or maybe it’s fairer to say, in my opinion he misses some of the implications.


Of course Alexander’s post is nearly 15,000 words, and my posts are generally around 3,500 words, so I don’t know how much of his epic post I’ll be able to cover, but I think there are several ways in which his post ties into themes and subjects I’m interested in, specifically technology, religion, and Fermi’s Paradox, so I’ll be highlighting those connections. But before reading my take on things, I would urge you to read the original post or listen to my recording of it. The name of the post comes from Allen Ginsberg’s poem Howl, particularly part II; consequently Alexander makes extensive reference to the poem, and I’ve used a recording of one of Ginsberg’s recitations of Howl every time Alexander quotes from it. (Which I personally think is super cool.)


For those of you who can’t be bothered, or don’t have the time to read or listen to the original (and as I said it is long) I’ll give a very brief summation of (what I believe to be) Alexander’s point:


The idea of Moloch is the idea of the race to the bottom, and Alexander gives over a dozen examples of it in action, but the one I want to borrow is his analogy of the rats and the island.


Suppose you are one of the first rats introduced onto a pristine island. It is full of yummy plants and you live an idyllic life lounging about, eating, and composing great works of art (you’re one of those rats from The Rats of NIMH).


You live a long life, mate, and have a dozen children. All of them have a dozen children, and so on. In a couple generations, the island has ten thousand rats and has reached its carrying capacity. Now there’s not enough food and space to go around, and a certain percent of each new generation dies in order to keep the population steady at ten thousand.


A certain sect of rats abandons art in order to devote more of their time to scrounging for survival. Each generation, a bit less of this sect dies than members of the mainstream, until after a while, no rat composes any art at all, and any sect of rats who try to bring it back will go extinct within a few generations.


He offers the rat example in the Malthusian Trap section, and in this case it’s a race to the bottom for survival, but you can see a similar race to the bottom with capitalism and profits, democracy and electability, and essentially any system with multiple actors and limited resources. Alexander groups all of these drives to optimize one factor at the expense of others under the heading of Moloch, and makes the not unjustified claim that unless we can figure out some way to defeat Moloch, it will eventually mean the end of civilization. And by this he means the end of art, and literature, and science, and love. Because just like the rats, in the end those aren’t any of the factors we’re optimizing for.
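The rat story is just selection math, and it’s worth seeing how small an edge Moloch actually needs. Here’s a minimal sketch of the dynamic (the 1% survival edge and the fifty-fifty starting split are made-up illustrative numbers, not anything from Alexander’s post):

```python
# Two sects of rats on an island at its carrying capacity of 10,000:
# "artists" and "scroungers". Scroungers out-survive artists by a tiny
# edge each generation, and the island's total population never grows.

def simulate(generations, edge=0.01, total=10_000):
    artists = scroungers = total / 2
    for _ in range(generations):
        # Each sect's share of the next generation is proportional
        # to its relative survival rate.
        pool = artists + scroungers * (1 + edge)
        artists = total * artists / pool
        scroungers = total - artists
    return artists, scroungers

print(simulate(100))   # after 100 generations, roughly a quarter are artists
print(simulate(1000))  # after 1,000, artists are effectively extinct
```

A 1% edge sounds negligible, but compounded over generations it’s decisive, which is the whole point of the rat parable.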


Presumably very few of my readers will outright disagree with the idea of Moloch, that evolution, and capitalism, and politics are predisposed to engage in a race to the bottom. They will only disagree with how much of a factor it is in this enlightened age, with many granting the existence of Moloch, but convinced that he has already been well and truly beaten. Others, the more realistic in my opinion, believe that the contest is as yet undecided. And finally there are certainly people who believe that we are destined to lose, if we haven’t lost already.


The difference between these groups hinges almost entirely on their views of technology. With the first group feeling that technology has allowed us to progress past things like Malthusian Limits. We may be rats on an island, but technology allows us to turn more of the island into arable land, to build houses on the slopes of the extinct volcano, and best of all, if necessary, to build ships and find new islands.


People in the middle group agree with the benefits of technology, and agree that when the rats first arrived on the island things were pretty awesome, but they understand that other islands are really hard to get to, and that the current island is not looking so great. Also that while some rats have it pretty good, there are a lot of rats who already have a sad, miserable life.


In the final group we have people who are convinced that we’ve already wrecked the island we’re on, and that the other islands are horrible inhospitable wastelands. They probably also believe that we’re already on the verge of collapse and we just don’t know it. And, that sure, collapse won’t kill all the rats, but it will leave those that survive envying the dead.


I’m in the middle group, maybe leaning towards the final group, mostly because the majority of rats seem completely unaware of Moloch and the race to the bottom. But also because I think technology could fail to save us by not being powerful enough, but also by being too powerful. And this is where things like the singularity, and in particular artificial intelligence, come into play.


On these points Alexander and I are largely in agreement, and at this stage I mostly want to point out how the idea of Moloch ties into the theme of this blog. I claim the summer is past and the harvest is over, but this presupposes that there was a summer and a harvest, and it also explains why it must eventually end: not because I’m a pessimist, but because there is an implacable force driving things in that direction, Moloch. Alexander acknowledges the summer and the harvest as well and ties it into Robin Hanson’s idea of Dream Time, the idea that humanity has never believed in more crazy, non-adaptive things. In other words, we’ve never been less worried about Moloch. Why should this be?


Alexander offers four things that are currently keeping Moloch at bay, and allowing people to engage in a host of activities which don’t have much to do with daily survival.


Excess resources: When the rats first arrive on the island, they don’t need to worry about survival because they have a whole island to themselves. To tie this in to a point I’ve made in the past, it was not that long ago that we discovered the “island” of millions and millions of years’ worth of stored solar energy, in the form of fossil fuels. And we have kept Moloch at bay through the excess resources of coal, oil and gas, i.e. cheap and abundant energy. But as I pointed out in a previous post, even if you don’t believe in peak oil, or even if you’re unconcerned by global warming, at some point continued growth runs into hard limits built into the laws of physics.


Physical Limitations: Certain human values, like sleep, have to be preserved because optimizing productivity requires optimizing sleep, or at least not ignoring it. But what if we can use technology to get around physical limitations like the need for sleep? There’s been a lot of worry recently about whether robots or AI will be taking jobs, another thing I have written about in the past. As an example, just today I saw an article in Slate: Jack in the Box CEO Says It "Makes Sense" to Consider Replacing Human Cashiers with Robots If Wages Rise. Within the article there’s a vague hint of disapproval (like the scare quotes around “makes sense”), but if Jack in the Box doesn’t do it, someone else will, and I predict the time is not far distant when it won’t take increased wages to make robotic cashiers competitive; they’ll be dramatically cheaper almost regardless of what you pay the human cashiers.


Utility Maximization: This is just Alexander’s way of saying that human values and some of the systems we rely on are currently well aligned. That one of the reasons capitalism and democracy have done a pretty good job, despite being built such that they will eventually descend to the lowest common denominator, is because, for the moment, both are mostly aligned with genuine human values. Customer and employee satisfaction in the case of capitalism and voter satisfaction in the case of democracy. But Alexander points out that this is a temporary alliance and is more likely to be broken by technology than strengthened. (See robots above.)


On this topic Alexander made one comment that really jumped out at me and which I want to pay particular attention to. I don’t know how much this is related to technology, but it is something I’ve been more and more worried about. As I said, democracies are almost custom built to engage in a race to the bottom. Partially this is prevented by the need to align themselves with voter satisfaction, and partially this problem is solved, at least in the US, by what Alexander describes as:


having multiple levels of government, unbreakable constitutional laws, checks and balances between different branches, and a couple of other hacks.


I entirely agree with this statement, but I also think that the checks put in by the founders to prevent a race to the bottom have been significantly weakened over the last several decades. I talked about DACA/Dreamers a few months ago, and I just saw that a federal judge ordered the Trump administration to partially revive it. We can argue about the morality and ultimate wisdom of DACA all day long, but it’s hard to see where the judiciary ordering the executive to reimplement legislation the legislature failed to pass is anything other than a gross erosion of the checks Alexander mentions.


Returning to the list, the final thing Alexander lists as something which keeps Moloch at bay is coordination. As an example, the rats could all agree to limit the number of children they have. Corporations could all agree to only make people work 40 hours a week (or more likely be forced to do so by the government). And, in theory, under the rule of law, we all agree to abide by certain rules. Alexander repeatedly talks about how things are solvable from a god’s-eye view, but not by individual actors. Coordination allows people access to this god’s-eye view.


In Alexander’s view coordination is the only thing which is a pure weapon in the fight against Moloch. But, as we’re still on the subject of technology, does technology increase coordination or does it undermine it? At first glance most people, including Alexander, feel that it will increase the ability to coordinate, and certainly there are lots of reasons for thinking this, better communication being the primary one. But remember the original post was written in 2014. Now that it’s 2018 and you can look back over the last few years, with their increasingly fractured landscape, are you sure that technology is going to, on net, improve coordination?


Recall that during World War II the Soviet Union was coordinated enough to sacrifice 13.7% of its population in a coordinated attempt to defeat Nazi Germany (which sacrificed ~8.5% of its people in a coordinated attempt to do the opposite). Does anyone feel like we could achieve that level of coordination in modern America? You may argue that that sort of coordination was the bad sort. Maybe; the outcome was certainly bad. But does anyone doubt both countries were more coordinated than we are by any measurement? Or you may argue that given there is no Nazi Germany, it’s not necessary for us to possess the coordination of a Soviet Union. This is a good point if the only time we need to coordinate is when we’re in conflict with other nations, but what if we need to coordinate to avoid overfishing, or stem global warming, or to put a colony on Mars?


It occurs to me that, like many things, coordination may have a sweet spot: too little and a race to the bottom is unavoidable, too much and you create a fractured society of hundreds of ideological groups composed of perfectly coordinated individuals who refuse any level of coordination with other groups.


Without coordination, the problem is that you have millions of individual actors, all maximizing their own welfare at the expense of the commons. With perfect coordination you have thousands of ideological clusters which end up being more powerful than the individuals they’ve replaced, but no less selfish. Or to put it another way, we’ve moved from the individual to the alt-right and the antifa, but in between those two points we were all just Americans. Which was better: there was, perhaps, less true coordination, but we were more effective (witness our contributions to World War II).


Moving on from technology, the next topic on my list is religion, which is, of course, another very effective method for coordination, but which has lately fallen into disfavor. Alexander doesn’t spend much time specifically discussing religion except to lump it in with the other things that assist in coordination, like traditions, social codes, corporations, governments, etc. But in another post he makes a strong case for religion being the very best of all coordinating institutions. Which ties into a point I frequently bring up: Religion is more important and useful and antifragile than the non-religious (and even some of the religious) realize even in the absence of God. Coordination is just one more example of that.


Alexander brings up another example when he talks about the controversial topic of historical patriarchy and the tradition that women should stay home and bear children. Like most people these days he’s against it, but he points out that it does make a society more resistant to Moloch. (He actually uses the phrase gnon-resistant, but we don’t have the space to get into what gnon is.) And whatever your opinion of it, this is something most religions emphasize, and I offer it up as one more piece of evidence that these beliefs came about because they were useful, not because everyone in the past was a horrible sexist bigot. Whether they continue to be useful is a different topic.


So these are a few of the benefits of religion even in the absence of God, but what if we bring God back into the picture? What does a discussion of Moloch say about whether God exists? In the original post, Alexander talks again and again about terrible problems which disappear in the presence of a “god’s-eye view”; it’s a phrase he uses 17 times. Thus the obvious solution for Alexander is to build a god that can “kill Moloch dead.” In Alexander’s estimation our best chance at creating this god is to build a friendly, superintelligent AI. This gives us a god which will not only allow us to engage in perfect coordination, but endow us with infinite resources, remove all physical limitations, and create systems where incentives are perfectly aligned. That is certainly one plan.


Another plan is to hope such a God already exists. (Or to exercise something very similar to hope: faith.) In other words, Alexander’s plan is to hope that we can create a friendly god. Those who are religious have faith that such a God already exists. Is one strategy really that superior to the other? And is there any reason why you wouldn’t hope for both?


If someone is going to place their faith in Alexander’s plan, it’s reasonable to ask how likely it is. Well, let’s take a moment to discuss it. Alexander’s god doesn’t get us away from maximization: just as the rats have to maximize for survival, and capitalists have to maximize for profit, the superintelligence will end up having to maximize for something as well. If we want to know how much hope we should have, we need to know what it’s likely to maximize. The cautionary example that most people are familiar with is the paperclip maximizer, which stands in for all sorts of potential maximization. In this specific example the AI just happens to be programmed to produce paperclips, and ends up turning all available matter into paperclips. Of course, the example doesn’t have to be as silly as paperclips. Even if we tell the AI to optimize for intelligence, that could still entail turning all available matter into computers, which is less ridiculous than paperclips, but no less deadly for us. Or we could tell it to optimize our happiness, which may just result in the AI plugging an electrode into our pleasure center, a process AI researchers call wireheading. Voila! Maximum happiness, and who cares if you’re indistinguishable from a heroin addict? This is just a small taste of the difficulties we face in implementing Alexander’s plan, difficulties which have had whole books written about them. (For example Bostrom’s Superintelligence, which Alexander references frequently in the original post.)
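To see why maximizing a single value is so destructive, consider a toy optimizer given a fixed budget of matter and several uses for it. Because the objective scores paperclips and nothing else, the optimum is always a corner: everything to paperclips, zero to everything else. (The goods and the 100-unit budget are invented for illustration.)

```python
# Exhaustively search every whole-unit allocation of matter among three
# goods, scoring ONLY paperclips. The winner is always the corner
# solution that zeroes out everything the objective doesn't mention.

GOODS = ["paperclips", "art", "food"]

def best_allocation(total=100):
    best, best_score = None, -1
    for clips in range(total + 1):
        for art in range(total + 1 - clips):
            food = total - clips - art
            score = clips  # art and food contribute nothing to the score
            if score > best_score:
                best, best_score = (clips, art, food), score
    return dict(zip(GOODS, best))

print(best_allocation())  # {'paperclips': 100, 'art': 0, 'food': 0}
```

No matter how large you make the budget or the list of goods, anything absent from the objective ends up at zero, which is the “everything except the one thing being maximized is destroyed” failure mode in miniature.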


As an aside, you may at this point be wondering, if everything has to maximize for something what does religion say that God maximizes for? Well I only feel competent to speak about Mormonism, but on that count we know the answer. It comes in Moses 1:39:


For behold, this is my work and my glory—to bring to pass the immortality and eternal life of man.


This sounds like it would align pretty well with human values.


To return to Alexander’s plan, as you might imagine there are a lot of challenges when you decide to build a god. But what about the other plan: religious faith?


Many people will complain that there’s very little evidence for God (to be fair that may be why they call it faith...) and that there are several problems, not the least of which is the problem of evil and suffering. In a marvelous piece of serendipity I believe research into artificial intelligence has given us a great answer to those problems (an answer Alexander himself expressed some admiration for.) But I have yet to talk about Fermi’s Paradox, and I think within the original post we have yet another reason to believe there might be a God.


As Alexander describes Moloch there seems to be no reason why he hasn’t swallowed the universe. In fact Alexander says:


This is the ultimate trap, the trap that catches the universe. Everything except the one thing being maximized is destroyed utterly in pursuit of the single goal, including all the silly human values.


Oh, wait, that’s not Moloch, that’s what happens if we get Alexander’s AI god wrong. But I suppose that’s one form Moloch could take. But he also says when speaking of walled gardens:


Do you really think your walled garden will be able to ride this out?
Hint: is it part of the cosmos?
Yeah, you’re kind of screwed.


But if this is true, if Moloch will eat everything in its path, why hasn’t it already? Why haven’t we been destroyed by the Berserkers of Fred Saberhagen? Turned into computronium by alien AIs? Been enslaved by Kang and Kodos? If your argument is that interstellar travel is hard, then why haven’t we seen evidence of the Moloch-style process of creating a Dyson sphere? Or given the prominent place memes have in Alexander’s post, why haven’t any infectious messages been broadcast at us?


As usual, it’s always possible that we are entirely alone. Which I think leads to the worrying conclusion that Moloch is a strictly human creation… It’s also possible that the “Gardener over the entire universe” Alexander says we need, already exists, and our task is to figure out what he wants from us.


Finally, I think Alexander and I both agree that the harvest is past and the summer is ended, and that the path to salvation is a narrow one. He pins his hopes on transhumanism and a friendly AI. I’m pinning my hopes on the existence of God. If you think I’m silly, that’s fine. If you think Alexander is silly, that’s fine. If you think both of us are silly, well then, are you sure you’re not worshipping Moloch without even realizing it?





If you don’t think I’m silly, or if you do, but you find it amusing, consider donating.

Saturday, January 6, 2018

Predictions Revisited for 2018




It’s that time of the year when some people resolve to do better in the coming 12 months than they did in the last 12. Other people use the occasion of the New Year to review the year just passed by releasing best of (and worst of) lists. Still others might take the opportunity to honor those who have died in 2017 or to bemoan all the bad things which happened (most of which appear to involve Trump.)


If you’re of a slightly more analytical bent, you might make predictions for the coming year. And if you’re trying to be scientific, this is also the time when you review how well you did on last year’s predictions. I plan to use this post to do both, but as long-time readers might know, my system for prediction is somewhat different from most people’s. The predictions I made last year were permanent predictions. Or at least I was predicting as far out as I thought I had any chance of seeing clearly, which means my predictions aren’t quite permanent; I actually cut things off at 100 years.


I should also mention that last year’s prediction post ended up being one of my favorites, and not because of the predictions, but because it was one of the posts I returned to over and over again when I needed to explain my view of the world. This mostly happened during in-person discussions, but you may have noticed that it made an appearance not that long ago when I was discussing the difference between Bayesian Rationality and Talebian antifragility.


The theme of the previous post that I kept returning to is important enough that it deserves to be repeated here, though I won’t go into quite as much detail as I did in the previous posts.


To start with, recently there has been a big push to make people more accountable for their predictions, to do a better job of checking the accuracy of the predictions after the fact, and finally for predictors to apply confidence levels to the predictions they’ve made. The biggest advocate for this is Philip Tetlock of the Good Judgment Project (and also the author of Superforecasting). In my previous posts, I pointed out that while this effort is laudable, it has a tendency to miss Black Swans. Let me give you an example of what I mean, and instead of using an example of a negative Black Swan, as I so often do, this time I’ll use an example of a positive Black Swan.
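For the unfamiliar, the usual way these confidence-tagged predictions get graded is a Brier score: the mean squared gap between the probability you stated and what actually happened. A quick sketch (the forecasts here are invented examples, not anyone’s actual predictions):

```python
# Brier score: average of (stated probability - outcome)^2, where the
# outcome is 1 if the event happened and 0 if it didn't. Lower is
# better: always saying 50% scores 0.25, perfect foresight scores 0.0.

def brier(forecasts):
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# (probability assigned, what actually happened) -- invented examples
cautious = [(0.6, 1), (0.7, 1), (0.6, 0)]    # hedged calls
bold     = [(0.95, 1), (0.9, 1), (0.95, 0)]  # confident calls, one miss
print(brier(cautious), brier(bold))
```

Note that the one confident miss costs the bold forecaster more than all three hedged calls combined cost the cautious one, which is exactly the incentive that nudges accuracy-focused forecasters away from bold claims.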


The biggest financial story of the year has been the explosion of value in cryptocurrencies. But, for most people, even more salient than its importance as a story was the fact that you could have made an enormous amount of money by investing in something like Ethereum, which went up a staggering 10,000% in 2017. How did the Good Judgment Project do on predicting this? As far as I can tell they didn’t even try. I couldn’t find any cryptocurrency-related predictions on their site for 2017. (If you do find any please point them out to me.) This is not to say that no one made any cryptocurrency predictions. Scott Alexander of SlateStarCodex uses a similar prediction methodology, and he did make a cryptocurrency prediction. He predicted, with 60% confidence, that bitcoin would end the year higher than $1,000. At the time he made the prediction bitcoin was below $1,000, but just barely, so essentially what he was saying is that he was 60% sure that you wouldn’t lose money on bitcoin. Not exactly a ringing endorsement, and definitely not something that gave any hint as to what was actually going to happen in 2017. Which is not to fault Alexander in any way. But if you were using this sort of prediction as a guide for how 2017 would look, there were some glaring omissions.


On the other hand you had people like the Winklevoss twins, who were saying back in 2013 that Bitcoin was going to explode, and who spent from December 3rd of 2013 through February of 2017 looking kind of foolish as bitcoin’s value was basically flat. But now they’re the first bitcoin billionaires. They would have done terribly as entrants in the Good Judgment Project in 2014, 2015 and 2016, but in the end, even if their timing was off, they made the right bet.


The point of all this is that most of the things which really change the world (like bitcoin or the election of Trump) have a really low probability of happening. And in fact part of the reason why they’re so impactful is this low probability: the impact is greater because no one is prepared for it. But if, like the Good Judgment people, your primary goal is accuracy, then you’re not even going to make predictions about these events. Or if you do, they’re going to be very conservative, like Scott Alexander’s 60% confidence prediction that bitcoin would end the year above $1,000.


I said I would keep this short, and the idea of exposing yourself to positive Black Swans (like bitcoin) and protecting yourself from negative Black Swans has been covered in many of my posts. But as it relates to my prediction system vs. the Good Judgment system, the point I want to get at is that, despite what people might think, the best way to deal with an uncertain future is not to maximize the accuracy of your predictions. It’s to have a plan for even those things that appear really rare, if the impact of those rare events is great enough. Not because they’re likely but because the consequences are so great. In other words, sometimes it only takes being wrong once to undo all the times you were right. And sometimes it only takes being right once to undo all the times you were wrong. If the future proceeds how everyone expects, then having those same expectations is not going to get you very much, nor is much intelligence required. It’s when things happen that very few people expected that wisdom and preparation become important.
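The asymmetry can be put in numbers. Consider a bet that loses its stake in most years but occasionally pays off enormously (the 5% chance and the 100x payoff are made up to illustrate the shape of the argument, not a claim about any real asset):

```python
# A "black swan" bet: lose the whole stake 95% of the time, make 100x
# the other 5%. You look wrong in almost every individual year, yet
# the expected value per unit staked is strongly positive.

def expected_value(p_hit, payoff_mult, stake=1.0):
    return p_hit * payoff_mult * stake - (1 - p_hit) * stake

print(expected_value(p_hit=0.05, payoff_mult=100))  # 5.0 - 0.95 = 4.05
```

The accuracy-maximizing forecaster who says “95% sure it won’t happen” is right nineteen years in twenty; the bettor who takes the other side is wrong nineteen years in twenty and still comes out far ahead. Accuracy and payoff are simply different things.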


With that introduction out of the way, let’s see how my permanent predictions have held up over the previous year. So far I’m not wrong about anything, which is good, it would be pretty embarrassing to make a permanent prediction and be wrong less than a year later. Though I will admit that the winds of 2017 have not always blown in my favor.


Artificial Intelligence


1- General artificial intelligence, duplicating the abilities of an average human (or better), will never be developed.
This is one of the areas where there were a lot of exciting developments during 2017. The biggest being when AlphaGo beat the world’s number one Go player in May. There was further excitement in December when the same engine, now called AlphaZero, apparently mastered chess as well. Though there are a lot of quibbles with this result.


Last year I linked to a list and said that if a single AI could do everything on the list my prediction would have failed. Reviewing the list now, I think there are a few items where we’re really close, but in almost all cases every item needs a specialized system. That’s why AlphaZero making the transition from Go to Chess was such big news, but even that is still a long way away from any kind of general artificial intelligence.


2- A completely functional reconstruction of the brain will turn out to be impossible.
I haven’t come across anything one way or the other on this one, though it remains a big area of interest for scientists. But to the best of my knowledge, there were no major breakthroughs in 2017. I know Robin Hanson is a big advocate for this technology, so I promise that by next year I will at least read his book Age of Em.


3- Artificial consciousness will never be created.
I’ve seen no evidence that 2017 brought us any closer to understanding consciousness, let alone implementing it artificially. I would argue that we’re really far away from this.


One of the points I’d like to emphasize, as we review these predictions and sets of predictions, is to continually ask, “But what if Jeremiah is wrong?” The AI section is particularly interesting in this regard because it can go either way. Many people think that if we do manage to develop superintelligent AI, it will be the end of all our problems. Other people think the exact opposite: they agree with the first group that a superintelligent AI would be the end of all our problems, but only because it will end all of us. On the positive side, if I’m wrong then the consequences are almost entirely beneficial, and other than the hit to my ego, it will matter not at all. On the negative side, if I’m wrong then there are grave consequences, which is why, despite arguing it won’t happen, I am nevertheless completely in favor of efforts to ensure friendly AI, and have even donated money to that end.


I could see someone arguing that the antifragile thing to do is assume that we might create a malevolent AI, and that under the precautionary principle we should do everything in our power to stop AI research. I would also be fine with this, though I don’t think, given the way technology works, that this ban would be very effective. Also I understand that by arguing it’s impossible I might weaken attempts to control AI (oh, that I were that influential), making the negative Black Swan more likely, but in this case I think the bigger danger is putting all of our eggs into the salvation-through-AI basket, and neglecting other, more worthwhile endeavors.


Transhumanism


1- Immortality will never be achieved.
In the AI section most of the news was not good for me and my predictions. In this section the news out of 2017 is more mixed, especially on this topic. You may have heard that life expectancy in the US has actually gone down, two years in a row. One theory for achieving immortality is that we’ll soon reach a point where life expectancy is increasing by more than one year every year, which would promise a form of immortality, but if we can’t even keep the life expectancy gains we already have, then this gradual method of immortality appears unlikely. Which is not to say there couldn’t be some miraculous breakthrough that brings immortality in one fell swoop, but 2017 brought no sign of that either.
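The “more than one year every year” threshold works like an escape velocity: if each calendar year of aging is offset by at least one year of added remaining life expectancy, the finish line recedes as fast as you approach it. A toy model (the starting 40 years of remaining life and the gain rates are illustrative, not actuarial data):

```python
# Toy "actuarial escape velocity" model: every calendar year you spend
# one year of remaining life expectancy, but medical progress hands
# some of it back. Gains below 1 year/year only delay the end; gains
# of 1 or more mean it never arrives.

def years_until_death(remaining=40.0, gain=0.5, horizon=500):
    for year in range(1, horizon + 1):
        remaining = remaining - 1 + gain
        if remaining <= 0:
            return year
    return None  # still alive at the horizon: escape velocity

print(years_until_death(gain=0.5))  # 80: half-year gains stretch 40 years to 80
print(years_until_death(gain=1.0))  # None: the end never arrives
```

Which is why the direction of the trend matters more than its size: shrinking life expectancy moves us further from the threshold, not just more slowly toward it.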


2- We will never be able to upload our consciousness into a computer.
This is closely related to the last section, and most of what I said there applies here. Needless to say by any reasonable estimate we’re a long way away on this.


3- No one will ever successfully be returned from the dead using cryonics.
Another area where the technology is mostly still non-existent, meaning progress is difficult to measure. But one way to get an approximate sense of progress would be to look at the number of people who are in cryonic suspension, and when one of the leading cryonics organizations added only three people in 2017 (down from 6 in 2016 and 10 in 2015), I have to assume that no significant breakthroughs have been made over the last year.


To be honest this surprises me a little bit. If I were a transhumanist I think I’d definitely sign up for cryonics, but very few people have. And it’s not that people have signed up but just not died yet; the two largest organizations have a total of only 3,410 members of all types (this includes people who’ve submitted DNA, and pets!).


To turn once again to the question, but what if I’m wrong? Well, then we get immortal individuals, living their life in a virtual paradise. And this paradise is available to anyone who coughs up $28,000 for cryonic suspension. (That seems cheap. You can see why I’m surprised more atheists/transhumanists/adventurous souls haven’t taken advantage of it.)


But if I’m right, then we have a bleaker future. We’re going to have to either get our act together or hope that there’s a higher power out there. If we prepare for a future where we will never escape our problems by uploading our consciousness, and it turns out we’re wrong, then we’ve lost nothing; but if we assume that we can escape in this fashion, behave accordingly, and then it doesn’t happen, we could be in a lot of trouble. And I realize that this caution does not apply to very many people, but I expect that number will go up over time.


Outer Space


1- We will never have an extraterrestrial colony (Mars or Europa or the Moon) of greater than 35,000 people.
Elon Musk continues to push ahead, and in 2017 he released an updated version of his Mars plan. Apparently the big thing is that he’s made the rockets and the entire program more efficient. (There’s a good breakdown here.) It continues to be insanely ambitious, and while I hesitate to make short-term predictions, I predict he won’t hit his goal of landing people on Mars in 2024.


Outside of Musk, there is still NASA, which currently has a plan to make it to Mars sometime in the 2030s. Though my impression is that NASA has had a vague plan to go to Mars for as long as I’ve been alive, and it was always a decade or two in the future, just like now. In other news, Trump just signed a directive to go back to the Moon, but as of yet there are no details.


Thus, while we’re still a long way away from having any extraterrestrial colony of any size, if someone wanted to argue that we moved marginally closer in 2017, I wouldn’t fight them on it.


2- We will never establish a viable human colony outside the solar system.
Obviously the path to this runs through Mars, so everything I said above applies here; just multiply the difficulty by several orders of magnitude. Also, if you want a very in-depth look at this I would once again recommend reading Aurora by Kim Stanley Robinson. I know it was published in 2015, but I think it gives a pretty up-to-date view of the difficulties.


3- We will never make contact with an intelligent extraterrestrial species.
In some respects this is potentially the easiest of the three predictions for me to be wrong about. It’s possible that aliens will do all the work. And we did see the first (confirmed) object from outside the solar system in 2017, which, interestingly, was shaped very strangely, and therefore may have been artificial. I guess that could be counted as progress towards proving me wrong, but on the other hand, I think that every year that goes by without any contact adds to the probability that there will never be any contact.
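To see why silence is itself evidence, here’s a toy Bayesian sketch (the prior and the per-year contact chance are numbers I made up purely for illustration): if aliens who would eventually contact us exist, each silent year is a little surprising; if they don’t, silence is guaranteed. So every quiet year shifts the odds toward “no contact, ever.”

```python
# Toy Bayesian update (illustrative numbers only): suppose a prior
# probability `p` that contactable aliens exist, and that if they do,
# each year carries an independent chance `r` of contact. Silence is
# then more likely under "they don't exist", so each silent year
# lowers the posterior probability that they do.
def posterior_aliens_exist(p=0.5, r=0.01, silent_years=0):
    like_exist = (1 - r) ** silent_years  # chance of total silence if they exist
    like_none = 1.0                       # silence is certain if they don't
    return p * like_exist / (p * like_exist + (1 - p) * like_none)

for n in (0, 50, 100, 500):
    print(n, round(posterior_aliens_exist(silent_years=n), 3))
```

With these made-up numbers the posterior starts at 50% and drifts steadily downward the longer the silence lasts, which is the formal version of the intuition in the paragraph above.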


If you were paying attention, you’ll have noticed I re-ordered 1 and 2 from their original positions, since there’s no reasonable scenario under which extrasolar colonies aren’t strictly dependent on extraterrestrial colonies. Beyond that, we once again ask: but what if I’m wrong? And, once again, I think there is very little downside if I’m wrong, at least on the first two. (Being wrong about the third prediction creates a future I don’t think any of us can imagine.) And as with many of these predictions I hope I am wrong, but when it comes to leaving the Earth things are a little bit different.


I think there are a significant number of people, especially among the more educated, who assume, on some level, that however bad it gets here, we will eventually be able to leave the Earth behind. And an even greater number of people who assume that progress in general will fix our problems eventually, even if they haven’t thought things all the way through to the necessity of eventually leaving the Earth. And the point I’ve made in several posts is that if leaving Earth is the eye of the needle, if that is going to eventually be the only way forward, then we probably need to take it a lot more seriously, because it’s going to be significantly more difficult than most people think. And as I pointed out in the last post, it’s possible, even likely, that taking it seriously enough to make it happen will involve some trade-offs.


War (I hope I’m wrong about all of these)


1- Two or more nukes will be exploded in anger within 30 days of one another.
The excitement over North Korea during 2017 alone has unfortunately brought this prediction much closer to being fulfilled.


2- There will be a war with more deaths than World War II (in absolute terms, not as a percentage of population.)
Another thing I desperately hope is not true. And while the news on the first point is bad, here I haven’t seen much sign that any of the major powers is interested in all-out war. Which is good.


3- The number of nations with nuclear weapons will never be less than it is right now.
When introducing this prediction last year I mentioned that the current number is nine: the US, Russia, Britain, France, China, North Korea, India, Pakistan and Israel. I haven’t seen any hint that any of the current countries are about to give up their weapons, but I have seen talk about the idea of Japan and South Korea possibly acquiring them in response to North Korea. I think it would take a lot for Japan to decide to develop nukes, but the difficulty would be entirely emotional and psychological; the technical side of it would be easy for them.


More than any other section, this is the one where I hope I’m wrong. I even include it in the heading. I very much hope Steven Pinker is right and that the world is getting less and less violent, and I am aware of the evidence for this. But, not to beat a dead horse, it bears repeating that the disappointment of expecting a war and not having one is far better than the disappointment of not expecting a war and getting one.


Miscellaneous


1- There will be a natural disaster somewhere in the world that kills at least a million people
Given the random nature of disasters, I don’t think 2017 did much to change the probabilities here, though for those who believe that global warming will cause more hurricanes and greater climate instability (I think the jury is still out), every year we don’t do something significant about carbon emissions brings us closer to that sort of climate disaster.


2- The US government’s debt will eventually be the source of a gigantic global meltdown.
It wasn’t that long ago that I covered this at some length, so I won’t add much except to say that the deficit was $666 billion in 2017 (I’ll try not to read too much into that number), and the debt grew by only $600 billion, which was slower than the Obama-era average of over a trillion.


3- Five or more of the current OECD countries will cease to exist in their current form.
2017 brought some excitement here with the Catalonian independence vote, and of course there was talk of a Calexit after the election of Trump. This is another prediction I think is looking pretty good after 2017.


What all of these predictions have in common is that they’re Black Swans: improbable events that would have a major impact. Most of the items on the list are things people are insufficiently protected against, while others (primarily the Outer Space section) are things we’re doing a bad job of capitalizing on. Looking forward, I don’t think any of my predictions are going to be proven true or false in 2018, and probably not in 2019, but I do believe that sometime between now and January 1st, 2117, all of them will end up being accurate. And it doesn’t matter if it only happens that one year; if humanity has made the wrong call, in most cases that one time is all it will take.





One final prediction: I’ll continue to try to come up with vaguely humorous appeals for donations, and most of them will largely fail. Despite that, consider donating anyway.