If you prefer to listen rather than read:
Or download the MP3
One of the central themes of this blog has been that the modern world is faced with two possible outcomes: societal collapse or technological singularity. For those of you just joining us, who may not know, a technological singularity is some advancement which completely remakes the world. It’s most often used in reference to the creation of an artificial intelligence smarter than the smartest human, but it could also be something like discovering immortality. This is the possible future where technology (hopefully) makes everything alright. But it’s not the only possibility: we’re faced with salvation on one hand and disaster on the other.
Which will it be? Will we be saved by a technological singularity or wiped out by a nuclear war? (Perhaps you will argue that there’s no reason why it couldn’t be both. Or maybe instead you prefer to argue that it will be neither. I don’t think both or neither are realistic possibilities, though my reasoning for that conclusion will have to wait for a future post.)
Once again, in my ongoing effort to catch up on past promises, this is that future post. It’s finally time to fulfill the commitment I made at the very beginning, and answer the question, why can’t it be both or neither?
Let’s start with the possibility that we might experience both at the same time. And right off the bat we have to decide what that would even look like. I think the first thing that pops into my head is the movie Elysium, Neill Blomkamp’s follow-up to District 9. In this movie you have a collapsed civilization on the planet’s surface and a civilization in orbit that has experienced, at a minimum, a singularity in terms of space habitation and health (they have machines that can cure all diseases). At first glance this appears to meet the standard of both a collapse and a singularity happening at the same time, and coexisting. That said, it is fiction. And while I don’t think that should immediately render it useless, it is a big strike against it.
As you may recall I wrote previously about people mistaking fiction for history. But for the moment let’s assume that this exact situation could happen. That one possibility for the future is a situation identical to the one in the movie. Even here we have to decide what our core values are before we can definitively declare that this is a situation where both things have occurred. Or more specifically we have to define our terms.
Most people assume that a singularity, when it comes, will impact everyone. I’ve often said that the internet is an example of a “soft” singularity, and indeed one of its defining characteristics is that it has impacted the life of nearly everyone on the planet. Even though less than half of the world’s population uses the internet, I think it’s safe to assume that even non-users have experienced its effects. Also, since the number of internet users continues to rapidly increase, it could be argued that it’s a singularity which is still spreading. Whereas in Elysium (and other dystopias) there is no spread. Things are static or getting worse, and for whatever reason the singularity is denied to the vast majority of people. (And if I understand the ending of the movie correctly, it’s being denied just out of spite.) Which is to say that if you think a singularity has to have universal impact, Elysium is not a singularity.
If, on the other hand, you view collapse as a condition where technological progress stops, then Elysium is not a story of collapse. Technology has continued to advance. Humanity has left the Earth, and there appears to be nothing in particular stopping them from going even farther. This is where core values really come into play.
I’ve discussed the idea of core values previously, and when I did, I mentioned a friend of mine whose core value is for intelligence to escape this gravity well. Elysium either qualifies, or is well on its way to qualifying, for this success condition. Which means that if you’re my friend, Elysium isn’t a story of collapse, it’s a story of triumph.
You may feel that I’ve been cheating, and that what I’m really saying is that collapse and singularity are fundamentally contradictory terms, which is why you can’t have both. I will admit there’s a certain amount of truth to that, but as you can see, a lot depends on what your “win” condition is. As another example, if you’re on the opposite side of the fence and your core values incline you to hope for a deindustrialized, back-to-nature future, then another person’s collapse could be your win condition.
You may wonder why I’m harping on a subject of such limited utility, and why I’m using a mediocre movie to illustrate my point. I imagine that, before we even began, all of you were already on board with the idea that you can’t have both a technological singularity and a societal collapse. I imagine this doesn’t merely apply to readers of this blog, but that most people agree you can’t have both, despite a talented performance from Matt Damon which attempts to convince them otherwise. But in spite of the obviousness of this conclusion, I still think there’s some fuzzy thinking on the subject.
Allow me to explain. If, as I asserted in my last post, all societies collapse, and if the only hope we have for avoiding collapse is some sort of technological singularity, then we are, as I have said from the very beginning, in a race between the two. Of course, structuring things as a race completely leaves out any possibility of salvation through religion, but this post is primarily directed at people who discount that possibility. If you are one of those people and you agree that it’s a race, then you should either be working on some potential singularity, or be spending all of your efforts on reducing the fragility of society, so that someone else has as long as possible to stumble upon the singularity, whatever that ends up being.
I admit that the group I just described isn’t large, but it may be larger than you think. As evidence I offer up some of the recent articles on Silicon Valley preppers. Recall that we are looking for people who believe a collapse is possible but don’t otherwise behave as if we’re in a race in which only one outcome can prevail. In other words, if, like these people, you believe a collapse could happen, you definitely shouldn’t be working on ways to make it more likely by increasing inequality and fomenting division and anger, which seems to have been the primary occupation of most of these wealthy preppers. On top of this, they appear to be preparing for something very similar to the scenario portrayed in Elysium.
Tell me if this description doesn’t come pretty close to the mark.
I was greeted by Larry Hall, the C.E.O. of the Survival Condo Project, a fifteen-story luxury apartment complex built in an underground Atlas missile silo....“It’s true relaxation for the ultra-wealthy,” he said. “They can come out here, they know there are armed guards outside. The kids can run around.” ...In 2008, he paid three hundred thousand dollars for the silo and finished construction in December, 2012, at a cost of nearly twenty million dollars. He created twelve private apartments: full-floor units were advertised at three million dollars; a half-floor was half the price. He has sold every unit, except one for himself, he said.... In a crisis, his SWAT-team-style trucks (“the Pit-Bull VX, armored up to fifty-calibre”) will pick up any owner within four hundred miles. Residents with private planes can land in Salina, about thirty miles away.
A remote, guarded luxury enclave where they can wait out the collapse of the planet? This seems pretty on the money, and don’t even get me started on Peter Thiel’s island.
Far be it from me to criticize someone for being prepared for the worst. Though in this particular case, I’m not sure that fleeing to a rich enclave will be as good a tactic as they think. John Michael Greer, whom I quote frequently, is fond of pointing out that every time some treasure seeker finds buried gold coins, it’s evidence of a rich prepper from history whose plans failed. Where my criticism rests is on the fact that they seem to spend hardly any resources on decreasing the fragility of the society we already have.
Reading these prepper stories you find examples of people from Reddit and Twitch and Facebook. What do any of these endeavors do to make society less fragile? At best they’re neutral, but an argument could definitely be made that all three of these websites contribute to an increase in divisiveness, and by extension actually increase the risk of collapse. And, as I already alluded to, beyond their endeavors, these people are emblematic of the sort of inequality that appears to be at the heart of much of the current tension.
As a final point: if these people don’t believe that a societal collapse and a technological singularity are mutually exclusive, what do they imagine the world will look like when they emerge from their bunkers? I see lots of evidence of how they’re going to keep themselves alive, but how do they plan to keep technology, and more importantly infrastructure, alive?
A few years ago I read this fascinating book about the collapse of Rome. From what I gathered, it has become fashionable to de-emphasize the Western Roman Empire as an entity, an entity which ended in 476 when the final emperor was deposed. Instead, these days some people like to view what came after 476 as very similar to what came before, only with a different group of people in charge and very little else changing. This book was written to refute that idea, and to re-emphasize the catastrophic nature of the end of Rome. One of the more interesting arguments against the idea of a smooth transition was the quality of pottery after the fall. Before the fall you had high-quality pottery, made in a few locations, which could be found all over the empire. Afterwards you have low-quality, locally made pottery that was lightly fired and therefore especially fragile: a huge difference in quality.
It should go without saying that a future collapse could have very little in common with the collapse of Rome. But if the former Romans couldn’t even maintain the technology for making quality pottery, what makes us think that we’ll be able to preserve multi-billion-dollar microchip fabrication plants, or the electrical grid, or even anything made of concrete?
The point is, if there is a collapse, I don’t think it’s going to be anything like the scenario Silicon Valley Preppers have in their head.
And now, for the other half of the post, we finally turn to the more interesting scenario: that we end up with neither. That somehow we avoid the fate of all previous civilizations and don’t collapse, but also, despite having all the time in the world, never manage to create some sort of singularity either.
At first glance I would argue that the “neither” scenario is even more unlikely than the “both” scenario, but this may put me in the minority, which is, I suppose, understandable. People have a hard time imagining any future that isn’t just an extension of the present they already inhabit. People may claim they can imagine a post-apocalyptic future, but really they’re just replaying scenes from The Road, or Terminator 2 (returning to theaters in 3D this summer!). As an example, take anyone living in Europe in 1906: was there a single person who could have imagined what the next 40 years would bring? The two World Wars? The collapse of so many governments? The atomic bomb? And lest you think I’m only focused on the negative, take any American living in 1976. Could any of them have imagined the next 40 years? Particularly in the realm of electronics and the internet. Which is just to say, as I’ve said so often, that predicting the future is hard. People are far more likely to imagine a future very similar to the present, which means no collapses and no singularities.
It’s not merely that they dismiss potential singularities because they don’t fit with how they imagine the future; it’s that they aren’t even aware of the possibility of a technological singularity. (This is particularly true for people living in less developed countries.) Even if they have heard of it, there’s a good chance they’ll dismiss it as a strange technological religion, complete with a prophet, a rapture, and a chosen people. This attitude is not only found among people with no knowledge of AI; some AI researchers are among its harshest critics. (My own opinion is more nuanced.)
All of this is to say that many people who opt for neither have no concept of a technological singularity, or of what it might look like, or of what it might do to jobs. Though, to adapt my favorite apocryphal quote from Trotsky: you may not be interested in job automation, but job automation is interested in you.
The lack of information and the present-day bias in thinking apply equally well to the other end of the spectrum, the idea of society collapsing, but on top of that you have to add in the optimism bias most humans have. This is the difference between the 1906 Europeans and the 1976 Americans. The former would not be willing to spend any time considering what was actually going to happen, even if you could describe it to them in exact detail, while the latter would happily spend as much time as you could spare listening to you talk about the future.
In other words, most people default to the assumption that neither will happen, not because they have carefully weighed both options, but because they have more pressing things to think about.
As I said at the start, I don’t think it can be neither, and I would put the probability of that well below the probability of an eventual singularity. But that is not to say that I think a singularity is very likely either (if you’ve been reading this blog for any length of time you know that I’m essentially on “Team Collapse”).
My doubts exist in spite of the fact that I know quite a bit about the expectations and the current state of the technology. All of the possible singularities I’ve encountered have significant problems, and this is setting aside my previously mentioned religious objection to most of them. To go through a few of the big ones and give a brief overview:
- Artificial Intelligence: We obviously already have some reasonably good artificial intelligence, but for it to be a singularity it would have to be generalized, self-improving, smarter than we are, and conscious. I think the last of those is the hardest; even if it turns out that the materialists are totally right (and a lot of very smart, non-religious people think they aren’t), we’re not even close to solving the problem.
- Brain uploading: I talked about this in the post I did about Robin Hanson and the MTA conference, but in essence all of the objections about consciousness are still present here. And, as I mentioned there, if we can’t even accurately model a creature with 302 neurons (the nematode C. elegans), how do we ever model or replicate a brain with over 100 billion, a gap of more than eight orders of magnitude?
- Fusion Power: This would be a big deal, big enough to count as a singularity, but not the game changer that some of the other things would be. Also, as I pointed out in a previous post, at a certain point power isn’t the problem if we’re going to keep growing, heat is (see the back-of-the-envelope sketch after this list).
- Extraterrestrial colonies: Perhaps the most realistic of the singularities, at least in the short term, but like fusion not as much of a game changer as people would hope. Refer to my previous post for a full breakdown of why this is harder than people think, but in short: unless we can find some place that’s livable and turns a net profit, long-term extraterrestrial colonies are unsustainable.
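To make the heat point concrete, here’s the promised back-of-the-envelope sketch. The specific figures (roughly 18 terawatts of current world consumption, roughly 120,000 terawatts of sunlight absorbed by the Earth, 2.3% annual growth) are rough assumptions on my part, not anything from the previous post, but the conclusion is robust to reasonable changes in any of them:

```python
import math

# Rough assumptions, not authoritative figures:
current_power_tw = 18.0        # assumed current world power consumption, in terawatts
solar_absorbed_tw = 120_000.0  # assumed solar power absorbed by the Earth
growth_rate = 0.023            # assumed annual growth in energy use (2.3%)

# With compound growth, consumption after t years is current * (1 + r)^t.
# Solve for the year when our waste heat alone matches total absorbed sunlight.
years = math.log(solar_absorbed_tw / current_power_tw) / math.log(1 + growth_rate)
print(f"Years until energy use matches absorbed sunlight: {years:.0f}")  # ~387
```

Under these assumptions the answer is about 400 years. Double or halve the starting figures and the answer barely moves, because the logarithm flattens them; change the growth rate and you only shift the deadline by a century or two. Either way, continued exponential growth in energy use eventually cooks the planet, no matter how clean the energy source.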
In other words, while most people reject the idea of a singularity because they’re not familiar with the concept, even if they were familiar with it, they might, very reasonably, choose to reject it all the same.
You may think at this point that I’ve painted myself into a corner. For those keeping score at home, I’ve argued against both, I’ve argued against neither, and I’ve argued against a singularity all by itself. (I think they call that a naked singularity. No? That’s something else?) Which leaves me with just collapse. If we don’t collapse I’m wrong, and all the people who can neither understand the singularity nor imagine a catastrophe will be vindicated. In other words, I’ve left myself in the position of having to show that civilization is doomed.
I’d like to think I went a long way towards that in my last post, but this time I’d like to approach it from another angle. The previous post pointed out the many ways in which our current civilization is similar to other civilizations that have collapsed. And while those attributes are something to keep an eye on, even if we were doing great, even if there were no comparisons to be drawn between our civilization and previous civilizations in the years before their collapse, there would still be a whole host of external black swans, any one of which would be catastrophic.
As we close out the post, let’s examine a half dozen potential catastrophes, every one of which has to be avoided in the coming years:
1- Global Nuclear War: Whether it’s Russia vs. the US, or China’s peaceful rise proving impossible, or some new actor entirely.
2- Environmental Collapse: Which could be runaway global warming, or a human-caused mass extinction, or overpopulation.
3- Energy Issues: Can alternative energy replace carbon based energy? Will the oil run out? Is our energy use going to continue to grow exponentially?
4- Financial Collapse: I previously mentioned the modern world’s high levels of connectivity, which mean that one financial black swan can bring down the entire system, as almost happened in 2008. (See the toy model after this list.)
5- Natural disasters: These include everything from super volcanoes, to giant solar storms, to impact by a comet.
6- Plagues: This could be something similar to the Spanish Flu pandemic, or it could be something completely artificial, an act of bioterrorism for example.
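As for the financial contagion point above, here’s a minimal toy model, entirely my own construction rather than anything from the financial literature: a network of banks, each exposed to a handful of random counterparties, where a bank fails once too large a fraction of its counterparties has failed.

```python
import random

def cascade_size(n_banks=1000, links_per_bank=5, loss_tolerance=0.4, seed=42):
    """Fail one bank, then count how far the failure spreads."""
    rng = random.Random(seed)
    # Each bank holds claims on a few randomly chosen counterparties.
    exposures = [rng.sample([j for j in range(n_banks) if j != i], links_per_bank)
                 for i in range(n_banks)]
    failed = {0}  # the initial "black swan"
    spreading = True
    while spreading:
        spreading = False
        for bank in range(n_banks):
            if bank in failed:
                continue
            losses = sum(1 for c in exposures[bank] if c in failed)
            # A bank fails once the failed fraction of its counterparties
            # exceeds its loss tolerance.
            if losses / links_per_bank > loss_tolerance:
                failed.add(bank)
                spreading = True
    return len(failed)

for tolerance in (0.4, 0.19):
    print(f"loss tolerance {tolerance}: "
          f"{cascade_size(loss_tolerance=tolerance)} of 1000 banks fail")
```

With a comfortable loss tolerance the initial failure stays contained at a single bank; thin the buffers just past a threshold and the very same single failure takes down essentially the whole network. That discontinuity, where connectivity turns one bad day into a systemic event, is the 2008 worry in miniature.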
Of course this list is by no means exhaustive. Also remember that we don’t merely have to avoid these catastrophes for the next few decades; we have to avoid them forever, particularly if there’s no singularity on the horizon.
Where is the world headed? What should we do? I know I have expressed doubts about the transhumanists, and about people like Elon Musk, but at least these individuals are thinking about the future. Most people don’t. They assume tomorrow will be pretty much like today, and that their kids will have lives very similar to theirs. Maybe that’s so, and maybe it’s not, but if neither the singularity nor the collapse happens during the lives of your children, or of their children, it will happen during the lives of someone’s children. And it won’t be both, and it won’t be neither. I hope it’s some kind of wonderful singularity, but we should prepare for it to be a devastating catastrophe.
I repeat what I’ve said from the very beginning. We’re in a race between societal collapse and a technological singularity. And I think collapse is in the lead.
If you’re interested in ways to prevent collapse you should consider donating. It won’t stop the collapse of civilization, but it might stop the collapse of the blog.