
Saturday, February 17, 2018

The Terrible Power of Tiny Trends

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:



Or download the MP3



Einstein is said to have remarked that “compound interest is the most powerful force in the universe”, or maybe he said it was “mankind’s greatest invention”. Or, more likely, he said no such thing, and the quote ended up being attributed to him later, as is the case with so many of his supposed quotes (nor is Einstein the only one this happens to). Regardless, the quote persists because it has an element of truth to it. Compound interest acts as something of a juggernaut, slowly gathering momentum until it’s essentially unstoppable. All the way back in 1769 an Anglican minister and actuarial mathematician named Richard Price gave this example of its power:


A shilling put out at 6% compound interest at our Saviour’s birth would . . . have increased to a greater sum than the whole solar system could hold, supposing it a sphere equal in diameter to the diameter of Saturn’s orbit.
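
As a quick sanity check on Price’s arithmetic, here’s a minimal Python sketch. The coin volume and orbit size are my own rough assumptions, not Price’s figures:

    import math

    shillings = 1.06 ** 1769          # 6% compounded from year 0 to 1769: ~5.8e44
    coin_volume = 5.4e-7              # m^3, assuming a ~5.7 g silver shilling
    orbit_radius = 1.43e12            # m, roughly the radius of Saturn's orbit
    sphere_volume = (4 / 3) * math.pi * orbit_radius ** 3   # ~1.2e37 m^3

    print(f"{shillings:.1e} shillings")             # ~5.8e+44
    print(shillings * coin_volume > sphere_volume)  # True: the coins overflow the sphere

Even with generous rounding, the claim checks out: the pile of silver outgrows the sphere by more than an order of magnitude.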


But, of course, no one did invest a shilling at 6% at the time of Jesus’ birth. And the reasons why are probably obvious, but they bear reexamination despite their obviousness.


Perhaps the most obvious reason no one did it is that there are no banks which have survived from that time till this. And after the recent financial crisis, it should be an open question what it even means for a bank to “survive”. There are some very old banks in England, but it’s pretty clear that none of them would have survived the last 300 years (or even the last 30) without government help. And not only did no banks survive from 0 AD until now, no country has survived either. (As you may recall, I argued in a previous post that very few countries have survived intact for more than about 100 years.) If it had been possible to make such an investment, another question is who the beneficiary would have been. The Japanese Imperial Family has apparently been around that long, but I’m not aware of any others. Or perhaps there’s some organization that, had they been far-sighted enough, could now own a sphere of money as big as the Solar System? There are a few Christian Churches who, in theory, trace their organization all the way back to the death of Jesus (which I suppose is close enough), and perhaps if any banks and countries had survived alongside them, they could have made that investment.


However, even if the Orthodox Church of Jerusalem or Emperor Suinin had wanted to make such an investment, and even if there had been a bank around to accept it and hold on to it all the way down to the present day (paying 6% interest the entire time, though even 2% would probably still get you the Earth and all of its productive assets), there is still one final, insurmountable hurdle. They would have had to figure out some way to ensure that no one, in all the time between 0 and 2018 AD, could ever raid the “piggy bank”. That everyone from bandits to the government (or are those the same thing?) would have left that giant pile of money sitting there, untouched, for over two thousand years. And of all the hypotheticals we’ve considered, that is the least realistic of them all.


In any event, regardless of what Einstein did or didn’t say, it’s evident that the power of compound interest is checked by many things: the stability of the banking system and of nations, impatience, greed, and the longevity of organizations. And this is probably a good thing; even if the Japanese Imperial Family would do a great job of running the world, I think the process of selling it to them would be hugely disruptive. In fact I would swear that I heard a podcast a couple of years ago (Radiolab maybe?) that claimed there was a time when people were so worried about bequests that lasted for hundreds of years that moves were made to limit them, but for the life of me I can’t find it. In any event it’s not important. The important thing I want to emphasize is, first, the power of even a tiny effect if that effect compounds (and of even non-compounding effects if they last long enough), and second, that when things get derailed it’s often because of instability rather than the reverse.


Compound interest draws a lot of attention, not only because it provides exponential growth, but also because it’s a simple system which is easy to track. Anyone can sit down, put together a spreadsheet, and see exactly how big the principal gets, and if they like, they can adjust the interest rate and see that earning 6% rather than 5% is far better than the nominal 20% improvement in the rate would suggest. The question I want to examine is whether there are other things which act like compound interest in their potential for growth (and by extension impact) but which we have missed because they’re hard to measure. Is there anything on which we’re accumulating a sort of societal compound interest? And if so, which things are accumulating positive interest and which are accumulating negative interest?
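
To make that concrete, here’s the kind of spreadsheet math I have in mind, as a minimal Python sketch (the principal and time horizon are arbitrary):

    principal = 1_000
    years = 50

    at_5 = principal * 1.05 ** years   # ~11,467
    at_6 = principal * 1.06 ** years   # ~18,420

    # The rate is nominally 20% higher, but after 50 years of compounding
    # the final balance is about 61% higher.
    print(round(at_5), round(at_6))
    print(f"advantage: {at_6 / at_5 - 1:.0%}")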


It may be that there are only a few things like that, and that even if we don’t have a firm handle on them, people are still aware of them, and working to solve them. But I’m also interested in things which don’t compound or do so very slowly, but which might be disastrous if they continue for long enough. These things are even harder to see, but more important to discuss because of that.


Let’s start by looking at societal trends that most closely resemble compound interest. Here, the first one most people think of is population. I already spent a post talking about this, so I don’t intend to spend much additional time on it, but you can immediately see how the fact that population compounds causes problems on both ends of the spectrum. First you have a potential Malthusian Catastrophe, which people have been discussing since, well, the time of Malthus. More recently, particularly in the more developed world, people have started noticing potential problems on the other side of things: having too low a birth rate, which compounds in the other direction.


As I said in my previous post, I think both directions have the potential to be catastrophic, and to again quote from one of the all-time great movies of the past century, Tommy Boy: “In auto parts, you’re either growing or you’re dying. There ain’t no third direction.” As it is with auto parts, so it is with humanity. I don’t think there’s a very credible, non-dystopian scenario where we have a precisely stable population.


In an attempt to not be entirely pessimistic, let’s now turn to look at something which compounds in a good way: knowledge. In fact not only does knowledge compound, but the rate at which it compounds is going up. It’s as if you had an investment that started out paying 1% per year, but quickly went to 10% and then 100%. I am often critical of technology, particularly when it’s implemented naively, but this is one thing it has done quite well.


At this point, I might toss out a statistic on how fast knowledge is currently doubling, and I will, but it’s going to get a little meta. If you do a Google search on the rate of knowledge doubling you’ll get one of those info boxes, and it will say that knowledge is currently doubling every 12 months and soon it will double every 12 hours. This box links to an article written in 2013. The article has no reference for the current 12 month doubling rate (in fact it actually says 13 months) but does link to an IBM paper (on an unrelated subject) for the 12 hour doubling rate. When you look at that paper it actually says 11 hours (and by the way, I can forgive both roundings, I appreciate the elegance of going from 12 months to 12 hours) and goes on to say that the 11 hour number is set to happen “four years from now”. When was the paper written? 2006! Meaning back in 2013, knowledge was presumably already doubling every 11 hours, and possibly even quicker than that. And who knows what the doubling rate is now. I looked for a more current figure, but all of the top results reference the same 2013 original from the infobox, and most of them repeat the claim that “soon information will double every 12 hours”, unaware that it was a prediction from a 2006 paper for the rate of doubling in 2010.
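
For a sense of how extreme those numbers are if taken literally, here’s a minimal sketch using the two quoted doubling rates at face value:

    # Annual growth factor implied by each quoted doubling rate.
    factor_12_month_doubling = 2 ** 1          # one doubling per year: 2x
    factor_12_hour_doubling = 2 ** (365 * 2)   # 730 doublings per year

    print(factor_12_month_doubling)            # 2
    print(f"{factor_12_hour_doubling:.1e}")    # ~5.6e+219, a physically absurd factor

Which is one more reason to treat the infobox figure with suspicion.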


In any event, I assume knowledge has a certain rate of doubling, and that the rate is increasing. And when people are optimistic about the future, what you find when you peel away the layers is that they are mostly relying on this positive compounding overwhelming any negative compounding, or indeed any other negative trends. To put it simply, they feel that the future is going to be awesome because we’re getting smarter. I am definitely sympathetic to this point of view, and it has a lot going for it, but I’m not sure it’s quite the unalloyed good people think it is. First, however fast knowledge is increasing, the human brain isn’t getting any more powerful. And I’m well aware that this leads directly to transhumanism, but as I just pointed out with some of the questions in the last post, replacing ourselves with artificial intelligence in order to keep up with the growth of knowledge is something which could end very badly. Of more immediate concern, the pressure for scientific knowledge to increase has led to a massive system of “publish or perish” which has in turn created the replication crisis. All of which is to say, as I so often do: I hope the optimists are right, but I think the challenges are vastly more significant than they think.


The other famous compounding trend that’s gotten a lot of attention over the last few decades is Moore’s Law. Of course any mention of Moore’s Law has to be accompanied by the obligatory mention of concerns that it’s running out of steam. The next step in this discussion is for someone to come along and mention quantum computing and how it will revive Moore’s Law. And of course all of this, once again, leads directly into transhumanism, AI and the aforementioned awesome future, which I’ve probably already spent enough time on.


As the circle widens, evidence of compounding or exponential effects becomes harder to find, and we start to move into the realm of long term trends which may or may not have compounding effects. Despite this, even if something doesn’t grow or shrink exponentially, if it grows or shrinks, period, for long enough, problems are inevitably going to arise.


I have already discussed lots of things which fit these criteria, and so I’ll mostly be reviewing trends we’ve touched on previously. To start with, there’s, naturally, the national debt, which I discussed a few months ago. I think a case could be made for the debt growing exponentially; certainly if you look at it starting in 2000 (or even 1980) that’s what the curve looks like, even as a percentage of GDP. However if you widen the view and go all the way back to the country’s founding you’ll see lots of debt peaks which later dropped to more manageable levels. It should be said that on all previous occasions the peak was due to war, and the war ended. This time there is no war (or if there is, it shows no signs of ending). For this reason and others I’m on record as saying that I expect the debt to essentially follow the track it’s already on rather than dropping back as it always has before. As to what that might mean, I recommend reading my previous post.


Another area where exponential growth is often mentioned is social media. And as I’ve pointed out a couple of times, this isn’t necessarily a good thing. The more persnickety among you may argue that growth in social media is just a subset of the growth in knowledge (or even of Moore’s Law) which we’ve already covered, but while most people don’t directly interact with “knowledge”, they are intimately involved with Facebook. Also, I don’t consider this to be something that truly compounds. For one, I suspect that Facebook’s growth will be more of an S-curve than an unbounded arc towards infinity; additionally, and obviously, there aren’t infinite people. But none of this matters, because Facebook and similar social media sites are interesting for another reason. If you grant the premise, which I and an increasing number of others have made, that Facebook is actually doing more harm than good, then I think it provides a great example of something that’s not intrinsically bad, but only becomes so after significant growth.


In other words, if we look at the trends associated with Facebook that actually concern people, we can see how they all started out benignly, and only began causing problems once the curve/userbase reached a certain level. Let’s just look at a few examples of trends within social media.


Coordination: Looking back to my Moloch post, I mentioned that the best way to get around “races to the bottom” is to coordinate. Unfortunately Facebook has taken coordination to a level where, instead of bringing people together, it’s allowed them to splinter into incredibly narrow ideological niches.


Speech: As I pointed out in the last post where I talked about social media, we’re discovering that excessive speech can be used to censor almost as effectively as actual censorship.


User Base: Having a massive user base is what makes Facebook appealing, but it also provides a single point of failure where one bad decision can have a gigantic impact. And I’m not even talking about the whole Russia/Facebook controversy, I’m talking about how a tiny change to one of the algorithms is national news.


One trend which I haven’t spent a lot of time discussing is the rise in inequality. It’s not that I haven’t been aware of the discussion or the underlying problem, I just wasn’t sure that I had much to say about it that was unique or interesting. Still, it’s a problem I’m interested in, so just recently I started reading The Great Leveler. I expect I’ll eventually devote an entire post to the book, but for now it has something interesting to say which ties directly into the current topic. From the book jacket:


Ever since humans began to farm, herd livestock, and pass on their assets to future generations, economic inequality has been a defining feature of civilization. Over thousands of years, only violent events have significantly lessened inequality. The “Four Horsemen” of leveling--mass-mobilization warfare, transformative revolutions, state collapse and catastrophic plagues--have repeatedly destroyed the fortunes of the rich. Scheidel identifies and examines these processes, from the crises of the earliest civilizations to the cataclysmic world wars and communist revolutions of the twentieth century. Today, the violence that reduced inequality in the past seems to have diminished, and that is a good thing. But it casts serious doubt on the prospects for a more equal future.


Here we see two interesting ideas associated with rising trends in general. First, they often have unintended consequences. (A topic which can never get too much attention.) In this case it’s the unintended consequence of reducing violence. And as great as that reduction is, Scheidel argues that, much like a forest fire, you need violence every so often to clear out the accumulated deadwood. Which is not to say that you couldn’t have a post-violence society which was more equal, but it would be one in which everyone is objectively worse off. Thus, even knowing that, sans violence, inequality is just going to continue to rise, leveling may still not be a trade we’re willing to make. And so inequality just keeps growing. But this takes us to the second point illustrated by the example: things can’t grow forever, and as we saw in the example of trying to earn compound interest since the birth of Jesus, when they do break, it’s generally through instability, of exactly the sort Scheidel is talking about. If we avoid mass-mobilization warfare, does that just mean we’ll eventually have a transformative revolution? And if we avoid both, will the state just collapse? Which wraps this point in with the first one. It may be that you can only avoid a forest fire for so long, and that eventually one comes whether you want it or not. Recall that it’s not just the rich getting richer, most everyone else is getting poorer. Are you sure that’s a trend that can continue forever without ever crashing under its own instability?


In other words all of this is to say that no matter how innocuous or small a negative trend is, if it continues for long enough something has to break. Fortunately humanity has gotten pretty good at making course corrections. Still there are some recent trends where our attempted course correction has so far been ineffective. And other cases where the course we should take is clear, but difficult. And finally there are some cases where I’m not sure what sort of course correction we should make, and even if I was I’m doubtful we’d ever take it. In closing I’d like to provide one final example that combines a little bit of all of those issues.


The example I’m thinking of is the recent increase in deaths of despair. Here, most of the attention has been focused on people overdosing on opioids, and my sense is that people feel it’s a problem we’ve only recently become aware of, and that we have corrected our course, it just hasn’t quite taken effect yet. I certainly hope that is true, but even if it is, there’s more going on than just opioid addiction. First, it’s not as if opium was only just discovered, or heroin only recently created (Bayer started marketing it over the counter in 1898). The epidemic of overdosing is largely unique to our time and place. Second, even if you strip away the deaths of despair due to opioids, you still have an increase in suicides and deaths from alcoholism. One writer puts it this way:


Opioids are like guns handed out in a suicide ward; they have certainly made the total epidemic much worse, but they are not the cause of the underlying depression.  


(As long as we’re on the subject, did you see the story about the fentanyl bust in Boston? 33 lbs, which may not seem like much, but fentanyl is so potent that that’s enough to kill every person in Massachusetts.)


Returning to the quote: if there is an underlying depression, and I believe there is, all of the explanations for it involve things like job loss and inequality, both of which seem destined to get worse. As I already said, we’re pretty good at course correction, but job loss is unlikely to get better as automation becomes more prevalent, and if we buy the thesis of The Great Leveler, inequality is unlikely to get better absent violence. Accordingly there is at least some justification for thinking that the trends are going to continue, that whatever course correction we have initiated is not going to be enough.


I said that the second possibility is a change of course which is clear, but difficult. In this case it’s clear that we need to remove the despair. Most sociologists agree on what that would take: more high-paying manufacturing jobs, stronger families, and a general feeling of being useful. Some of those are easier to define than others, but all are incredibly difficult to accomplish. My argument for a long time has been that this is what people were hoping for when they voted for Trump. And while I assume some of them genuinely thought he could give them all that, I imagine most were making a far more speculative bet.


Finally, deaths of despair also fall into that category of problems where I’m not sure what to do. The world has moved on and we can’t turn back the clock. Trump can promise to bring back manufacturing jobs till the cows come home, but he’s unlikely to have much of an impact. The trends we see are too massive to be stopped so easily. Thus saying we need more high-paying manufacturing jobs is not all that different from saying we need a miracle.


If the trends can’t be stopped (and as I said, I hope they can be) it may initially not seem like a catastrophe. But this is where the length of the trend becomes important. Taking just the opioid component (and remember there’s a lot more going on with deaths of despair), if things merely continue as they have been we’ll end up with 420,000 additional dead people in the US over just the next 10 years (basically equal to all the US soldiers who died in World War 2). And those aren’t total deaths, those are deaths over the pre-epidemic baseline, which I’m pegging at around 1999-2000. (Figures extrapolated from charts linked to in the show notes.) This all assumes that we stop it here, and it doesn’t get any worse. If, on the other hand, the trend continues to rise at the same rate it has been, a 5x increase over 17 years (from 1999 to 2016), then by 2035 we would have a quarter of a million people dying every year.
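
Here’s the back-of-envelope math behind those figures, as a minimal sketch (the death counts are the rough numbers from the linked charts, not official statistics):

    baseline = 10_000   # approximate annual opioid deaths, 1999-2000
    current = 52_000    # approximate annual opioid deaths, 2016

    # If deaths merely plateau at the 2016 level:
    print((current - baseline) * 10)   # -> 420000 excess deaths over the next decade

    # If the 5x-per-17-years trend continues instead:
    print(current * 5)                 # -> 260000 per year, roughly a quarter million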


I don’t think that will happen, and it’s possible that the opioid epidemic is a better example of a trend we missed than a trend which will continue. Which is to say that if you had predicted that overdose deaths from opioids would go from around 10,000 in the year 2000 to 52,000 in the year 2016, people would have thought you were crazy. And if they had believed you, they would have done everything in their power to stop it. The question I want to leave you with is: what are the current trends where we’re metaphorically in the year 2000? Trends just at the beginning of their rise, or, even worse, just beginning to compound, where this is the time to act? Is there any way of identifying them, and if so, is there anything we can do? Or are these trends similar to inequality, something that can only be significantly reversed by instability and violence?


I suspect I’ll be referring back to this post a lot, particularly since there are a lot of examples I didn’t even touch on. I’ll cover one of them in my next post.





This is my new record for the longest post. If you like in-depth (or rambling) writing, then consider donating. If you don’t like that sort of thing, then how did you ever end up here?

Saturday, February 10, 2018

The 2018 Edge Question of the Year

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:



Or download the MP3



Since 1998 (and in other forms earlier than that) John Brockman, founder of the Edge Foundation, has been doing a question of the year. Brockman is a book agent specializing in scientific literature and consequently well connected to the sorts of people you might want answers from. In past years he’s gotten answers from people like Richard Dawkins, Steven Pinker, Freeman Dyson and even Alan Alda. I have not read all the responses to every annual question (though I’ve read quite a few) but those I have read are, in general, very thought-provoking, and at least once (perhaps more) I’ve used the answers as part of a post.


It occurred to me near the end of last year that, despite Brockman unaccountably not asking me for my answer, I could nevertheless provide one for the 2018 Question of the Year, and that it might make a good post. I had actually intended to do it in January while the year was still fresh, but as far as I can tell the 2018 version of the question has only just recently become available. (And it still isn’t listed on the page with the rest of the questions.) Also, based on this statement, it appears that this might be the last year he does it:


After twenty years, I’ve run out of questions. So, for the finale to a noteworthy Edge project, can you ask "The Last Question"?


Unlike past years, this question, as might be expected, produced some very short responses. In that vein I’ll keep my response short as well, but I still have a few thousand more words to write, so I’ll save my response for the very end and spend the rest of the time discussing the other responses. I think if nothing else they give a very interesting snapshot of the current intellectual zeitgeist. Though this last question does have the same weakness as most of the previous questions in the series. Scott Aaronson (the very first response from this year, it’s alphabetical) describes his response and the challenge of responding thusly:


I’m not thrilled with the result, but reading through the other questions makes it clear just how challenging it is to ask something that doesn’t boil down to: “When will the rest of the world recognize the importance of my research topic?”


Fortunately my “research topic” is very broad, and however prejudiced the individual responses are, there is a lot of overlap between my interests and those of the people responding.


To begin with there are some questions which directly speak to past posts. For example the response by Jaeweon Cho is basically one of the points I was examining just last week:

Can we design a modern society without money which is at least as effective economically and politically as our current system?


As you’ll recall, the unfortunate historical conclusion is that it’s probably impossible. But apparently there are still people, like Mr. Cho, who continue to hope that it can be done. No offense to Mr. Cho, but not only is that a question which already has a pretty solid answer, I don’t think it’s a really good “last question” either (a failing common to many of the responses this year). Which is to say, I’m not sure the future of humanity hinges on whether we can design a system without money.


More interesting and consequential is Kurt Gray’s question:


What will happen to human love when we can design the perfect robot lover?


Long time readers may recognize this as being very similar to the question I posed in my post Is Pornography a Supernormal Stimuli? Unfortunately I don’t know Mr. Gray’s answer to his question, but I’m confident, as I pointed out in that post, that we’re already getting a taste of the answer and it’s not good. Of course I’m sure that there will be at least some social and cultural stigma around having sexual relations with a robot for a long time to come, but anyone who watched Blade Runner 2049, and remembers Joi, the holographic companion, knows that it doesn’t necessarily have to be skeezy.


One of the other respondents was Samuel Arbesman, who you may recall from the post where I reviewed (among other things) his book Overcomplicated. His question is very near and dear to my heart:


How do we best build a civilization that is galvanized by long-term thinking?


This is one answer/question that definitely isn’t guilty of the sin Aaronson described above (the sin of being a thinly veiled discussion of the person’s own research). And it also fits the criteria for being a good “last question”. I think it’s safe to assume, as Arbesman probably does, that long-term thinking corresponds to long-term success; in other words, we’re not just going to accidentally stumble into a glorious future, we’re going to have to work for it. The question is, are we “doing the work”?


As I have pointed out repeatedly, I don’t think we are, and one of the problems with the modern world is its increasing fixation on solving urgent, but potentially unimportant, problems. As I have argued, not only is long-term thinking a requirement for civilization, it may in fact be the very definition of civilization: that we are civilized to the extent that we do not prefer the present over the future. Thus I might slightly rephrase Arbesman’s question as, “If civilization relies on long-term thinking, how do we encourage that in a world that is becoming increasingly focused on the short-term?” I’m not sure, but I’m going to keep writing about it.


Speaking of writing about things, if we move from specific posts to the broader themes of the blog it takes us to Joscha Bach’s question:


What is the optimal algorithm for discovering truth?


As I have pointed out from time to time, it’s nice to imagine truth as some immutable object. And if we could just find the right algorithm we would uncover this truth in all its glory and in so doing banish doubt forever. Though perhaps I’m being unfair to Bach, maybe he is just hoping to uncover as much truth as possible, and has no illusions about uncovering the truth behind everything. (Also it should be said, unlike many questions this is an interesting “last question” not merely an interesting question.)


The issue I take with the question is that I think there’s less Truth available than most people imagine. There certainly is some, a few kernels here and there, that have brought large benefits to humanity (the law of gravity for example) but I think a better question is, “What is the optimal algorithm for making decisions under uncertainty?”


I agree with Bach that it would be nice if we just knew THE TRUTH and could act accordingly, but that option is unavailable for 95% (and maybe closer to 99%) of the things we’re actually interested in because we’ll never know the truth. And this is why so much of this blog is focused on antifragility. Because most of the time we’re acting in situations where the truth is unavailable.


Speaking of situations where the truth appears to be fluid, Kate Darling asks:


What future progressive norms would most forward-thinking people today dismiss as too transgressive?


This is one of the many questions which doesn’t feel like a “last question”; nevertheless it’s also something I’ve repeatedly wondered about, though, if you’ve read my blog, you’ll know I’m more interested in past progressive norms which are now dismissed as horribly transgressive than in future progressive norms. But both past and future norms tie back into the idea of the Overton Window, which is something of significant short-term importance.


I think it says something about the speed at which things are changing that this question is even asked. As the phrase goes, I don’t know Kate Darling from Adam (or should it be Eve? There’s those progressive norms again) so I don’t know if she considers herself particularly progressive, but the fact that she’s worried enough about it to make it her “last question” says a lot about the modern world.


Continuing to match responses to things I’ve covered in the blog, as you might imagine Fermi’s Paradox makes an appearance in the responses, though it didn’t show up as many times as I would have thought. Perhaps the paradox no longer grips the imagination like it once did. But interestingly, in one of the few responses where it did show up, it showed up connected to religion. Thus once again, even if I do say so myself, my own interests prove to be right on the cutting edge. As to the question, Kai Krause asks:


What will happen to religion on earth when the first alien life form is found?


First, I should offer a caveat. This question really only applies to Fermi’s Paradox if the alien life in question is intelligent, but that is where the question is the most interesting. Second, similar to some of the other responses, I’m not sure this is a very good last question.


To begin with, a lot of its finality depends on the religion you’re speaking of and the form the alien life takes. If the question isn’t about the paradox, if it just relates to, say, finding simple life in the oceans of Europa, then I don’t suspect that it will have much of an effect on any of the world’s major religions.


If, on the other hand, the question ties into the paradox, and the first alien life form we encounter is a civilization vastly superior to our own, then I imagine the impact could be a lot greater, and it might in fact be final, particularly if the aliens share one of our religions. I know most people would be shocked if the aliens had any religion, and even more shocked if it happened to be one that was already practiced on Earth, but I think I’ve laid out a reasonable argument for why that shouldn’t be so shocking. Nevertheless I guess we’ll have to cross that bridge when we come to it.


Supposing, as most people expect, that the aliens have no religion, then the situation could be final, but that would be because of the alien part of the question; the religion part would have nothing to do with it. As far as the religion side of things, even if finding aliens had the effect of wiping out religions (which I doubt), and even given my own feelings on the importance of religion, I don’t think the end of religion would mean the end of humanity. All of the preceding is just a complicated way of arguing that Krause didn’t intend to ask a “last question” with respect to humanity, he intended to ask a “last question” with respect to religion. I would argue that if we rephrased the question as, “Are those crazy religious people going to finally give up their delusions once we find aliens?” it would be at least as close to Krause’s true intent as the actual question, and maybe more so.


But perhaps I’m being unfair to Krause. He’s not the only one to ask questions about religion, and if we move over to that topic we find that Christopher Stringer was more straightforward:


Can we ever wean humans off their addiction to religion?


And he’s joined by Ed Regis:


Why is reason, science, and evidence so impotent against superstition, religion, and dogma?


Less negative, but mining a similar vein, Elaine Pagels asks:


Why is religion still around in the twenty-first century?


I may be harping on this too much, but why are all these questions “last questions”? Despite my decidedly pro-religion stance, I’m not arguing that they’re not interesting questions; in fact I think the utility of religion is embedded in those questions (something Stringer and Regis, in particular, might want to consider). But they’re only last questions if the respondents imagine that unless we can get rid of religion, humanity is doomed. As you might have guessed, I strongly disagree with this point. Not only because of how far we’ve already come with religion, but also because, unless they’re all defining religion very narrowly, I think we should be extremely worried they’ll toss out the baby of morality with the bathwater of religion.


I think David G. Myers, while basically speaking on the same general topic, phrases his question not only more interestingly but more productively:


Does religious engagement promote or impede morality, altruism, and human flourishing?


At least this question admits the possibility that the billions of religious people, both past and present, might not all be hopelessly deluded. And it is also a slightly better last question. The progress we’ve made thus far, as a largely religious species, argues strongly that religion doesn’t prevent progress from being made, and on the other hand, asking if religion might be important to “morality, altruism, and human flourishing” is something that should definitely be done before we get rid of it entirely (as Stringer and Regis appear prepared to do).


Outside of religion, many of the questions involve AI, another subject I’ve touched on from time to time in this space. Starting us off we have a question from someone I’ve frequently mentioned when speaking of AI, our old pal Eliezer Yudkowsky:


What is the fastest way to reliably align a powerful AGI around the safe performance of some limited task that is potent enough to save the world from unaligned AGI?


(I should explain that here Yudkowsky uses AGI, as in artificial general intelligence, rather than the more common AI.)


In this area, at least, we finally start to see some good “last questions”. From Yudkowsky’s perspective, if we don’t get this question right, it will be the last question we ask. He even goes so far as to use the phrase “save the world”, and, from his perspective, this is exactly what’s at stake.


Maria Spiropulu’s question is similarly final, and similarly bleak:

Could superintelligence be the purpose of the universe?


I’m assuming it’s bleak because it makes no mention of humanity’s place in this eventual future. One assumes that the superintelligence she envisions would be artificial, but even if it weren’t, the alternative is a race of post-humans which would probably bear very little resemblance to humanity as it is now. And I know that for many people that’s a feature, not a bug, but we’ll grapple with that some other time.


As long as we’re emphasizing how well the AI responses fit in with the “last question” theme, Max Tegmark’s question is especially clever in this regard:


What will be the literally last question that will preoccupy future superintelligent cosmic life for as long as the laws of physics permit?


Something of a meta-last question, but another one which presupposes the eventual triumph of some form of superintelligence.


Finally in this category I want to mention Tom Griffiths’ response:

What new cognitive abilities will we need to live in a world of intelligent machines?


This question is less futuristic than the rest, covering as it does mere job automation rather than the eventual heat death of the universe, but job automation could nevertheless be sufficiently disruptive that there’s a potential for this to be the “last question”. I don’t think it’s too much of a stretch to say that a purposeless ennui has already gripped much of the nation, and job automation, as I have pointed out, promises only to exacerbate it.


My purpose in highlighting these questions, other than pointing out the obvious, that people other than myself are interested in artificial intelligence, is to illustrate the widespread belief that we’re in a race. The same race I talked about in my very first post. A race between a beneficial technological singularity and catastrophe. And, as these responses have alluded to, the technological singularity doesn’t have to be beneficial. It could be the catastrophe.


Of course, other than Yudkowsky’s response, the AI questions I listed avoid any direct mention of potential catastrophe (which is probably one of the reasons I keep coming back to Yudkowsky; whatever his other flaws, at least he acknowledges the possibility things will go poorly). In fact, given the “last question” theme, there’s a surprising dearth of questions which allude to the possibility that progress might not continue along the same exponential curve it has followed for the last few decades. Only a single question, from John Horgan, mentions the word “war”:

What will it take to end war once and for all?


And even this question seems to assume that we’re 95% or even 99% of the way there, and he’s just wondering what it will take to finally push us over the top.


As I said, Horgan and Yudkowsky are outliers. Most of the responses seem to assume that there will still be researchers and scientists around in 50 or 100 or however many years, working as they always have to create the “optimal algorithm for discovering truth” or develop a “comprehensive mathematics of human behavior” or investigate the differences in evolution “one hundred thousand years from now”. I can only assume that last respondent expects us to be around in one hundred thousand years; otherwise I imagine evolution will be much the same one hundred thousand years from now as it’s been for the last three billion years.


Perhaps the next 50 to 100 years will be as uneventful as the last 50, but the next 100,000? I know that many of these people believe that something awesome will happen long before then. Something which will lock in progress and lock out catastrophe. (Something like civilizational tupperware.) But what if it doesn’t? Is there any chance of that? And if there is, shouldn’t we perhaps do something about it? Particularly now that we’ve got all these cool gadgets that might help us with that? Fortunately, there are some responses which acknowledge this possibility. Albert Wenger in particular seems to share exactly the concerns I just mentioned:


How do we create and maintain backup options for humanity to quickly rebuild an advanced civilization after a catastrophic human extinction event?


And Dylan Evans is even more pessimistic:


Will civilization collapse before I die?


(Though perhaps his question is more subtle than it initially looks. Maybe Evans is a transhumanist and expects to live for a very long time.)


I was happy to see at least a few questions which acknowledged the fragility of things, but in the end I keep coming back to Arbesman’s question:


How do we best build a civilization that is galvanized by long-term thinking?


This is because so many of the other responses inadvertently proved his point: there was a decided lack of long-term thinking even in answer to a question that basically demanded it. As Aaronson said, most of the questions really boiled down to:


When will the rest of the world recognize the importance of my research topic?


And of those that avoided that, many engaged in only positive long-term thinking, which to my mind is a potentially fatal oversight. One that overlooks not only the challenges humanity is likely to face, but the challenges we should be the most worried about.


Of course, I promised I would provide my own response, and it’s certainly unfair to criticize others if I’m not willing to try the same thing myself. But before I do: to the many respondents who religiously read this blog, but whose questions I didn’t mention (sorry Freeman!), I apologize. You may assume that it was fantastic.


And now, finally, the moment you’ve all been waiting for, my last question:


Is humanity doomed because we have mistaken technology for wisdom, narcissism for joy, and debauchery for morality?





There’s another last question I’d like you to consider, not the last question for humanity, but the last question I ask at the end of every post. Any chance you’d consider donating? Maybe it’s finally time to answer yes.