
Saturday, February 10, 2018

The 2018 Edge Question of the Year

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:



Or download the MP3



Since 1998 (and in other forms earlier than that) John Brockman, founder of the Edge Foundation, has been doing a question of the year. Brockman is a book agent specializing in scientific literature and consequently well connected to the sorts of people you might want answers from. In past years he’s gotten answers from people like Richard Dawkins, Steven Pinker, Freeman Dyson and even Alan Alda. I have not read all the responses to every annual question (though I’ve read quite a few) but those I have read are, in general, very thought provoking, and at least once (perhaps more) I’ve used the answers as part of a post.


It occurred to me near the end of last year that, despite Brockman unaccountably not asking me for my answer, I could nevertheless provide one for the 2018 Question of the Year, and that it might make a good post. I had actually intended to do it in January while the year was still fresh, but as far as I can tell the 2018 version of the question has only just recently become available. (And it still isn’t listed on the page with the rest of the questions.) Also, based on this statement it appears that this might be the last year he does it.


After twenty years, I’ve run out of questions. So, for the finale to a noteworthy Edge project, can you ask "The Last Question"?


Unlike past years, this question, as might be expected, produced some very short responses. In that vein I’ll keep my response short as well, but I still have a few thousand more words to write, so I’ll save my response for the very end and spend the rest of the time discussing the other responses. I think, if nothing else, they give a very interesting snapshot of the current intellectual zeitgeist. Though this last question does have the same weakness as most of the previous questions in the series. Scott Aaronson (the very first response from this year; it’s alphabetical) describes his response, and the challenge of responding, as follows:


I’m not thrilled with the result, but reading through the other questions makes it clear just how challenging it is to ask something that doesn’t boil down to: “When will the rest of the world recognize the importance of my research topic?”


Fortunately my “research topic” is very broad, and however prejudiced the individual responses are, there is a lot of overlap between my interests and those of the people responding.


To begin with there are some questions which directly speak to past posts. For example the response by Jaeweon Cho is basically one of the points I was examining just last week:


Can we design a modern society without money which is at least as effective economically and politically as our current system?


As you’ll recall, the unfortunate historical conclusion is that it’s probably impossible. But apparently there are still people, like Mr. Cho, who continue to hope that it can be done. No offense to Mr. Cho, but not only is that a question which already has a pretty solid answer, I don’t think it is a really good “last question” either (a failing common to many of the responses this year). Which is to say I’m not sure the future of humanity hinges on whether we can design a system without money.


More interesting and consequential is Kurt Gray’s question:


What will happen to human love when we can design the perfect robot lover?


Long time readers may recognize this as being very similar to the question I posed in my post Is Pornography a Supernormal Stimuli? Unfortunately I don’t know Mr. Gray’s answer, but I’m confident, as I pointed out in that post, that we’re already getting a taste of the answer and it’s not good. Of course I’m sure that there will be at least some social and cultural stigma around having sexual relations with a robot for a long time, but anyone who watched Blade Runner 2049, and remembers Joi, the holographic companion, knows that it doesn’t necessarily have to be skeezy.


One of the other respondents was Samuel Arbesman, who you may recall from the post where I reviewed his book Overcomplicated (among other things). His question is very near and dear to my heart:


How do we best build a civilization that is galvanized by long-term thinking?


This is one answer/question that definitely isn’t guilty of the sin Aaronson described above (the sin of being a thinly veiled discussion of the person’s research). And it also fits the criteria for being a good “last question”. I think it’s safe to assume, as Arbesman probably does, that long-term thinking corresponds to long-term success; in other words, we’re not just going to accidentally stumble into a glorious future, we’re going to have to work for it. The question is, are we “doing the work”?


As I have pointed out repeatedly, I don’t think we are, and one of the problems with the modern world is its increasing fixation on solving urgent, but potentially unimportant, problems. As I have argued, not only is long-term thinking a requirement for civilization, it may in fact be the very definition of civilization: that we are civilized to the extent that we do not prefer the present over the future. Thus I might slightly rephrase Arbesman’s question as, “If civilization relies on long-term thinking, how do we encourage that in a world that is becoming increasingly focused on the short-term?” I’m not sure, but I’m going to keep writing about it.


Speaking of writing about things, if we move from specific posts to the broader themes of the blog it takes us to Joscha Bach’s question:


What is the optimal algorithm for discovering truth?


As I have pointed out from time to time, it’s nice to imagine truth as some immutable object, and to imagine that if we could just find the right algorithm we would uncover this truth in all its glory and in so doing banish doubt forever. Though perhaps I’m being unfair to Bach; maybe he is just hoping to uncover as much truth as possible, and has no illusions about uncovering the truth behind everything. (Also it should be said that, unlike many questions, this is an interesting “last question”, not merely an interesting question.)


The issue I take with the question is that I think there’s less Truth available than most people imagine. There certainly is some, a few kernels here and there, that have brought large benefits to humanity (the law of gravity, for example), but I think a better question is, “What is the optimal algorithm for making decisions under uncertainty?”


I agree with Bach that it would be nice if we just knew THE TRUTH and could act accordingly, but that option is unavailable for 95% (and maybe closer to 99%) of the things we’re actually interested in because we’ll never know the truth. And this is why so much of this blog is focused on antifragility. Because most of the time we’re acting in situations where the truth is unavailable.


Speaking of situations where the truth appears to be fluid, Kate Darling asks:


What future progressive norms would most forward-thinking people today dismiss as too transgressive?


This is one of the many questions which doesn’t feel like a “last question”. Nevertheless it’s also something I’ve repeatedly wondered about, though, if you’ve read my blog, I’m more interested in past progressive norms which are now dismissed as horribly transgressive than in future progressive norms. But both past and future norms tie back into the idea of the Overton Window, which is something of significant short-term importance.


I think it says something about the speed at which things are changing that this question is even asked. As the phrase goes, I don’t know Kate Darling from Adam (or should it be Eve? There are those progressive norms again), so I don’t know if she considers herself particularly progressive, but the fact that she’s worried enough about it to make it her “last question” says a lot about the modern world.


Continuing to match responses to things I’ve covered in the blog, as you might imagine Fermi’s Paradox makes an appearance in the responses, though it didn’t show up as many times as I would have thought. Perhaps the paradox no longer grips the imagination like it once did. But interestingly, in one of the few responses where it did show up, it showed up connected to religion. Thus once again, even if I do say so myself, my own interests prove to be right on the cutting edge. As to the question, Kai Krause asks:


What will happen to religion on earth when the first alien life form is found?


First, I should offer a caveat. This question really only applies to Fermi’s Paradox if the alien life in question is intelligent, but that is where the question is the most interesting. Second, similar to some of the other responses, I’m not sure this is a very good last question.


To begin with, a lot of its finality depends on the religion you’re speaking of and the form the alien life takes. If the question isn’t about the paradox, if it just relates to, say, finding simple life in the oceans of Europa, then I don’t suspect that it will have much of an effect on any of the world’s major religions.


If, on the other hand, the question ties into the paradox, and the first alien life form we encounter is a civilization vastly superior to our own, then I imagine the impact could be a lot greater, and it might in fact be final, particularly if the aliens share one of our religions. I know most people would be shocked if the aliens had any religion, and even more shocked if it happened to be one that was already practiced on Earth, but I think I’ve laid out a reasonable argument for why that shouldn’t be so shocking. Nevertheless I guess we’ll have to cross that bridge when we come to it.


Suppose instead, as most people expect, that the aliens have no religion. Then the situation could be final, but that would be because of the alien part of the question; the religion part of the question would have nothing to do with it. As far as the religion side of things, even if the discovery had the effect of wiping out religions (which I doubt), and even given my own feelings on the importance of religion, I don’t think the end of religion would mean the end of humanity. All of the preceding is just a complicated way of arguing that Krause didn’t intend to ask a “last question” with respect to humanity; he intended to ask a “last question” with respect to religion. I would argue that if we rephrased the question as, “Are those crazy religious people going to finally give up their delusions once we find aliens?” it would be at least as close to Krause’s true intent as the actual question, and maybe more so.


But perhaps I’m being unfair to Krause. In any case, he’s not the only one to ask questions about religion, and if we move over to that topic, we find that Christopher Stringer was more straightforward:


Can we ever wean humans off their addiction to religion?


And he’s joined by Ed Regis:


Why is reason, science, and evidence so impotent against superstition, religion, and dogma?


Less negative, but mining a similar vein Elaine Pagels asks:


Why is religion still around in the twenty-first century?


I may be harping on this too much, but why are all these questions “last questions”? Despite my decidedly pro-religion stance, I’m not arguing that they’re not interesting questions; in fact I think the utility of religion is embedded in those questions (something Stringer and Regis, in particular, might want to consider). But they’re only last questions if the respondents imagine that unless we can get rid of religion, humanity is doomed. As you might have guessed I strongly disagree with this point, not only because of how far we’ve already come with religion, but also because, unless they’re all defining religion very narrowly, I think we should be extremely worried they’ll toss out the baby of morality with the bathwater of religion.


I think David G. Myers, while speaking on basically the same general topic, phrases his question not only more interestingly, but more productively:


Does religious engagement promote or impede morality, altruism, and human flourishing?


At least this question admits the possibility that the billions of religious people, both past and present, might not all be hopelessly deluded. And it is also a slightly better last question. The progress we’ve made thus far, as a largely religious species, argues strongly that religion doesn’t prevent progress from being made; and on the other hand, asking whether religion might be important to “morality, altruism, and human flourishing” is something that should definitely be done before we get rid of it entirely (as Stringer and Regis appear prepared to do).


Outside of religion, many of the questions involve AI, another subject I’ve touched on from time to time in this space. Starting us off we have a question from someone I’ve frequently mentioned when speaking of AI, our old pal Eliezer Yudkowsky:


What is the fastest way to reliably align a powerful AGI around the safe performance of some limited task that is potent enough to save the world from unaligned AGI?


(I should explain that here Yudkowsky uses AGI, as in artificial general intelligence, rather than the more common AI.)


In this area, at least, we finally start to see some good “last questions”. From Yudkowsky’s perspective, if we don’t get this question right, it will be the last question we ask. He even goes so far as to use the phrase “save the world”, and, from his perspective this is exactly what’s at stake.


Maria Spiropulu’s question is similarly final, and similarly bleak.


Could superintelligence be the purpose of the universe?


I’m assuming it’s bleak because it makes no mention of humanity’s place in this eventual future. One assumes that the superintelligence she envisions would be artificial, but even if it weren’t the alternative is a race of post-humans which would probably bear very little resemblance to humanity as it is now. And I know that for many people that’s a feature not a bug, but we’ll grapple with that some other time.


As long as we’re emphasizing how well the AI responses fit the “last question” theme, Max Tegmark’s question is especially clever in this regard:


What will be the literally last question that will preoccupy future superintelligent cosmic life for as long as the laws of physics permit?


Something of a meta-last question, but another one which presupposes the eventual triumph of some form of superintelligence.


Finally in this category I want to mention Tom Griffiths’ response:


What new cognitive abilities will we need to live in a world of intelligent machines?


This question is less futuristic than the rest, covering as it does mere job automation rather than the eventual heat death of the universe, but job automation could nevertheless be sufficiently disruptive that there is a potential for this to be the “last question”. I don’t think it’s too much of a stretch to say that a purposeless ennui has already gripped much of the nation, and job automation, as I have pointed out, promises to only exacerbate it.


My purpose in highlighting these questions, other than pointing out the obvious, that people other than myself are interested in artificial intelligence, is to illustrate the widespread belief that we’re in a race. The same race I talked about in my very first post. A race between a beneficial technological singularity and catastrophe. And, as these responses have alluded to, the technological singularity doesn’t have to be beneficial. It could be the catastrophe.


Of course, other than Yudkowsky’s response, the AI questions I listed avoid any direct mention of potential catastrophe (which is probably one of the reasons I keep coming back to Yudkowsky; whatever his other flaws, at least he acknowledges the possibility things will go poorly). In fact, given the “last question” theme, there’s really a surprising dearth of questions which allude to the possibility that progress might not continue along the same exponential curve it has followed for the last few decades. Only a single question, from John Horgan, mentions the word “war”:


What will it take to end war once and for all?


And even this question seems to assume that we’re 95% or even 99% of the way there, and he’s just wondering what it will take to finally push us over the top.


As I said, Horgan and Yudkowsky are outliers; most of the responses seem to assume that there will still be researchers and scientists around in 50 or 100 or however many years, working as they always have to create the “optimal algorithm for discovering truth” or develop a “comprehensive mathematics of human behavior” or investigate the differences in evolution “one hundred thousand years from now”. (I can only assume that this last respondent expects us to be around in one hundred thousand years; otherwise I imagine evolution will be much the same one hundred thousand years from now as it’s been for the last three billion years.)


Perhaps the next 50 to 100 years will be as uneventful as the last 50, but the next 100,000? I know that many of these people believe that something awesome will happen long before then. Something which will lock in progress and lock out catastrophe. (Something like civilizational Tupperware.) But what if it doesn’t? Is there any chance of that? And if there is, shouldn’t we perhaps do something about it? Particularly now that we’ve got all these cool gadgets that might help us with that? Fortunately, there are some responses which acknowledge this. Albert Wenger in particular seems to share exactly the concerns I just mentioned:


How do we create and maintain backup options for humanity to quickly rebuild an advanced civilization after a catastrophic human extinction event?


And Dylan Evans is even more pessimistic:


Will civilization collapse before I die?


(Though perhaps his question is more subtle than it initially looks. Maybe Evans is a transhumanist and expects to live for a very long time.)


I was happy to see at least a few questions which acknowledged the fragility of things, but in the end I keep coming back to Arbesman’s question:


How do we best build a civilization that is galvanized by long-term thinking?


Since so many of the other responses inadvertently proved his point: there was a decided lack of long-term thinking, even in answer to a question that basically demanded it. As Aaronson said, most of the questions really boiled down to:


When will the rest of the world recognize the importance of my research topic?


And of those that avoided that, many engaged in only positive long-term thinking, which to my mind is a potentially fatal oversight: one that overlooks not only the challenges humanity is likely to face, but the challenges we should be the most worried about.


Of course, I promised I would provide my own response, and it’s certainly unfair to criticize others if I’m not willing to try the same thing myself. But before I do: to the many respondents who religiously read this blog, but whose questions I didn’t mention (sorry, Freeman!), I apologize. You may assume that it was fantastic.


And now, finally, the moment you’ve all been waiting for, my last question:


Is humanity doomed because we have mistaken technology for wisdom, narcissism for joy, and debauchery for morality?





There’s another last question I’d like you to consider, not the last question for humanity, but the last question I ask at the end of every post. Any chance you’d consider donating? Maybe it’s finally time to answer yes.
