Saturday, July 28, 2018
If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:
Or download the MP3
As I mentioned, this week on Thursday and Friday, I gave two presentations at the Sunstone Symposium, and when the week began, I was not quite as prepared for those presentations as I would have liked. What this meant is that the time I would normally have spent on my weekly post ended up getting spent on preparing my presentations instead. Accordingly, I considered skipping this week’s post, but I’m skipping next week to go to GenCon and also skipped a week not that long ago, so as a compromise I decided that I won’t skip this week, but I will take the somewhat easier option of putting together a few short pieces on some things I’ve been thinking about recently, rather than taking the time to construct a single, more involved piece.
Also, related to that, I apologize that I didn’t engage at all with the comments from last week’s post. It was a good discussion, but I didn’t have the time to contribute to it, which is sometimes how it goes.
With the grovelling apologies out of the way, let’s move on to the actual topics.
The Sunstone Symposium
Obviously it would be odd if I went to something like the Sunstone Symposium and then had nothing to say about it. But before I get into that I imagine some people are curious about my presentations, and might even appreciate a link. Unfortunately the presentations weren’t filmed, though they were recorded, but buying a recording is $12/presentation, which may be an entirely appropriate price for the other presentations, but it’s way too much to pay for mine. However, I did the presentations in Google Slides and I made pretty extensive notes to go along with them, so I will link to that. Here are the links for the AI Presentation (slideshow, notes) and for the Fermi’s Paradox Presentation (slideshow, notes). Also, you should put the slideshow into presentation mode for the full effect.
With that out of the way, what about the rest of it? I don’t think I’m going to do a blow-by-blow of things, though there were a couple of moments of note which I’ll mention at the end. Instead, what I mostly want to focus on is a higher level discussion of things. I want to start with discussing ecclesiology. As with so many things I first encountered this term in a blog post by Scott Alexander on SlateStarCodex, though it’s one of those things I should have known about earlier.
Technically it’s the study of theology as it applies to the nature and structure of the Christian Church. But Alexander was using the term in a somewhat broader sense, as a study of the trade-offs between the level of agreement you have in a group and the number of people in that group. Basically, if you require that everyone agree about absolutely everything, you’ll have a very focused and ideologically united group with exactly three members (if you’re lucky). If, on the other hand, you have very few requirements for what someone should believe before they join your group (or religion), then you’ll have a very large but very unfocused group.
Every group has to deal with this problem, and religions are no exception. Though if you came up with a score based on multiplying the number of adherents by the strength of their beliefs, you would find that religions generally score higher than any other kind of group. Additionally, the relationship between the rigidity of beliefs and the number of adherents is not entirely linear. After a certain point, further watering down of a religion’s beliefs leads to fewer adherents rather than more; otherwise Unitarian Universalism would be the world’s largest religion. All of which is to say that there are a bunch of factors which go into getting a high “score”. Further, you’re perfectly within your rights to argue that the score doesn’t matter. And even if it does matter, it’s still probably not directly correlated with power and influence, but it is nevertheless something all groups and religions are going to be affected by.
All of this is a long-winded lead-in to saying that Sunstone is an example of what gets created at the intersection of this trade-off between strengthening beliefs (or in this case holding fast to them) and having more adherents. As an aside, talking about ecclesiology in this context ignores, of course, whether the religion is actually true, because if it’s true, then this mostly shouldn’t matter (though I suppose there is some wiggle room around the edges where it still would). But it’s hard to get a read on how much truth the average Sunstone attendee feels is contained in the Mormon Church (and all of its various offshoots), which is why I chose to focus on the question of ecclesiology. And what I find interesting about this question is that despite Sunstone being right in the middle of it, all of the presentations I went to either entirely ignored it, or came out 100% in favor of watering down the doctrine in order to attract more adherents, without speaking at all about how this might affect the overall cohesiveness and unity of the church.
To be fair, watering down some of the prohibitions is a strategy, and a straightforward one at that, but it’s not without its consequences, even if you set aside any claims of truth. The world already has one Unitarian Universalist church; it doesn’t need another one, and even if it did, that’s not some guaranteed path to success. Essentially every religion (particularly in the developed world) is bleeding members, and it’s not clear that the LDS Church is bleeding members at a faster rate than other churches; in fact it might even be slower…
As I mentioned these are all going to be short topics, so I’m not going to say anything beyond that, but maybe in a future post I’ll circle back and talk more about Sunstone, or the subject of ecclesiology. Though, before I leave the subject of Sunstone entirely, I did say there were a couple moments from the Symposium that were worth relating.
The first was Thursday night. The program suggested that people might be eating at Joe’s Crab Shack. I didn’t necessarily have anyone to go to dinner with, but I thought I’d wander over to Joe’s and see if I could crash someone else’s group. I arrived at the same time as a couple of gentlemen and asked if I could join them. They said sure, and we sat down for dinner along with four other individuals. It was only after I was seated that I found out that I was with a bunch of Community of Christ leaders, including two Apostles and one of the Presidents of the Seventy. If you don’t know much about the various sects of Mormonism (according to them I’m a Brighamite and they’re Josephites) this won’t mean much to you, but trust me it was a big deal. It was also delightful. I had a great time chatting with them about all manner of things. And it ended up being a fantastic bit of serendipity.
The second moment was Friday morning. I decided to go to a presentation entitled “Transfeminist Critique” (in this case it was a Transfeminist Critique of the LDS Church). I obviously assumed that I would disagree with the presenter on just about everything, but I was interested in the current arguments going around in this space. That said, I confess I didn’t listen as closely as I might have. Partially that was my fault. I had my computer open and I was catching up on all the email I’d neglected while doing last minute prep on my presentation (this presentation immediately followed mine). But in part it was because the presenter just read her paper (she was a transwoman) without any accompanying slides or even standing up, which, I’m going to be honest, made things somewhat monotonous.
Those caveats aside, the overwhelming impression I came away with was that this person was putting forth a worldview completely incompatible with reality as I understand it... I know that’s a strong statement, and as I said it was just an impression. Also I hope I’m wrong, I hope there is some way to peacefully integrate her worldview with all the other world views out there. World views which have existed for an awfully long time and don’t have much room for extreme and widespread gender ambiguity. I guess what I’m trying to say is that this presentation was another example of the world fracturing into numerous groups with competing values that appear entirely irreconcilable.
Which takes us directly to our next point.
I’ve already mentioned Scott Alexander and SlateStarCodex in this post, and it’s somewhat embarrassing to mention him again, but recently one of the interesting debates which has raged over there (actually does Alexander ever rage? Maybe “simmered over there”?) is whether there are values which are completely irreconcilable. Whether when everything is said and done it’s as Justice Holmes said, “Between two groups that want to make inconsistent kinds of world I see no remedy but force.”
Alexander, ever the peacemaker, argues that there aren’t. And I mostly agree with him. And I think the point he makes, that tradition and evolution are actually better at accomplishing certain things than being really intellectual about it, is exactly the point I’ve made over and over again. And his example is worth quoting:
A natural interpretation: people with explicit modeling are smart and good, people who still use metaphysical heuristics are either too hidebound to switch or too stupid to do the modeling.
I think this is partly right, but since our goal is to make value differences seem less clear-cut and fundamental, I want to make the devil’s advocate case for respecting metaphysical heuristics.
First, the heuristics are, if nothing else, proven to be compatible with continuing to live; the explicit models often suck.
Soylent uses an explicit model of nutrition to try to replace our vague heuristics about “eating healthy”. I am mostly satisfied with the quality of its research; it generally avoids stupid mistakes. It does not completely avoid them; the product has no cholesterol, because “cholesterol is bad”, but the badness of cholesterol is controversial, and even if we grant the basic truth of the statement, it applies only at the margin in the standard American diet. If you eat only one food item, you had better get that food item really right, and it turns out that having literally zero cholesterol in your diet is long-term dangerous. This was an own-goal, and a smarter explicit modeler could have avoided it. But explicit models that only work when you get everything exactly right will fail 95% of the time for geniuses and 100% of the time for the rest of us.
And even if Soylent had avoided own-goals, they still risk running up against the limit of our understanding. Decades ago, doctors invented a Soylent-like fluid to pump into the veins of patients whose digestive systems were so damaged they could not eat normally. These patients tended to get a weird form of diabetes and die. After a lot of work, the doctors discovered that chromium – of all things – was actually a really important dietary nutrient, and nobody had ever noticed before because it’s more or less impossible to run out of chromium with any diet except having synthetic fluids pumped into your veins. After years of progress on nutritional fluids, the patients who need them no longer die; we can be pretty sure we’ve found everything that’s fatal in deficiency. But these patients do tend to feel much worse, and be much less healthy, than people eating normal diets. How many mildly-important trace micronutrients are left to discover? And how many of these are or aren’t in Soylent?
This all relates to a discussion where certain people seem like hateful bigots, but are really just following a heuristic. And they don’t differ in values with those who have an explicit model as much as we think.
Okay, so I agree with Alexander, enough to include a huge quote from him. (Though partially that’s me padding my word count.) Why did I even bring this issue up? Well first, it’s an interesting debate, and it wouldn’t be a horrible use of your time to read the three posts I linked to above. But second, because I think he misses two things:
First, technology lets us refine things to a degree we never could before. Previously, humanity was a giant mishmash of values which mostly evened out in practice; these days we can take a naked value and turn it up to 11. Alexander points out that even people who are adamantly against foreign aid might still donate to tsunami relief for Japan after the 2011 earthquake. This is probably not the case with a naked value turned up to 11; there would be no room for exceptions. I’ve talked about the tension between survival and happiness. As Alexander points out, no human is going to entirely ignore either, but we could create systems which do. And even if the system is set up not to ignore a value, by explicitly defining values, as is the case with Soylent, you could overlook some critical part of how that value actually works in practice.
Second, Alexander could be entirely correct, but it may not matter, because people think there are irreconcilable differences. The perception of those differences may be all that matters. It is conceivable that in this day and age people might not put in the hard work to understand the other side. Speaking of which, let’s move on to discuss Trump, the greatest current flashpoint of irreconcilable values, whose summit with Putin was very much in the news recently.
Trump and Russia
There are people who think Trump is a master strategist, despite the fact that to all appearances he looks like a bumbling, mercurial, ill-tempered oaf. That, rather, this apparent oafishness is all part of a master strategy playing out at a level most people (including myself) can’t understand. That Trump is playing 4D chess. Interestingly enough, it’s apparently not just his supporters who think this, it’s also the Chinese, according to a recent article in the Financial Times:
I have just spent a week in Beijing talking to officials and intellectuals, many of whom are awed by [Trump’s] skill as a strategist and tactician…He [Yafei] worries that strategic competition has become the new normal and says that “trade wars are just the tip of the iceberg”.
…In Chinese eyes, Mr Trump’s response is a form of “creative destruction”. He is systematically destroying the existing institutions — from the World Trade Organization and the North American Free Trade Agreement to Nato and the Iran nuclear deal — as a first step towards renegotiating the world order on terms more favourable to Washington. Once the order is destroyed, the Chinese elite believes, Mr Trump will move to stage two: renegotiating America’s relationship with other powers. Because the US is still the most powerful country in the world, it will be able to negotiate with other countries from a position of strength if it deals with them one at a time rather than through multilateral institutions that empower the weak at the expense of the strong…
My interlocutors say that Mr Trump is the first US president for more than 40 years to bash China on three fronts simultaneously: trade, military and ideology. They describe him as a master tactician, focusing on one issue at a time, and extracting as many concessions as he can. They speak of the skillful way Mr Trump has treated President Xi Jinping. “Look at how he handled North Korea,” one says. “He got Xi Jinping to agree to UN sanctions [half a dozen] times, creating an economic stranglehold on the country. China almost turned North Korea into a sworn enemy of the country.” But they also see him as a strategist, willing to declare a truce in each area when there are no more concessions to be had, and then start again with a new front.
I am still inclined to think he’s an oaf, but this seemed worth sharing. Though I still haven’t talked about Russia. Well, I bring up the report from China to introduce the idea that even if you think he’s an oaf (as I do), things still might not be as catastrophic as you’ve been led to believe.
If you go back far enough into my archives, you’ll encounter the idea that what we should really be paying attention to at the level of the Federal Government is preventing big black swans, and I gave the example of the Sack of Baghdad in 1258 by the Mongols. In the current world our Sack of Baghdad would be a war involving nuclear weapons. Accordingly if we strip away all the talk of Trump as traitor, and what he may or may not have said about Russian election interference or however much he may have stabbed the American intelligence agencies in the back, did the Trump-Putin summit increase or decrease the chances of this particular black swan?
I haven’t spent as much time as I probably should have reading about the summit; part of the problem is that the anti-Trump hysteria is so widespread that it’s hard to wade through. But my impression is that very few people are focused on this angle, and that those who have mentioned it can’t agree on whether the summit increased or decreased the chances of nukes being used. People who think it’s made the problem worse argue that the summit increased Putin’s confidence, which makes him more expansionist, which leads to wars, which leads to nukes. People who think it’s made the problem better think that the friendlier we are with Russia, the less likely they are to nuke us, and that this policy is definitely better than the policy of encirclement which preceded Trump and almost certainly would have continued had Clinton won. (Specifically I’m talking about the policy of continuing to add more nations to NATO.) Which boils down to: is a confident, expansionist Russia or a fearful, hemmed-in Russia more likely to use nukes?
I’m going to argue strongly for the latter, that if you’re worried about nukes, creating a fearful Russia through encirclement is worse. Sure in the past, expansionist impulses have often led to war, but recall that nuclear weapons are a completely different type of weapon. You don’t nuke someone and then take over their territory, because once you’ve nuked them you probably don’t want it. You nuke someone when you don’t care anymore, when you’re out of other options.
Now to be fair, reasonable people could disagree on this point. And insofar as Trump has empowered Putin to be worse to the people in Russia than he already was, that’s unfortunate. But when just considering whether the Trump policy of being buddies with Putin or the assumed Clinton policy of encircling Russia is better at avoiding that which must be avoided at all costs, I think I’m going to have to go with Trump. Maybe the Chinese are right...
Moving to WordPress
Finally, I’m not sure if you noticed, since I am posting this in both places, but I have started to move everything over to a new domain: rwrichey.com. This is, as people say, my real name (more or less).
I started blogging under the handle Jeremiah, first because I fancied I was a modern day Jeremiah (maybe I am, maybe I’m not; certainly if I am, then I’m not the only one or the best). Second, because of the theme of the blog (which I’m mostly still keeping). And finally because I was worried something I posted would end up being controversial and inflammatory enough that it would bleed over into my business or personal life, and I could limit the chances of that happening by blogging under a pseudonym.
That could still happen obviously, and maybe I’ll look back on this switch with bitter regret, but I’ve decided it’s a huge hassle to maintain anonymity, and that even if you do a really good job, if someone goes to enough effort, they’re going to figure it out.
The big upshot of this is that if you have a comment you should post it at rwrichey.com. And of course let me know (via email or the blogspot comments) if you run into any problems. I am definitely going to move all the posts over, hopefully very soon. I’m not sure if I’ll be able to move the comments over, we’ll have to see, but if it’s not too difficult I will.
Finally, I probably don’t say this often enough, but for all those reading this, I really appreciate it. And I’ll see you in a couple of weeks.
Saturday, July 21, 2018
If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:
Or download the MP3
Nietzsche claimed that, “God is dead” (or for the purists “Gott ist tot”). When I first heard this (I’m guessing in high school?) I assumed that it was just a particularly direct version of what atheists have been saying for decades. Notable only in that it was an early example of this sentiment, but not otherwise especially unique or interesting.
Since then I have come to understand that Nietzsche was making a deeper point. Though in claiming this I am wandering into the deep weeds of philosophy, and it’s entirely possible that I am about to vastly oversimplify Nietzsche’s point, or misrepresent it entirely, similar to Otto in A Fish Called Wanda. Though this possibility has never stopped me before, so with that caveat out of the way...
As I understand it, Nietzsche was saying that progress, technology, and the Enlightenment had ruled out the possibility of God, and in doing so had removed one of the central pillars of Western-Christian Civilization. And without that pillar, which includes God as a source of absolute morality, we were inevitably doomed to nihilism. I think you get a sense of this just from considering a more extended selection of what Nietzsche said, which is frankly pretty powerful.
God is dead. God remains dead. And we have killed him. How shall we comfort ourselves, the murderers of all murderers? What was holiest and mightiest of all that the world has yet owned has bled to death under our knives: who will wipe this blood off us? What water is there for us to clean ourselves? What festivals of atonement, what sacred games shall we have to invent? Is not the greatness of this deed too great for us? Must we ourselves not become gods simply to appear worthy of it?
These are all important, if heavily metaphorical questions, and, of course, to that last question the transhumanist would reply, “Maybe so, maybe we do have to become gods, fortunately that’s exactly what we intend to do.”
Two of the topics I come back to over and over again, Artificial Intelligence and Fermi’s Paradox, relate to this question of the absence of God. And next week I’m going to be doing an hour-long presentation on both of them at the annual Sunstone Symposium.
(If you happen to be attending the symposium, I’ll be doing my AI presentation at 11:30 am on Thursday the 26th in room 200-B, and I’ll be doing my Fermi’s Paradox presentation at 10:15 am on Friday the 27th in room 200-D. Please stop by and say, “Hi!”)
Given that I was already doing a bunch of work to prepare for these presentations, I had initially thought that this week’s post would be on AI and then next week’s post would be Fermi’s Paradox. But as I got into things, I realized that for those who have actually read the blog there’s not much point in posting the stuff I’m preparing to present at Sunstone, which is understandably going to be more introductory, and probably a repeat of a lot of things I’ve already said, and which you’ve already read. I’m still hoping they film both presentations, and put them online, so that I can post links to them. I guess we’ll see. It’s my first time so I’m not sure what will happen.
Instead I thought I’d look for a subject which combined the two topics in an interesting way, and I believe the quote from Nietzsche does exactly that, though at a pretty high level (which is to be expected when combining these two subjects.)
It may not be apparent what the quote from Nietzsche has to do with Fermi’s Paradox. Well, if Nietzsche is correct and we have metaphorically killed the traditional Christian God (and, given the similarities, probably the Muslim God as well), there’s still the possibility that there might be other god-like beings out there, specifically god-like extraterrestrials. I have not encountered any evidence that Nietzsche considered this possibility, but his statement obviously doesn’t preclude it, and for obvious reasons even if Nietzsche didn’t consider it, we should. One could imagine that if the two main things that Christianity supplied were morality and salvation, sufficiently advanced aliens could provide both, or perhaps just one or the other.
The first thing that’s evident once we turn to consider this idea is the possibility that if god-like extraterrestrials are going to provide morality, it may not be a morality we particularly like. Many people, when considering Fermi’s Paradox, have come to the conclusion that the universe is a dark forest. A place of incredible danger. This theory takes its name from the Remembrance of Earth’s Past trilogy by Liu Cixin, where it was the title of one of the books. Here’s how it’s described there:
The universe is a dark forest. Every civilization is an armed hunter stalking through the trees like a ghost, gently pushing aside branches that block the path and trying to tread without sound. Even breathing is done with care. The hunter has to be careful, because everywhere in the forest are stealthy hunters like him. If he finds other life—another hunter, an angel or a demon, a delicate infant or a tottering old man, a fairy or a demigod—there’s only one thing he can do: open fire and eliminate them.
Liu is not the only person to put forth this theory (he just gave it the catchiest name). Years before Liu wrote his books, other people were arguing that we shouldn’t engage in Active SETI for very similar reasons (this included the late Professor Hawking). For myself, I wrote a whole post explaining why I didn’t think the Dark Forest explanation of the paradox was very likely, but for those that do think it’s likely, it entirely undermines the idea of a universal morality, or at least posits that if there is a universal morality, it’s a morality of universal violence. Which takes us to a place not that much different than Nietzsche’s original thought. Instead of being alone, bereft of morality and adrift in an uncaring universe, we could be surrounded by genocidal aliens, gifted with a morality of unceasing violence, and adrift in a malevolent universe. I think most people would actually prefer the first option. But either way, the eventual nihilism Nietzsche predicts is just as likely, if not more so.
Of course there are a broad range of possible moral codes which extraterrestrials might possess. But within all the speculation it’s very hard to find anyone arguing that there is some universal system of morality which all aliens must, by necessity, embrace. And of course my argument is that if such a system exists, Occam’s Razor would suggest that we already have it, even if we’ve been given the basic, “early reader” version of this morality. And once we add Fermi’s Paradox to Nietzsche’s observation, and take that further step of placing ourselves outside a human frame of reference, universal morality, or a morality which easily replaces Christianity, becomes impossible to imagine. With this in mind, what makes atheists and similar individuals so certain that there is morality outside of the concept of God? Certainly Nietzsche didn’t think so:
When one gives up the Christian faith, one pulls the right to Christian morality out from under one's feet. This morality is by no means self-evident... By breaking one main concept out of Christianity, the faith in God, one breaks the whole: nothing necessary remains in one's hands.
Nietzsche argues that even if you maintain the rest of Christianity (and certainly it could be argued that we mostly did, at least initially) that without “faith in God...nothing necessary remains”. And indeed, it certainly appears to me that once people abandoned the lynchpin of “faith in God” that it began a slow erosion of everything else which was once considered Christian morality. Further, as I pointed out, while there’s no evidence that Nietzsche considered the possibility of god-like extraterrestrials, even if we add them to our consideration, there’s no reason to think that they would halt this erosion. Aliens, at least as they are typically imagined, don’t solve the problem of God’s absence, or at least I think we can conclude that they don’t solve the problem of morality. That still leaves us the problem of salvation. Will god-like extraterrestrials come along and save humanity?
Here, before going any further, we have to acknowledge that salvation looks different to different people. In its most minimal sense it’s just a synonym for survival. Being saved just entails not ceasing to exist. On the other side of the spectrum, salvation is used interchangeably with exaltation. Not only do you survive, but you achieve a state of perfect happiness. On the survival end it makes sense to talk about humanity surviving, and that being a good thing, regardless of whether any individual human survives. But on the exaltation end of things, it’s much more common to look at things from the level of an individual: is any given person immortal and happy? Is that person saved?
In a world which largely acts as if God is dead, it’s interesting that as the rest of Christian morality has eroded away, the two remaining pillars of moral high ground, of terminal value, end up falling into these same two categories, with survival on one end and happiness (or technically hedonism) on the other. I discussed the tension between these two values previously and argued that, if we were going to try to construct a morality in the absence of God, it’s better to build it around the value of survival, if for no other reason than that happiness is impossible in the absence of survival. I’ve already hopefully shown why aliens are unlikely to be able to help us with morality, and it seems equally unlikely they would be able to do much for our happiness, leaving only helping us to survive. This idea has appeared in science fiction, though far less often than the opposite trope of aliens looking to exterminate humanity. That said, there are still plenty of interesting examples. For myself I quite enjoyed the book Spin by Robert Charles Wilson.
However, if being rescued from extinction by aliens is a possibility, then, as I pointed out in another recent post, they need to have either saved us already (perhaps through means we can’t detect?) or they probably aren’t going to save us. And of course this applies to everything I’ve said thus far. If god-like extraterrestrials are going to step in and take the place of Nietzsche’s dead god, in any capacity, they need to have done so already.
Thus far we’ve been looking at what the ramifications would be if god-like aliens do exist, but more and more people feel that’s the wrong way to bet. That odds are we’re entirely alone. As examples of this, I just talked about the paper which claimed to “dissolve Fermi’s Paradox” and previously I discussed a book dedicated to the paradox which concluded, after offering up 75 potential explanations, that the most likely explanation is that we’re all alone in the visible universe. If this is the case, then it would appear that Nietzsche was entirely correct about the essential emptiness of existence despite completely ignoring potential god-like extraterrestrials who could step in and fill the gap. Accordingly, we are left with two possibilities. There are aliens, but they almost certainly won’t provide either morality or salvation, and definitely not both, or there are no aliens, god-like or otherwise. Meaning that after a long detour through Fermi’s Paradox, the reality of Nietzsche’s claim has not been significantly altered. We’re still in the same situation we were before, and possibly worse, since, in my opinion, if it did nothing else, the detour provided good reasons for doubting that any sort of universal morality exists in the absence of God.
I should interject here, again, that personally I think there is a God, and I think assuming his existence, along with the existence of religion and all that entails, is the best way to answer all of the issues we’ve covered so far, but I think this puts me in the minority of people with an interest in the paradox.
The main thrust of Nietzsche’s argument, from my limited understanding, is that people have not sufficiently grappled with the implications of there being no God. Now, according to polls, this doesn’t necessarily apply to most people, who still believe in God, and would therefore, presumably, be exempt from any need to “grapple”. Rather, Nietzsche appeared to mostly be talking to intellectuals. In his day and age they occupied the salons and drawing rooms of Europe, and discussed things like evolution and emancipation. In our day and age they occupy the internet and discuss things like Fermi’s Paradox and artificial intelligence. And just as Nietzsche accused the intellectuals of his day of not coming to terms with the ramifications implied in their discussions, I’m accusing the intellectuals of our day of the same thing. Particularly those people who believe that Fermi’s Paradox has been dissolved, who believe we are all alone in the universe. Which, let’s be clear, is a pretty big deal.
If you are one of those people who don’t believe in God, and who further believe that we’re all alone in the universe (or that if we’re not alone, it doesn’t help), what do you do now? This is where Nietzsche may be at his most impressive. Lots of people pointed out that the decline of religion was going to cause unforeseen issues, though perhaps with less panache than Nietzsche, but when he goes on to say, “Must we ourselves not become gods simply to appear worthy of it?” he manages to precisely describe the transhumanist movement a century or more in advance of its appearance. (Interestingly, his big prediction, a descent into nihilism, has mostly not happened. But maybe it just hasn’t happened… yet.)
I mentioned up front that I was going to be discussing AI, which is the subject we turn to now. And which is less a matter of us becoming gods than of us creating gods, but the basic principle remains the same. And the question I had with Fermi’s Paradox remains essentially the same as well: if there are no god-like extraterrestrials to step into the gap Nietzsche noticed, is it possible we could create a god-like AI to fill that gap?
Once again those who have abandoned a belief in God are looking to this “substitute god” to provide them with morality or salvation or hopefully both. Though in this case they do have one very important advantage: instead of being required to accept what the universe offers, as is the case with aliens (should they exist), in the case of artificial intelligence we get to design our deity. (I’m actually a little bit surprised no one has started an AI company with the name “Designer Deities”.)
This means, first off, that we’ll almost certainly combine the morality part with the salvation part. Or, to put it another way, we’ll do our best to make sure that, whatever morality the AI ends up with, one of its values is human salvation (definitely in the survival sense, and if possible in the exaltation sense as well). Which means that a century after Nietzsche pointed out the problem, we’ve come up with a straightforward solution: All we have to do is figure out how to teach computers to be good. (They would, of course, also need a certain amount of power beyond that, but most people assume that this is just a matter of time.) All of the problems Nietzsche describes can be reduced to the single problem of AI morality. Unfortunately, even though it’s only one problem, it’s an extraordinarily difficult problem.
As you may know from reading other posts of mine, or from following the subject in general, no one is exactly sure how you get a computer to be good. In fact no one is entirely sure what good means in this context, and there are lots of things which seem like a good way to implement morality, which could, in practice, turn out to be very bad. I’ve given numerous examples elsewhere, but let’s briefly consider Asimov's three laws of robotics, which are often mentioned in this context. The first of these is:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
It’s not hard to see how taking all humans and locking them up in a padded room, with a set number of optimally healthy calories delivered every day, would conform with this rule, and fit the survival definition of salvation. This is one of the reasons why some people contend that it’s not enough for our AI deity to ensure our survival; it really needs to exalt us.
(It’s interesting to note here the general principle, that survival is easy, exaltation is tough. Which may end up being the subject of a different post…)
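The padded-room problem can be made concrete with a toy sketch. This is my own illustration, not anything from the alignment literature: the policies and their scores are invented numbers, and the point is simply that an objective which only minimizes harm, as a literal reading of the First Law suggests, happily selects the degenerate option, while adding even a small weight on human flourishing changes the optimum.

```python
# Toy illustration (invented policies and scores): a "robot" that optimizes
# only for Asimov's First Law -- minimize the chance of a human coming to
# harm -- with no term for human flourishing.

# Each policy: (name, probability of human harm, flourishing score 0..1)
policies = [
    ("let humans live freely",        0.05,  0.9),
    ("supervise humans closely",      0.02,  0.5),
    ("padded room, calories by tube", 0.001, 0.0),  # "safest", and awful
]

# First-Law-only objective: pick whatever minimizes harm, full stop.
first_law_choice = min(policies, key=lambda p: p[1])
print(first_law_choice[0])  # the padded room wins

# Add even a modest weight on flourishing and the optimum changes.
def score(policy, weight=0.1):
    _, harm, flourishing = policy
    return -harm + weight * flourishing

weighted_choice = max(policies, key=score)
print(weighted_choice[0])  # now free humans win
```

The sketch is trivial, but it captures why "ensure survival" alone is an underspecified objective: the optimizer has no reason to prefer any of the things we actually care about beyond the single number it was given.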
We’ve once again arrived at a place where it becomes apparent that no one is 100% confident that we can formulate a universal system of morality, particularly if it needs to be defined with enough precision to feed into a computer. Now I’m sure there are some atheists out there who will scoff at the idea that religion provides a universal system of morality, but they’re missing the point. Religious people don’t think you can just give the Bible (or the Koran) to your new AI and grant it instant perfect morality. In other words, they don’t think it provides a perfect system of morality applicable in all times and all circumstances. (Though maybe some do.) It’s that they have faith that religious belief, combined with God’s omnipotence, creates a perfect system. Which is why, I believe, Nietzsche felt that “By breaking one main concept out of Christianity, the faith in God, one breaks the whole: nothing necessary remains in one's hands.” That faith is the critical component.
I understand people who don’t have faith, or think they shouldn’t have to have faith. Or who scoff at the very idea of faith. But I think these people will also find that it’s difficult to universalize morality without it. That becoming gods or creating gods is a difficult project.
Not too long ago, someone close to me came and told me that he had decided to leave the Mormon Church. The person said that he was now an atheist, or at least an agnostic. (I suspect the latter term is closer to the truth.) And he mentioned that one of the turning points was when he encountered something Penn Jillette had said, that you could be an atheist and still be good. I agree with this statement, and I would also agree that the horrible nihilism Nietzsche predicted would accompany the decline of religion has largely not come to pass. But I think, as we examine the various developments in the realm of replacing God (if he is in fact dead; remember, I argue that he’s not), it becomes clear that there isn’t some alternate system of morality which slots into the spot once occupied by Christianity. That when Penn says that you can be good and be an atheist, he’s largely saying that you can continue to maintain religiously derived morality without believing in God.
But the neo-Christian morality which seems to dominate these days, and which I assume Penn is referring to, is obviously getting farther and farther away from its core, and when it comes to both morality and nihilism, it’s entirely possible that all of Nietzsche’s worst predictions will come true; it’s just taking longer than he expected. That people really haven’t grappled with the Death of God, and that as morality continues to erode, as it becomes more difficult to define, as we seek to replace God, a reckoning is coming. Yes, it’s slower than Nietzsche expected. And yes, it’s very subtle, but the reckoning is coming.
You may think it’s easy to do a cursory and ill-informed survey of one philosophical statement, taken out of context, but it’s not. It takes a certain bull-headed determination, and if you appreciate that determination, regardless of how misguided it is, then consider donating.