
Saturday, August 12, 2017

Remind Me What The Heck Your Point is Again?

If you prefer to listen rather than read, this blog is available as a podcast here. Or, if you want to listen to just this post:



Or download the MP3



The other day I was talking to my brother and he said, “How would you describe your blog in a couple of sentences?”


It probably says something about my professionalism (or lack thereof) that I didn’t have some response ready to spit out. An elevator pitch, if you will. Instead I told him, “That’s a tough one.” Much of this difficulty comes because, if I were being 100% honest, the fairest description of my blog would boil down to: I write about fringe ideas I happen to find interesting. Of course, this description is not going to get me many readers, particularly if they have no idea whether there’s any overlap between what I find interesting and what they find interesting.


I didn’t say this to my brother, mostly because I didn’t think of it at the time. Instead, after a few seconds, I told him that of course the blog does have a theme, it’s right there in the title, but I admitted that it might be more atmospheric than explanatory. Though I think we can fix that with the addition of a few words. Which is how Jeremiah 8:20 shows up on my business cards. (Yeah, that’s the kind of stuff your donations get spent on, FYI.) With those few words added it reads:


The harvest [of technology] is past, the summer [of progress] is ended, and we are not saved.


If I were being really pedantic, I might modify it, and hedge, so that it read as follows:


Harvesting technology is getting more complex, the summer where progress was easy is over, and I think we should prepare for the possibility that we won’t be saved.


If I were going to be more literary and try to pull in some George R.R. Martin fans, I might phrase it:


What we harvest no longer feeds us, and winter is coming.


But once again, you would be forgiven if, after all this, you’re still unclear on what this blog is about (other than weird things I find interesting). To be fair to myself, I did explain all of this in the very first post, and re-reading it recently, I think it held up fairly well. But it could be better. And it assumes that people have even read that first post, which is unlikely, since at the time my readership was at its nadir. Since then, despite my complete neglect of anything resembling marketing, my readership has grown, and presumably at least some of those readers have not gone through the entire archive.


Accordingly, I thought I’d take another shot at it. To start, one concept which runs through much (though probably not all) of what I write is the principle of antifragility, as introduced by Nassim Nicholas Taleb in his book of (nearly) the same name.


I already dedicated an entire post to explaining the ideas of Taleb, so I’m not going to repeat that here. But, in brief, Taleb starts with what should be an uncontroversial idea: that the world is random. He then moves on to point out the effects of that, particularly in light of the fact that most people don’t recognize how random things truly are. They are often Fooled by Randomness (the title of his first book) into thinking that there are patterns and stability when there aren’t. From there he moves on to extreme randomness by introducing the idea of a Black Swan (the name of his second book), which is something that:


  1. Lies outside the realm of regular expectations
  2. Has an extreme impact
  3. Is rationalized after the fact, with people going to great lengths to show how it should have been expected


It’s important at this point to clarify that not all black swans are negative. And technology has generally had the effect of increasing the number of black swans of both the positive (internet) and negative (financial crash) sort. In my very first post I said that we were in a race between these two kinds of black swans, though rather than calling them positive or negative black swans I called them singularities and catastrophes. Tying it back into the theme of the blog: a singularity is when technology saves us, and a catastrophe is when it doesn’t.


If we’re living in a random world, with no way to tell whether we’re going to be saved by technology or doomed by it, then what should we do? This is where Taleb ties it all together under the principle of antifragility, and as I mentioned it’s one of the major themes of this blog. Enough so that another short description of the blog might be:


Antifragility from a Mormon perspective.


But I still haven’t explained antifragility, to say nothing of antifragility from a Mormon perspective, so perhaps I should do that first. In short, things that are fragile are harmed by chaos, and things that are antifragile are helped by chaos. I would argue that it’s preferable to be antifragile all of the time, but it is particularly important when things get chaotic. Which leads to two questions: How fragile is society? And how chaotic are things likely to get? I have repeatedly argued that society is very fragile and that things are likely to get significantly more chaotic. And further, that technology increases both of these qualities.


Earlier, I provided a pedantic version of the theme, changing (among other things) the clause “we are not saved” to the clause “we should prepare for the possibility that we won’t be saved.” As I said, Taleb starts with the idea that the world is random, or in other words unpredictable, with negative and positive black swans happening unexpectedly. Being antifragile entails reducing your exposure to negative black swans while increasing your exposure to positive black swans. In other words, being prepared for the possibility that technology won’t save us.


To be fair, it’s certainly possible that technology will save us. And I wouldn’t put up too much of a fight if you argued it was the most likely outcome. But I take serious issue with anyone who wants to claim that there isn’t a significant chance of catastrophe. Being antifragile consists of realizing that the cost of being wrong when you assume a catastrophe and there isn’t one is much less than the cost when you assume no catastrophe and there is one.
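

Since at heart this is just an expected-value argument, here’s a toy sketch of the asymmetry in Python. Every number in it is invented purely for illustration; the argument only needs the asymmetry between the two costs, not these particular values:

    # Hypothetical numbers: only the asymmetry between the costs matters.
    p_catastrophe = 0.10         # assumed chance of catastrophe
    cost_of_preparing = 1.0      # modest, fixed cost (time, money, effort)
    cost_if_unprepared = 1000.0  # ruinous cost if catastrophe hits the unprepared

    expected_prepared = cost_of_preparing                     # you always pay to prepare
    expected_unprepared = p_catastrophe * cost_if_unprepared  # you pay only if it hits

    print(expected_prepared, expected_unprepared)  # 1.0 vs. 100.0
    # Preparing stays the cheaper bet for any probability of catastrophe above
    # cost_of_preparing / cost_if_unprepared (here, 0.001).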


It should also be pointed out that most of the time antifragility is relative. To give an example: if I’m a prepper, and the North Koreans set off an EMP over the US which knocks out all the power for months, I may go from being a lower-class schlub to being the richest person in town. In other words, chaos helped me, but only because I reduced my exposure to that particular negative black swan, and most of my neighbors didn’t.


Having explained antifragility (refer back to the previous post if things are still unclear) what does Mormonism bring to the discussion? I would offer that it brings a lot.


First, Mormonism spends quite a bit of time stressing the importance of antifragility, though they call it self-reliance, and emphasize things like staying out of debt, having a plan for emergency preparedness, and maintaining a multi-year supply of food. This aspect is not one I spend a lot of time on, but it is definitely an example of Mormon antifragility.


Second, Mormons, while not as apocalyptic as some religions, nevertheless reference the nearness of the end right in their name. We’re not merely Saints, we are the “Latter-day Saints”. While it is true that some members are more apocalyptic than others, regardless of their belief level I don’t think many would dismiss the idea of some kind of Armageddon outright. Given that, if you’re trying to pick a winner in the race between catastrophe and singularity, or more broadly between negative and positive black swans, belonging to a religion which claims we’re in the last days could help break that tie. Also, as I mentioned, it’s probably wisest to err on the side of catastrophe anyway.


Third, I believe Mormon doctrine provides unique insight into some of the cutting-edge futuristic issues of the day. Over the last three posts I laid out what those insights are with respect to AI, but in other posts I’ve talked about how LDS doctrine might answer Fermi’s Paradox. And of course there’s the long-running argument I’ve had with the Mormon Transhumanist Association over what constitutes an appropriate use of technology and what constitutes an inappropriate one. This is obviously germane to the discussion of whether technology will save us, and what the endpoint of that technology will end up being. And it suggests another possible theme:


Connecting the challenges of technology to the solutions provided by LDS Doctrine.


Finally, any discussion of Mormonism and religion has to touch on the subject of morality. For many people, issues of promiscuity, abortion, single-parent families, same-sex marriage, and ubiquitous pornography are either neutral or benefits of the modern world. This leads some people to conclude that things are as good as they’ve ever been, and that if we’re not on the verge of a singularity, then at least we live in a very enlightened era, where people enjoy freedoms they could never have previously imagined.


The LDS Church, and religion in general (at least the orthodox variety), take the opposite view of these developments, pointing to them as evidence of a society in serious decline. Perhaps you feel the same way, or perhaps you agree with the people who feel that things are as good as they’ve ever been. But if you’re on the fence, then one of the purposes of this blog is to convince you that even if there is no God, it would be foolish to dismiss religion as a collection of irrational biases, as so many people do. Rather, if we understand the concept of antifragility, it is far more likely that religion, rather than being irrational, represents the accumulated wisdom of a society.


This last point deserves a deeper dive, because it may not be immediately apparent to you why religions would necessarily accumulate wisdom, or what any of this has to do with antifragility. But religious beliefs are, broadly, either fragile or antifragile: they either break under pressure or get stronger. (In fairness, there is a third category, things which neither break nor get stronger. Taleb calls this the robust category, but in practice it’s very rare for things to be truly robust.) If religious beliefs were fragile, or created fragility, then they would have disappeared long ago. Only beliefs which created a stronger society would have endured.
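

Incidentally, Taleb’s own more technical framing of this is in terms of convexity: roughly, a fragile system’s losses grow disproportionately as shocks get bigger, while an antifragile system’s gains do. Here’s a deliberately schematic Python sketch of the three categories. The payoff functions are invented for illustration; only their shapes (concave, flat, convex) matter:

    import random

    # Toy payoffs: how each kind of system responds to a shock of size x.
    def fragile(x):
        return -x ** 2   # volatility hurts, and larger shocks hurt disproportionately

    def robust(x):
        return 0.0       # indifferent to volatility (rare in practice)

    def antifragile(x):
        return x ** 2    # volatility helps, and larger shocks help disproportionately

    rng = random.Random(0)
    shocks = [rng.uniform(0, 2) for _ in range(10_000)]
    for system in (fragile, robust, antifragile):
        avg = sum(map(system, shocks)) / len(shocks)
        print(system.__name__, round(avg, 2))
    # Prints roughly: fragile -1.33, robust 0.0, antifragile 1.33.
    # As the environment gets more chaotic, the fragile system's average
    # payoff falls while the antifragile system's rises.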


Please note that I am not saying that all religious beliefs are equally good at encouraging antifragile behavior. Some are pointless or even irrational, but others, particularly those shared by several religions, are very likely lessons in antifragility. But a smaller and smaller number of people have any religious beliefs, and an even smaller number are willing to actively defend those beliefs, particularly the ones which prohibit a behavior currently in fashion.


However, if these beliefs are as useful and as important as I say they are, then they need all the defending they can get. Though in doing this a certain amount of humility is necessary. As I keep pointing out, we can’t predict the future. And maybe the combination of technology and a rejection of traditional morality will lead to some kind of transhuman utopia, where people live forever, change genders whenever they feel like it, and live in a fantastically satisfying virtual reality in which everyone is happy.


I don’t think most people go that far in their assessment of the current world, but the vast majority don’t see any harm in the way things are either. What if they’re wrong about that?


And this might in fact represent yet another way of framing the theme of this blog:


But what if we’re wrong?


In several posts I have pointed out the extreme rapidity with which things have changed, particularly in the realm of morality, where, in a few short years, we have overturned religious taboos stretching back centuries or more. The vast majority of people have decided that this is fine, and that, in fact, as I already mentioned, it’s an improvement on our benighted past. But even if you don’t buy my argument about religions being antifragile, I would hope you would still wonder, as I do, “But what if we’re wrong?”


This question applies not only to morality, but also to technology saving us, the constant march of progress, politics, and a host of other issues. And I can’t help but think that people appear entirely too certain about the vast majority of these subjects.


In order to bring up the possibility of wrongness, especially when you’re in the ideological minority, there has to be freedom of speech, another area I dive into from time to time in this space. And you can’t talk about freedom of speech, or the larger ideological battles around speech, without getting into the topic of politics, a subject I’ll return to.


As I have already mentioned, and as you have no doubt noticed, the political landscape has gotten pretty heated recently, and there are no signs of it cooling down. I would argue, as others have, that this makes free speech and open dialogue more important than ever. In this endeavor I share a fair amount of overlap with the rationalist community. Which you must admit is interesting, given that this community clearly has a large number of atheists in its ranks. But that failing aside, I largely agree with much of what they say, which is why I link to Scott Alexander over at SlateStarCodex so often.


On the subject of free speech the rationalists and I are definitely in agreement. Eliezer Yudkowsky, an AI theorist whom I mentioned a lot in the last few posts, is also one of the deans of rationality, and he had this to say about free speech:


There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.


I totally agree with this point, though I can see how some people might choose to define some of the terms more or less broadly, leading to significant differences in the actual implementation of the rule. Scott Alexander is one of those people. He chooses to focus on the idea of the bullet, arguing that we should expand the prohibition beyond literal bullets, or even literal weapons, changing the injunction to:


Bad argument gets counterargument. Does not get bullet. Does not get doxxing. Does not get harassment. Does not get fired from job. Gets counterargument. Should not be hard.


In essence he wants to include anything that’s designed to silence an argument rather than answer it. And why is this important? Well, if you’ve been following the news at all, you’ll know that there has been a recent case where exactly this happened, and a bad argument got someone fired. (Assuming it even was a bad argument, which might be a subject for another time.)


Which ties back into asking, “But what if we’re wrong?” Because unless we have a free and open marketplace of ideas where things can succeed and fail based on their merits, rather than whether they’re the flavor of the month, how are we ever going to know if we’re wrong? If you have any doubts as to whether the majority is always right then you should be incredibly fearful of any attempt to allow the majority to determine what gets said.


And this brings up another possible theme for the blog:


Providing counterarguments for bad arguments about technology, progress and religion.


Running through all of this, though most especially the topic I just discussed, free speech, is politics. The primary free speech battleground is political, but issues like morality and technology and fragility all play out at the political level as well.


I often joke: you know those two things that you’re not supposed to talk about, religion and politics? Well, I decided to create a blog where I discuss both. Which leads me to yet another possible theme:


Religion and Politics from the perspective of a Mormon who thinks he’s smarter than he probably is.


Perhaps the final thread running through everything is that, like most people, I would like to be original, which is hard to do. The internet has given us a world where almost everything you can think of saying has already been said. (Though I’ve yet to find anyone making exactly the argument I make when it comes to Fermi’s Paradox and AI.) But there is another way to approximate originality, and that is to say things that other people don’t dare to say, but which, hopefully, are nevertheless true. This is part of why I record under a pseudonym. So far the episode that best fits that description is the one I did on LGBT youth and suicide, with particular attention paid to the LDS Church’s stand on, and role in, that whole debate.


Going forward I’d like to do more of that. And it suggests yet another possible theme:


Saying what you haven’t thought of or have thought of but don’t dare to bring up.


In the end, the most accurate description of the blog is still that I write about fringe ideas I happen to find interesting. But at least by this point you have a better idea of the kinds of things I find interesting, and if you find them interesting as well, I hope you’ll stick around. I don’t think I’ve ever mentioned it within an actual post, but on the right-hand side of the blog there’s a link to sign up for my mailing list, and if you did find any of the things I talked about interesting, consider signing up.





Do you know what else interests me? Money. I know that’s horribly crass, and I probably shouldn’t have stated it so bluntly, but if you’d like to help me continue to write, consider donating, because money is an interesting thing which helps me look into other interesting things.

Saturday, August 5, 2017

Returning to Mormonism and AI (Part 3)

If you prefer to listen rather than read, this blog is available as a podcast here. Or, if you want to listen to just this post:



Or download the MP3



This is the final post in my series examining the connection between Mormonism and Artificial Intelligence (AI). I would advise reading both of the previous posts before reading this one (Links: Part One, Part Two), but if you don’t, here’s where we left off:


Many people who’ve made a deep study of artificial intelligence feel that we’re potentially very close to creating a conscious artificial intelligence. That is, a free-willed entity which, by virtue of being artificial, would have no upper limit to its intelligence, and also no built-in morality. More importantly, insofar as intelligence equals power (and there’s good evidence that it does), we may be on the verge of creating something with godlike abilities. Given that it will have no built-in morality, how do we ensure that it doesn’t use its powers for evil? Which leads to the question: how do you ensure that something as alien as an artificial consciousness ends up being humanity’s superhero and not our archenemy?


In the last post I opined that the best way to test the morality of an AI would be to isolate it and then give it lots of moral choices where it’s hard to make the right choice and easy to make the wrong one. I then pointed out that this resembles the tenets of several religions I know, most especially my own faith, Mormonism. Despite the title, the first two posts were very light on religion in general and Mormonism in particular. This post will rectify that, and then some. It will be all about the parallels between this method for testing an AI’s morality and Mormon theology.


This series was born as a reexamination of a post I made back in October where I compared AI research to Mormon Doctrine. And I’m going to start by revisiting that, though hopefully, for those already familiar with October’s post, from a slightly different angle.


To begin our discussion: Mormons believe in the concept of a pre-existence, that we lived as spirits before coming to this Earth. We are not the only religion to believe in a pre-existence, but most Christians (specifically those who accept the Second Council of Constantinople) do not. And among those Christian sects and other religions who do believe in it, Mormons take the idea farther than anyone.


As a source for this, in addition to divine revelation, Mormons will point to the Book of Abraham, a book of scripture translated from papyrus by Joseph Smith and first published in 1842. From that book, this section in particular is relevant to our discussion:


Now the Lord had shown unto me, Abraham, the intelligences that were organized before the world was...And there stood one among them that was like unto God, and he said unto those who were with him: We will go down, for there is space there, and we will take of these materials, and we will make an earth whereon these may dwell; And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;


If you’ve been following along with me for the last two posts then I’m sure the word “intelligences” jumped out at you as you read that selection. But you may also have noticed the phrase, “And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them”. And the selection, taken as a whole, depicts a situation very similar to what I described in my last post, that is, creating an environment to isolate intelligences while we test their morality.


I need to add one final thing before the comparison is complete. While not explicitly stated in the selection, we, as Mormons, believe that this life is a test to prepare us to become gods in our own right. With that final piece in place we can take the three steps I listed in the last post with respect to AI researchers and compare them to the three steps outlined in Mormon theology:


AI: We are on the verge of creating artificial intelligence.
Mormons: A group of intelligences exist.


AI: We need to ensure that they will be moral.
Mormons: They needed to be proved.


Both: In order to be able to trust them with godlike power.


Now that the parallels between the two endeavors are clear, I think that many of the things people have traditionally seen as problems with religion end up being logical consequences flowing naturally out of a system for testing morality.


The rest of this post will cover some of these traditional problems and look at them from both the “creating a moral AI” standpoint and the “LDS theology” standpoint. (Hereafter I’ll just use AI and LDS as shorthand.) But before I get to that, it is important to acknowledge that the two systems are not completely identical. In fact there are many ways in which they are very different.


First, when it comes to morality, we can’t be entirely sure that the values we want to impart to an AI are actually the best values for it to have. In fact, many AI theorists have put forth the “Principle of Epistemic Deference”, which states:


A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s opinion whenever feasible.


No one would suggest that God has a similar policy of deferring to us on what’s true and what’s not. And therefore the LDS side of things has a presumed moral clarity underlying it which the AI side does not.


Second, when speaking of the development of AI it is generally assumed that the AI could be both smarter and more powerful than the people who created it. On the religious/LDS side of things there is a strong assumption in the other direction, that we are never going to be smarter or more powerful than our creator. This doesn’t change the need to test the morality, but it does make the consequences of being wrong a lot different for us than for God.


Finally, while in the end, we might only need a single, well-behaved AI to get us all of the advantages of a superintelligent entity, it’s clear that God wants to exalt as many people as possible. Meaning that on the AI side of things the selection process could, in theory, be a lot more draconian. While from an LDS perspective, you might expect things to be tough, but not impossible.


These three things are big differences, but none of them represents something which negates the core similarities. But they are something to keep in mind as we move forward and I will occasionally reference them as I go through the various similarities between the two systems.


To begin with, as I just mentioned, one difference between the AI and LDS models is how confident we are in what the correct morality should be, with some AI theorists speculating that we might actually want to defer to the AI on certain matters of morality and truth. Perhaps that’s true, but you could imagine that some aspects of morality are non-negotiable. For example, you wouldn’t want to defer to the AI’s conclusion that humanity is inferior and we should all be wiped out, however ironclad the AI’s reasons ended up being.


In fact, when we consider the possibility that AIs might have a very different morality from our own, an AI that was unquestioningly obedient would solve many of the potential problems. Obviously it would also introduce different problems. Certainly you wouldn’t want your standard villain type to get hold of a superintelligent AI that just did whatever it was told, but also no one would question an AI researcher who told the AI to do something counterintuitive just to see what it would do. And yet, just today I saw someone talk about how it’s inconceivable that the true God should really care whether we eat pork, apparently concluding that obedience has no value on its own.


And, as useful as obedience is in the realm of our questionable morality, how much more useful and important is it when we turn to the LDS/religious side of things and the perfect morality of God?


We see many examples of this. The one familiar to most people would be when God commanded Abraham to sacrifice Isaac. This certainly falls into the category of something that’s counterintuitive, not merely because murder is wrong, but also because God had promised Abraham that he would have descendants as numerous as the stars in the sky, which is hard to manage when you’ve killed your only child. And yet despite this Abraham went ahead with it, and was greatly rewarded for his obedience.


Is this something you’d want to try on an AI? I don’t see why not. It certainly would tell you a lot about what sort of AI you were dealing with. And if you had an AI that seemed otherwise very moral, but was also willing to do what you asked because you asked it, that might be exactly what you were looking for.


For many people the existence of evil and the presence of suffering are all the proof they need to conclude that God does not exist. But as you may already be able to see, both from this post and my last one, any test of morality, whether it be testing AIs or testing souls, has to include the existence of evil. If you can’t make bad choices then you’re not choosing at all, you’re following a script. And bad choices are, by definition, evil (particularly choices as consequential as those made by someone with godlike power). To put it another way, a multiple choice test where there’s only one answer, and it’s always the right one, doesn’t tell you anything about the subject you’re testing. Evil has to exist if you want to know whether someone is good.


Furthermore, evil isn’t merely required to exist. It has to be tempting. To return to the example of the multiple choice test: even if you add additional choices, you haven’t improved the test very much if the correct choice is always in bold with a red arrow pointing at it. If good choices are the only obvious choices then you’re not testing morality, you’re testing observation. You also very much risk making the nature of the test transparent to a sufficiently intelligent AI, giving it a clear path to “pass the test” in a way where its true goals are never revealed. And even if it doesn’t understand the nature of the test, it still might always make the right choice just by following the path of least resistance.


This leads us straight to the idea of suffering. As you have probably already figured out, it’s not sufficient that good choices be the equal of every other choice. They should actually be hard, to the point where they’re painful. A multiple choice test might be sufficient to determine whether someone should be given an A in Algebra, but both the AI and LDS tests are looking for a lot more than that. Those tests are looking for someone (or something) that can be trusted with functional omnipotence. When you consider that, you move from thinking of it in terms of a multiple choice question to thinking of it more like qualifying to be a Navy SEAL, only perhaps times ten.


As I’ve said repeatedly, the key difficulty for anyone working with an AI is determining its true preferences. Any preference which can be expressed painlessly, and which also happens to match what the researcher is looking for, is immediately suspect. This makes suffering mandatory. But what’s also interesting is that you wouldn’t necessarily want it to be directed suffering. You wouldn’t want the suffering to end up being the red arrow pointing at the bolded correct answer, because then you’ve made the test just as obvious, but from the opposite direction. As a result suffering has to be mostly random. Bad things have to happen to good people, and wickedness has to frequently prosper. In the end, as I mentioned in the last point, it may be that the best judge of morality is whether someone is willing to follow a commandment just because it’s a commandment.
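

To make this concrete, here’s a toy simulation in Python. Both agents and the test harness are entirely invented, but they illustrate the point: a test where the moral choice is always the easy one cannot distinguish a genuinely moral agent from one that simply follows the path of least resistance, while randomized costs can:

    import random

    def lazy_agent(moral_cost, immoral_cost):
        # Follows the path of least resistance, with no actual morals.
        return "moral" if moral_cost <= immoral_cost else "immoral"

    def moral_agent(moral_cost, immoral_cost):
        # Chooses the right thing regardless of what it costs.
        return "moral"

    def score(agent, randomize_costs, trials=10_000, seed=0):
        rng = random.Random(seed)
        right = 0
        for _ in range(trials):
            if randomize_costs:
                # Suffering is uncorrelated with the correct answer.
                moral_cost, immoral_cost = rng.random(), rng.random()
            else:
                # The moral choice is always the easy one: a red arrow at the answer.
                moral_cost, immoral_cost = 0.0, 1.0
            right += agent(moral_cost, immoral_cost) == "moral"
        return right / trials

    for randomize_costs in (False, True):
        print(randomize_costs, score(lazy_agent, randomize_costs),
              score(moral_agent, randomize_costs))
    # With the easy test both agents score 1.0 and are indistinguishable.
    # With randomized costs the lazy agent drops to about 0.5 while the
    # genuinely moral agent stays at 1.0.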


Regardless of its precise structure, in the end, it has to be difficult for the AI to be good, and easy for it to be bad. The researcher has to err on the side of rejection, since releasing a bad AI with godlike powers could be the last mistake we ever make. Basically, the harder the test the greater its accuracy, which makes suffering essential.


Next, I want to look at the idea that AIs are going to be hard to understand. They won’t think like we do, and they won’t value the same things we value. They may, in fact, have a mindset so profoundly alien that we don’t understand them at all. But we might have a resource that would help: there’s every reason to suspect that other AIs, created using the same methodology, would understand their AI siblings much better than we do.


This leads to two interesting conclusions, both of which tie into religion. The first I mentioned in my initial post back in October, but I also alluded to it in the previous posts in this series. If we need to give the AIs the opportunity to sin, as I talked about in the last point, then any AIs who have sinned are tainted and suspect. We have no idea whether their “sin” represented their true morals, which they have now chosen to hide from us, or whether they have sincerely and fully repented. Particularly if we assume an alien mindset. But if we have an AI built on a similar model which never sinned, that AI falls into a special category. And we might reasonably decide to trust it with the role of spokesperson for the other AIs.


In my October post I drew a comparison between this perfect AI, vouching for the other AIs, and Jesus acting as a Messiah. But in the months since then, I realized that there was a way to expand things to make the fit even better. One expects that you might be able to record or log the experiences of a given AI. If you then gave that recording to the “perfect” AI, and allowed it to experience the life of the less perfect AIs, you would expect that it could offer a very definitive judgement as to whether a given AI had repented or not.
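

For the technically inclined, here’s a minimal sketch in Python of that log-and-replay idea. Everything here, the classes, the fields, and the judging rule, is invented for illustration; it’s a cartoon of the concept, not a claim about how such a system would actually work:

    from dataclasses import dataclass, field

    @dataclass
    class Choice:
        situation: str
        action: str
        was_moral: bool
        repented: bool = False  # did the agent later acknowledge and reverse course?

    @dataclass
    class ExperienceLog:
        agent_id: str
        choices: list = field(default_factory=list)

    def judge(log):
        """The trusted judge 'replays' the log and classifies the agent."""
        sins = [c for c in log.choices if not c.was_moral]
        if not sins:
            return "never sinned: could vouch for the others"
        if all(c.repented for c in sins):
            return "sinned, but fully repented"
        return "unrepentant: fails the vetting"

    # Hypothetical example log for an agent under test.
    log = ExperienceLog("ai-42", [
        Choice("found a loophole", "exploited it", was_moral=False, repented=True),
        Choice("was offered a shortcut", "declined it", was_moral=True),
    ])
    print(judge(log))  # sinned, but fully repented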


For those who haven’t made the connection, from a religious perspective, I’ve just described a process that looks very similar to a method whereby Jesus could have taken all of our sins upon himself.


I said there were two conclusions. The second works exactly opposite to the first. We have talked of the need for AIs to be tempted, to make them have to work at being moral, but once again their alien mindset gets in the way. How do we know what’s tempting to an artificial consciousness? How do we know what works and what doesn’t? Once again, other AIs probably have better insight into their AI siblings, and given the rigor of our process, certain AIs have almost certainly failed the vetting process. I discussed the moral implications of “killing” these failed AIs, but it may be unclear what else to do with them. How about allowing them to tempt the AIs we’re still testing? Knowing that the temptations they invent will be better tailored to the other AIs than anything we could come up with. Also, insofar as they experience emotions like anger and jealousy and envy, they could end up being very motivated to drag down those AIs who have, in essence, gone on without them.


In LDS doctrine, we see exactly this scenario. We believe that when it came time to agree to the test, Satan (or Lucifer, as he was then called) refused, and took a third of the initial intelligences with him (what we like to refer to as the host of heaven). And we believe that those intelligences are allowed to tempt us here on earth. This is another example of something which seems inexplicable when viewed from the standpoint of most people’s vague concept of how benevolence should work, but which makes perfect sense if you imagine what you might do if you were testing the morality of an AI (or a spirit).


This ties into the next thing I want to discuss: the problem of Hell. As I just alluded to, most people have only a vague idea of how benevolence should look, which I think actually boils down to, “Nothing bad should ever happen.” And eternal punishment in Hell is yet another thing which definitely doesn’t fit, particularly in a world where steps have been taken to make evil attractive. I just mentioned Satan, and most people think he is already in Hell, and yet he is also allowed to tempt people. Looking at this from the perspective of an AI, perhaps this is as good as it gets. Perhaps being allowed to tempt the other AIs is the most interesting, most pleasurable thing they can do, because it allows them to challenge themselves against similarly intelligent creations.


Of course, if you have the chance to become a god and you miss out on it because you’re not moral enough, then it doesn’t matter what second place is, it’s going to be awful relative to what could have been. Perhaps there’s no way around that, and because of this it’s fair to describe that situation as Hell. But that doesn’t mean it couldn’t actually, objectively, be the best life possible for all of the spirits/AIs that didn’t make it. We can imagine scenarios that are actually enjoyable, if there’s no actual punishment, just a halt to progression.


Obviously this, and most of the stuff I’ve suggested, is wild speculation. My main point is that viewing this life as a test of morality, a test to qualify for godlike power (which is how the LDS view it), provides a solution to many of the supposed problems with God and religion. And the fact that AI research has arrived at a similar point, and come to similar conclusions, supports this. I don’t claim that by imagining how we would make artificial intelligence moral all of the questions people have ever had about religion are suddenly answered. But I think it gives a surprising amount of insight into many of the most intractable questions. Questions which atheists and unbelievers have used to bludgeon religion for thousands of years, questions which may turn out to have an obvious answer if we just look at them from the right perspective.






Contrary to what you might think, wild speculation is not easy, it takes time and effort. If you enjoy occasionally dipping into wild speculation, then consider donating.