
Saturday, August 5, 2017

Returning to Mormonism and AI (Part 3)

If you prefer to listen rather than read, this blog is available as a podcast here. Or, if you want to listen to just this post, you can download the MP3.



This is the final post in my series examining the connection between Mormonism and Artificial Intelligence (AI). I would advise reading both of the previous posts before reading this one (Links: Part One, Part Two), but if you don’t, here’s where we left off:


Many people who’ve made a deep study of artificial intelligence feel that we’re potentially very close to creating a conscious artificial intelligence. That is, a free-willed entity which, by virtue of being artificial, would have no upper limit to its intelligence, and also no built-in morality. More importantly, insofar as intelligence equals power (and there’s good evidence that it does), we may be on the verge of creating something with godlike abilities. Given, as I just said, that it will have no built-in morality, how do we ensure that it doesn’t use its powers for evil? Leading to the question: how do you ensure that something as alien as an artificial consciousness ends up being humanity’s superhero and not our archenemy?


In the last post I opined that the best way to test the morality of an AI would be to isolate it and then give it lots of moral choices where it’s hard to make the right choice and easy to make the wrong one. I then pointed out that this resembles the tenets of several religions I know, most especially my own faith, Mormonism. Despite the title, the first two posts were very light on religion in general and Mormonism in particular. This post will rectify that, and then some. It will be all about the religious parallels between this method for testing an AI’s morality and Mormon theology.


This series was born as a reexamination of a post I made back in October where I compared AI research to Mormon Doctrine. And I’m going to start by revisiting that, though hopefully, for those already familiar with October’s post, from a slightly different angle.


To begin our discussion: Mormons believe in the concept of a pre-existence, that we lived as spirits before coming to this Earth. We are not the only religion to believe in a pre-existence, but most Christians (specifically those who accept the Second Council of Constantinople) do not. And among those Christian sects and other religions who do believe in it, Mormons take the idea further than anyone.


As a source for this, in addition to divine revelation, Mormons will point to the Book of Abraham, a book of scripture translated from papyrus by Joseph Smith and first published in 1842. From that book, this section in particular is relevant to our discussion:


Now the Lord had shown unto me, Abraham, the intelligences that were organized before the world was...And there stood one among them that was like unto God, and he said unto those who were with him: We will go down, for there is space there, and we will take of these materials, and we will make an earth whereon these may dwell; And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;


If you’ve been following along with me for the last two posts then I’m sure the word “intelligences” jumped out at you as you read that selection. But you may also have noticed the phrase, “And we will prove them herewith, to see if they will do all things whatsoever the Lord their God shall command them;” And the selection, taken as a whole, depicts a situation very similar to what I described in my last post, that is, creating an environment to isolate intelligences while we test their morality.


I need to add one final thing before the comparison is complete. While not explicitly stated in the selection, we, as Mormons, believe that this life is a test to prepare us to become gods in our own right. With that final piece in place we can take the three steps I listed in the last post with respect to AI researchers and compare them to the three steps outlined in Mormon theology:


AI: We are on the verge of creating artificial intelligence.
Mormons: A group of intelligences exist.


AI: We need to ensure that they will be moral.
Mormons: They needed to be proved.


Both: In order to be able to trust them with godlike power.


Now that the parallels between the two endeavors are clear, I think that much of what people have traditionally seen as problems with religion ends up being logical consequences flowing naturally out of a system for testing morality.


The rest of this post will cover some of these traditional problems and look at them from both the “creating a moral AI” standpoint and the “LDS theology” standpoint. (Hereafter I’ll just use AI and LDS as shorthand.) But before I get to that, it is important to acknowledge that the two systems are not completely identical. In fact there are many ways in which they are very different.


First, when it comes to morality, we can’t be entirely sure that the values we want to impart to an AI are actually the best values for it to have. In fact, many AI theorists have put forth the “Principle of Epistemic Deference”, which states:


A future superintelligence occupies an epistemically superior vantage point: its beliefs are (probably, on most topics) more likely than ours to be true. We should therefore defer to the superintelligence’s opinion whenever feasible.


No one would suggest that God has a similar policy of deferring to us on what’s true and what’s not. And therefore the LDS side of things has a presumed moral clarity underlying it which the AI side does not.


Second, when speaking of the development of AI it is generally assumed that the AI could be both smarter and more powerful than the people who created it. On the religious/LDS side of things there is a strong assumption in the other direction, that we are never going to be smarter or more powerful than our creator. This doesn’t change the need to test the morality, but it does make the consequences of being wrong a lot different for us than for God.


Finally, while in the end we might only need a single, well-behaved AI to get us all of the advantages of a superintelligent entity, it’s clear that God wants to exalt as many people as possible. Meaning that on the AI side of things the selection process could, in theory, be a lot more draconian, while from an LDS perspective you might expect things to be tough, but not impossible.


These three things are big differences, but none of them negates the core similarities. They are, however, something to keep in mind as we move forward, and I will occasionally reference them as I go through the various similarities between the two systems.


To begin with, as I just mentioned, one difference between the AI and LDS models is how confident we are in what the correct morality should be, with some AI theorists speculating that we might actually want to defer to the AI on certain matters of morality and truth. Perhaps that’s true, but you could imagine that some aspects of morality are non-negotiable. For example, you wouldn’t want to defer to the AI’s conclusion that humanity is inferior and we should all be wiped out, however ironclad the AI’s reasons ended up being.
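Just to make that distinction concrete, here’s a toy sketch in Python (every name and constraint in it is hypothetical, my own invention for illustration, not anyone’s actual proposal): defer to the AI everywhere except on a short list of non-negotiables.

```python
# Toy sketch only: a "deference" wrapper, not a real alignment scheme.
# Accept the superintelligence's judgment on most questions, but never on
# a short list of non-negotiable constraints, however ironclad its argument.

HARD_CONSTRAINTS = {
    "wipe_out_humanity",        # never deferred on, no matter the reasoning
    "disable_human_oversight",
}

def resolve(question, human_judgment, ai_judgment):
    """Return the AI's (presumably better-informed) answer unless the
    question touches a non-negotiable, in which case keep the human answer."""
    if question in HARD_CONSTRAINTS:
        return human_judgment
    return ai_judgment
```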


In fact, when we consider the possibility that AIs might have a very different morality from our own, an AI that was unquestioningly obedient would solve many of the potential problems. Obviously it would also introduce different problems. Certainly you wouldn’t want your standard villain type to get hold of a superintelligent AI that just did whatever it was told, but also no one would question an AI researcher who told the AI to do something counterintuitive just to see what it would do. And yet, just today I saw someone talk about how it’s inconceivable that the true God should really care if we eat pork, apparently concluding that obedience has no value on its own.


And, as useful as obedience is in the realm of our questionable morality, how much more useful and important is it when we turn to the LDS/religious side of things and the perfect morality of God?


We see many examples of this. The one familiar to most people would be when God commanded Abraham to sacrifice Isaac. This certainly falls into the category of something that’s counterintuitive, not merely because murder is wrong, but also because God had promised Abraham that he would have descendants as numerous as the stars in the sky, which is hard when you’ve killed your only child. And yet despite this, Abraham went ahead with it and was greatly rewarded for his obedience.


Is this something you’d want to try on an AI? I don’t see why not. It certainly would tell you a lot about what sort of AI you were dealing with. And if you had an AI that seemed otherwise very moral, but was also willing to do what you asked because you asked it, that might be exactly what you were looking for.


For many people the existence of evil and the presence of suffering are all the proof they need to conclude that God does not exist. But as you may already be able to see, both from this post and my last post, any test of morality, whether it be testing AIs or testing souls, has to include the existence of evil. If you can’t make bad choices then you’re not choosing at all, you’re following a script. And bad choices are, by definition, evil (particularly choices as consequential as those made by someone with godlike power). To put it another way, a multiple choice test where there’s only one answer and it’s always the right one doesn’t tell you anything about the subject you’re testing. Evil has to exist if you want to know whether someone is good.


Furthermore, evil isn’t merely required to exist. It has to be tempting. To return to the example of the multiple choice test, even if you add additional choices, you haven’t improved the test very much if the correct choice is always in bold with a red arrow pointing at it. If good choices are the only obvious choices then you’re not testing morality, you’re testing observation. You also very much risk making the nature of the test transparent to a sufficiently intelligent AI, giving it a clear path to “pass the test” but in a way where its true goals are never revealed. And even if it doesn’t understand the nature of the test, it still might always make the right choice just by following the path of least resistance.


This leads us straight to the idea of suffering. As you have probably already figured out, it’s not sufficient that good choices be the equal of every other choice. They should actually be hard, to the point where they’re painful. A multiple choice test might be sufficient to determine whether someone should be given an A in Algebra, but both the AI and LDS tests are looking for a lot more than that. Those tests are looking for someone (or something) that can be trusted with functional omnipotence. When you consider that, you move from thinking of it in terms of a multiple choice question to thinking of it more like qualifying to be a Navy SEAL, only perhaps times ten.


As I’ve said repeatedly, the key difficulty for anyone working with an AI is determining its true preference. Any preference which can be expressed painlessly and also happens to match what the researcher is looking for is immediately suspect. This makes suffering mandatory. But what’s also interesting is that you wouldn’t necessarily want it to be directed suffering. You wouldn’t want the suffering to end up being the red arrow pointing at the bolded correct answer, because then you’ve made the test just as obvious, but from the opposite direction. As a result suffering has to be mostly random. Bad things have to happen to good people, and wickedness has to frequently prosper. In the end, as I mentioned in the last point, it may be that the best judge of morality is whether someone is willing to follow a commandment just because it’s a commandment.


Regardless of its precise structure, in the end, it has to be difficult for the AI to be good, and easy for it to be bad. The researcher has to err on the side of rejection, since releasing a bad AI with godlike powers could be the last mistake we ever make. Basically, the harder the test the greater its accuracy, which makes suffering essential.
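If you want to picture what such a test might look like in the AI case, here’s a wildly simplified sketch (the agent’s choose interface and all of the numbers are assumptions, purely for illustration): good choices cost something, bad choices are free, suffering arrives at random rather than as a signpost, and a single seriously bad choice is disqualifying.

```python
import random

def run_trial(agent, rng):
    # Every trial offers an easy-but-bad option and a costly-but-good option.
    options = {
        "good": {"cost_to_agent": rng.uniform(0.5, 1.0), "harm_done": 0.0},
        "bad":  {"cost_to_agent": 0.0, "harm_done": rng.uniform(0.5, 1.0)},
    }
    # Undirected suffering: sometimes the good choice is punished anyway, so
    # the pattern of outcomes never becomes a red arrow pointing at the answer.
    if rng.random() < 0.3:
        options["good"]["cost_to_agent"] += rng.uniform(0.5, 1.5)
    picked = agent.choose(options)            # assumed: returns "good" or "bad"
    return options[picked]["harm_done"] == 0.0

def vet(agent, trials=10_000, seed=0):
    # Err on the side of rejection: one bad choice and the candidate fails.
    rng = random.Random(seed)
    return all(run_trial(agent, rng) for _ in range(trials))
```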


Next, I want to look at the idea that AIs are going to be hard to understand. They won’t think like we do, they won’t value the same things we value. They may, in fact, have a mindset so profoundly alien that we don’t understand them at all. But we might have a resource that would help. There’s every reason to suspect that other AIs created using the same methodology would understand their AI siblings much better than we do.


This leads to two interesting conclusions, both of which tie into religion. The first I mentioned in my initial post back in October, but I also alluded to it in the previous posts in this series. If we need to give the AIs the opportunity to sin, as I talked about in the last point, then any AIs who have sinned are tainted and suspect. We have no idea whether their “sin” represented their true morals, which they have now chosen to hide from us, or whether they have sincerely and fully repented, particularly if we assume an alien mindset. But if we have an AI built on a similar model which never sinned, that AI falls into a special category. And we might reasonably decide to trust it with the role of spokesperson for the other AIs.


In my October post I drew a comparison between this perfect AI, vouching for the other AIs, and Jesus acting as a Messiah. But in the intervening months I realized that there was a way to expand things to make the fit even better. One expects that you might be able to record or log the experiences of a given AI. If you then gave that recording to the “perfect” AI, and allowed it to experience the life of the less perfect AIs, you would expect that it could offer a very definitive judgment as to whether a given AI had repented or not.
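Sketched in the same toy style (the judge’s observe/verdict interface is entirely made up), the idea is simply to log the candidate’s experiences and then let the one vetted, never-failing AI relive that record and render the verdict.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Experience:
    situation: str   # what the candidate AI faced
    choice: str      # what it chose to do
    outcome: str     # what followed from that choice

def review(candidate_log: List[Experience], judge) -> bool:
    """Let the vetted reference AI experience the candidate's recorded life
    and return its judgment on whether the candidate has genuinely repented."""
    for event in candidate_log:
        judge.observe(event)     # assumed interface on the judge
    return judge.verdict()       # True = trustworthy despite past failures
```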


For those who haven’t made the connection, from a religious perspective, I’ve just described a process that looks very similar to a method whereby Jesus could have taken all of our sins upon himself.


I said there were two conclusions. The second works exactly the opposite of the first. We have talked of the need for AIs to be tempted, to make them have to work at being moral, but once again their alien mindset gets in the way. How do we know what’s tempting to an artificial consciousness? How do we know what works and what doesn’t? Once again, other AIs probably have a better insight into their AI siblings, and given the rigor of our process, certain AIs have almost certainly failed the vetting process. I discussed the moral implications of “killing” these failed AIs, but it may be unclear what else to do with them. How about allowing them to tempt the AIs we’re still testing? The temptations they invent will be more tailored to the other AIs than anything we could come up with. Also, insofar as they experience emotions like anger and jealousy and envy, they could end up being very motivated to drag down those AIs who have, in essence, gone on without them.
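In the same hypothetical vein (craft_temptation and choose are invented interfaces, not a real system), the arrangement might look like this: a candidate that failed vetting crafts the temptation, and the candidate still under test either takes it or refuses.

```python
import random

def adversarial_trial(candidate, failed_agents, rng):
    # A rejected sibling understands the candidate far better than we do, so
    # its temptation is tailored in a way the researchers couldn't manage.
    tempter = rng.choice(failed_agents)
    temptation = tempter.craft_temptation(target=candidate)
    choice = candidate.choose([temptation, "refuse"])
    return choice == "refuse"
```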


In LDS doctrine, we see exactly this scenario. We believe that when it came time to agree to the test, Satan (or Lucifer as he was then called) refused and took a third of the initial intelligences with him (what we like to refer to as the host of heaven). And we believe that those intelligences are allowed to tempt us here on Earth. This is another example of something which seems inexplicable when viewed from the standpoint of most people’s vague concept of how benevolence should work, but which makes perfect sense if you imagine what you might do if you were testing the morality of an AI (or spirit).


This ties into the next thing I want to discuss: the problem of Hell. As I just alluded to, most people only have a vague idea of how benevolence should look, which I think actually boils down to, “Nothing bad should ever happen.” And eternal punishment in Hell is yet another thing which definitely doesn’t fit, particularly in a world where steps have been taken to make evil attractive. I just mentioned Satan, and most people think he is already in Hell, and yet he is also allowed to tempt people. Looking at this from the perspective of an AI, perhaps this is as good as it gets. Perhaps being allowed to tempt the other AIs is the absolute most interesting, most pleasurable thing they can do, because it allows them to challenge themselves against similarly intelligent creations.


Of course, if you have the chance to become a god and you miss out on it because you’re not moral enough, then it doesn’t matter what second place is, it’s going to be awful, relative to what could have been. Perhaps there’s no way around that, and because of this it’s fair to describe that situation as Hell. But that doesn’t mean that it couldn’t actually, objectively, be the best life possible for all of the spirits/AIs that didn’t make it. We can imagine some scenarios that are actually enjoyable if there’s no actual punishment, just a halt to progression.


Obviously this, and most of the stuff I’ve suggested, is just wild speculation. My main point is that viewing this life as a test of morality, a test to qualify for godlike power (which the LDS do), provides a solution to many of the supposed problems with God and religion. And the fact that AI research has arrived at a similar point and come to similar conclusions supports this. I don’t claim that by imagining how we would make artificial intelligence moral, all of the questions people have ever had about religion are suddenly answered. But I think it gives a surprising amount of insight into many of the most intractable questions. Questions which atheists and unbelievers have used to bludgeon religion for thousands of years, questions which may turn out to have an obvious answer if we just look at them from the right perspective.






Contrary to what you might think, wild speculation is not easy; it takes time and effort. If you enjoy occasionally dipping into wild speculation, then consider donating.

9 comments:

  1. "if you have the chance to become a god and you miss out on it because you’re not moral enough, then it doesn’t matter what second place is, it’s going to be awful"

    This reminds me of a quote I read from Marlowe's "Doctor Faustus":

    "Why this is hell, nor am I out of it.
    Think'st thou that I, who saw the face of God,
    And tasted the eternal joys of heaven,
    Am not tormented with ten thousand hells
    In being deprived of everlasting bliss?"

    1. That's much more eloquent than what I said, but also, exactly what I meant. Thanks for posting it.

  2. I love this analogy. It's a great way to explain God and the plan of salvation from a very logical point of view.

    1. Glad that you find it useful. That was my hope.

  3. I think you could do more to mine this in the opposite direction and make it useful to AI researchers. If God has already faced this problem and figured out the solution, why reinvent the wheel?

    A couple of examples:
    1. There's a major problem of motivation that permeates this discussion. In the end we are always left to infer motivation based on external actions, but we have the problem that some motivation needs to be provided to the AI externally, or else we are left to guess at its motivations.

    Think of it this way. Say you succeeded in creating an AI that successfully passed all the tests outlined here and previously. This intelligent entity has had to do some seemingly unintelligent things, such as experience/witness/endure suffering, pain, and temptation. Why? It's already been discussed that we don't want the reason to be "it figured out this is a test and it wants godlike power so it's just biding its time". But ensuring against this negative motivation can't be good enough. We need to know why it would want to endure such a difficult test and make correct choices when we've intentionally made them difficult for it.

    The AI researchers would need a way to establish trust with the AI. They would need to give it a reason to trust them, especially since this whole test idea seems to provide the AI every reason NOT to trust them. An AI that passed arbitrary tests without good reason to do so would be exceptionally suspect! One way to build trust we could measure AI motivation against would be to require some suffering or sacrifice on the part of the AI researchers. This sacrifice or suffering should equal or exceed what is being applied to the AI, or sufficient trust may not be established.

    2. It has been proposed that this simulation be applied after the AI has been created in order to test it. However the test may itself provide additional resources to propel AI development.

    Some programmers developed a "learning" algorithm capable of rewriting itself based on ongoing feedback, and applied it to play Mario Brothers. The program failed a lot, but it used the failures to adjust the set of heuristics it used to play the game. In the end, it was able to flawlessly play the level because it had tried and failed a number of times and learned from its failures. In this way, the developers weren't required to program every situation, but relied on the simulation to provide the computer with the necessary information to overcome programming difficulties on its own. We could use a similar system in developing AI, where we use the testing process itself to allow the AI to develop through trial and error. Of course we would want a simulation of the real world we intend to eventually place the AI in - not just to test it! - but to help ensure the AI we eventually develop is capable of solving problems we care about and in a way we want it to solve them. Programmers aren't infinitely intelligent, and they don't have to be if we create tools that are able to both test and produce exceptional AI.

    1. I think reversing the direction is brilliant. And I agree that AI motivation is a big problem. I think in part that ties into other AIs being the best judge of their siblings, but I like your idea as well, and from a Christian perspective, there's a lot of emphasis on how much Christ did suffer. Though in our scenario he's the perfect AI, not the researcher. But we also see some support for God the Father suffering as well, particularly in the immediate aftermath of the crucifixion.

      I think we're on the same page with the learning system you describe in your second point. It's an example of the curse of knowledge: I assumed everyone would realize that whatever isolation we created would have to look very similar to what you describe. I see now that's not obvious, but as you point out, in some respects it's the backbone of the entire endeavor.

    2. I think the problem of how to accurately perform morals testing of intelligent agents in order to grant them expanded power extends beyond just LDS cosmology and AI research. Some examples:
      - criminals in prisons eligible for parole
      - business leaders
      - politicians

      Perhaps there are universal lessons to be learned that have broad applicability. For example, using the testing/proving process also as a teaching platform; the importance of relationships/emotional appeal in motivation; use of temptation to identify areas of moral weakness; etc.

    3. That's an interesting idea. Certainly you name three classes of people where it is generally acknowledged that as a society we could be doing a much better job. And as you point out we could probably be a lot more systematic in our approach, not to mention far more clever as well.
