Saturday, April 7, 2018

Further Lessons in Comparing AI Risk and the Plan of Salvation




On the same day as this post goes live, I'll be at the annual conference of the Mormon Transhumanist Association (MTA). You may remember my review of last year's MTA Conference. This year I'm actually one of the presenters. I suspect that they may not have read last year's review (or any of my other critical articles), or they may have just not made the connection. But also, to their credit, they're very accepting of all manner of views, even critical ones, so perhaps they know exactly who I am. I don't know; I never got around to asking.


The presentation I'm giving is on the connection between AI Risk and the LDS Plan of Salvation, a subject I covered extensively in several past posts. I don't think the presentation adds much to what I already said in those posts, so there wouldn't be much point in including it here. (If you're really interested, email me and I'll send you the Google slide deck.) However, my presentation does directly tie into some of the reservations I have about the MTA, and so, given that perhaps a few of them will be interested enough in my presentation to come here and check things out, I thought this would be a good opportunity to extend what I said in the presentation and look at what sort of conclusions might follow if we assume that life is best viewed as similar to the process for reducing AI Risk.


As I mentioned, I covered the initial subject (the one I presented on today) at some length already, but for those who need a quick reminder or who are just joining us, here's what you need to know:


1- We may be on the verge of creating an artificial superintelligence.
2- By virtue of its extreme intelligence, this AI would have god-like power.
3- Accordingly, we have to ensure that the superintelligence will be moral, i.e. that it won't destroy us.


Mormons believe that this life is just such a test of "intelligences": a test of their morality in preparation for eventually receiving god-like powers. (Though I think I'm the first to explicitly point out the similarities between AI Risk and the LDS Plan of Salvation.) Having made that connection, my argument is that many things previously considered strong arguments against, or problems with, religion (e.g. suffering, evil, Hell, etc.) end up being essential components on the path to trusting something with god-like power. Considering these problems in this new light was the primary subject of the presentation I gave today. The point of this post is to go farther, and consider what further conclusions we might be able to draw from this comparison, particularly as it relates to the project of Mormon Transhumanism.


Of course, everything I say going forward is going to be premised on accepting the LDS Plan of Salvation (more accurately, my specific interpretation of it) and the connections I'm drawing between it and AI Risk. I assume many are not inclined to do this, but if you can set your reservations aside for the moment, I think there's some interesting intellectual territory to cover.


All of my thinking proceeds from the idea that one of the methods you're going to try as an Artificial Intelligence Researcher (AIR) is isolating your AI: limiting the damage a functionally amoral superintelligence can cause by cutting it off from its ability to cause that harm, at least in the real world.


(Now of course many people have argued that it may be difficult to keep an AI in a box, so to speak, but if the AIR is God and we're the intelligences, presumably that objection goes away.)


It's easy to get fixated on this isolation, but the isolation is a means to an end, not an end in itself. It's not necessary for its own sake; it's necessary because we assume that the AI already has god-like intelligence, and we're trying to keep it from having a god-like impact until it has god-like morals. Accordingly, we have three pieces to the puzzle:


1- Intelligence
2- Morals
3- Impact


What happens when we consider those three attributes with respect to humans? It's immediately obvious from the evidence that we're way out ahead on 3: humanity has already made significant strides towards having the ability to create a god-like impact, without much evidence that we have made similar strides with attributes 1 and 2. The greatest example of that is nuclear weapons. Trump and Putin could, separately or together, have a god-like impact on the world. And the morality of such an act would be the opposite of god-like, with the intelligence of the act not far behind.

Now I would assume that God isn't necessarily worried about any of the things we worry about when it comes to superintelligence. But if there were going to be a concern (perhaps even just amongst ourselves) it would be the same as the concern of the AIRs: that we end up causing god-like impacts before we have god-like morals. Meaning that the three attributes are not all equally important. I believe any objective survey of LDS scripture, prophetic counsel, or general conference talks would conclude that the overwhelming focus of the church is on item 2, morals. If you dig a little deeper you can also find writings about the importance of intelligence, but I think you'll find very little related to having a god-like impact.


I suspect at this point I need to spell out what I mean by that phrase. I've already given the example of nuclear war; on the negative side of things I could add a whole host of environmental effects to that. On the positive side you have the green revolution, the internet, skyscrapers, rising standards of living, etc. Looking towards the future we can add immortality, brain-uploading, space colonization, and potentially AI, though that could go either way.


All of these are large-scale impacts, and that's the kind of thing I'm talking about: things historians could be discussing in hundreds of years. LDS/Mormon doctrine does not offer much encouragement in favor of making these sorts of impacts. In fact, if anything, it comes across as much more personal, and dispenses advice about what we should do if someone sues us for our cloak, or the benefits of saving even one soul, or what we should do if we come across someone who has been left half dead by robbers. All exhortations which apply to individual interactions. There's essentially nothing about changing the world on a large scale through technology, and arguably what advice is given is strongly against it. Of course, as you can probably guess, I'm talking about the Tower of Babel. I did a whole post on the idea that the Tower of Babel does apply to the MTA, so I won't rehash it here. But the point of all of this is that I get the definite sense that the MTA has prioritized the impact piece of the equation for godhood to the detriment of the morality piece, which, for an AIR monitoring the progress of a given intelligence, ends up being precisely the sort of thing you would want to guard against.


As an example of what I'm talking about, consider the issue of immortality, something that is high on the Transhumanist list as well as the Mormon Transhumanist list. Now to be clear, all Mormons believe in eventual immortality; it's just that most of them believe you have to die first and then come back. The MTA hopes to eliminate the "dying first" part. This is a laudable goal, and one that would have an enormous impact, but that's precisely the point I was getting at above: allowing god-like impacts before you have god-like morality is the thing we're trying to guard against in this model. Also, "death" appears to have a very clear role in this scenario, insofar as tests have to end at some point. If you're an AIR this is important if only for entirely mundane reasons like scheduling, having limited resources, and most of all having a clear decision point. But I assume you would also be worried that the longer a "bad" AI has to explore its isolation, the more likely it is to be able to escape. Finally, and perhaps most important for our purposes, there's significant reason to believe that morality becomes less meaningful if you allow an infinite time for it to play out.


If this were just me speculating on the basis of the analogy, you might think that such concerns are pointless, or that they don't apply once we replace our AIR with God. But it turns out that something very similar is described in the Book of Mormon, in Alma chapter 42. The entire chapter speaks to this point, and it's probably worth reading in its entirety, but here is the part which speaks most directly to the subject of immortality.


...lest he should put forth his hand, and take also of the tree of life, and eat and live forever, the Lord God placed cherubim and the flaming sword, that he should not partake of the fruit—


And thus we see, that there was a time granted unto man to repent, yea, a probationary time, a time to repent and serve God.


For behold, if Adam had put forth his hand immediately, and partaken of the tree of life, he would have lived forever, according to the word of God, having no space for repentance; yea, and also the word of God would have been void, and the great plan of salvation would have been frustrated.


But behold, it was appointed unto man to die...


At a minimum, one gets the definite sense that death is important. But maybe it's still not clear why. The key is that phrase "space for repentance". There needs to be a defined time during which morality is established. Later in the chapter the term "preparatory state" is used a couple of times, as is the term "probationary state". Both phrases point to a test of a specific duration, a test that will definitely determine one way or the other whether an intelligence can be trusted with god-like power. Because while it's not clear that this is necessarily the case with God, with respect to artificial intelligences, once we give them god-like power we can't take it back. The genie won't go back in the bottle.


To state it more succinctly, this life is not a home for intelligences, it’s a test of intelligences, and tests have to end.


It is entirely possible that I'm making too much of the issue of immortality, particularly since true immortality is probably still a long way off, and I wouldn't want to stand in the way of medical advances which could improve the quality of life. (Though I think there's a good argument to be made that many recent advances have extended life without improving it.) Also, I think that if death really is a crucial part of God's Plan, immortality won't happen regardless of how many cautions I offer or how much effort the transhumanists put forth. (Keeping in mind the assumptions I mentioned above.)


Of more immediate concern might be the differences in opinion between the MTA and the LDS Leadership, which I've covered at some length in those previous posts I mentioned at the beginning. But, to highlight just one issue I spoke about recently, the clear instruction from the church is that its leaders should counsel against elective transsexual surgery, while as far as I can tell (see my review of the post-genderism presentation from last year) the MTA views "Gender Confirmation Surgery" as one of the ways in which they can achieve the "physical exaltation of individuals and their anatomies" (that's from their affirmation). Now I understand where they're coming from. It certainly does seem like the "right" thing to do is to allow people the freedom to choose their gender, and to allow gay people the freedom to get married in the temple (another thing the LDS Leadership forbids). But let me turn to another story from the scriptures. This time we'll go to the Bible.


In the Old Testament there’s a classic story concerning Samuel and King Saul. King Saul is commanded to:


...go and smite Amalek, and utterly destroy all that they have, and spare them not; but slay both man and woman, infant and suckling, ox and sheep, camel and ass.


But rather than destroying everything, Saul:


spared...the best of the sheep, and of the oxen, and of the fatlings, and the lambs, and all that was good, and would not utterly destroy them: but every thing that was vile and refuse, that they destroyed utterly.


He does this because he figures that God will forgive him for disobeying, once he sacrifices all of the fatlings and lambs, etc. But in fact this act is where God decides that making Saul the King was a mistake. And when Samuel finally shows up he tells the King:


And Samuel said, Hath the Lord as great delight in burnt offerings and sacrifices, as in obeying the voice of the Lord? Behold, to obey is better than sacrifice, and to hearken than the fat of rams.


I feel like this Biblical verse might be profitably placed in a very visible location in all AIR offices. Because when it comes down to it, no matter how good the AI is (or thinks it is) or how clever it ends up being, in the end the most important thing might be that if you tell the AI to absolutely never do X, you want it to absolutely never do X.


You could certainly imagine an AI pulling a "King Saul". Perhaps if we told it to solve global warming it might decide to trigger massive volcanic eruptions. Or if we told it to solve the population problem, we could end up with a situation that Malthus would have approved of, but which the rest of the world finds abhorrent. Even if, in the long run, the AI assures us that the math works out. And it's likely that our demands on these issues would seem irrational to the AI, even evil. But for good or for ill, humanity definitely has some values which should supersede behaviors which the AI might otherwise be naturally inclined to adopt, or which, through its own reasoning, it might conclude are the moral choice. If we can accept that this is a possibility with potential superintelligences, how much more could it be the case when we consider the commandments of God, who is a lot more intelligent and moral than we are?
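

To make the "never do X" point concrete, here's a minimal sketch of the difference between an agent that treats a prohibition as just one more term in its utility calculation and one that treats it as a hard veto. (This is a toy illustration, not anyone's actual alignment proposal; the action names, utility numbers, and the FORBIDDEN set are all hypothetical.)

```python
# Toy sketch of a hard behavioral constraint. Everything here
# (action names, utilities, the FORBIDDEN set) is hypothetical.

FORBIDDEN = {"trigger_volcanic_eruptions", "engineer_population_collapse"}

def choose_action(candidates):
    """Pick the highest-utility action that isn't absolutely forbidden.

    candidates maps action names to the agent's own utility estimates.
    A "King Saul" agent would take the argmax and hope to be forgiven;
    an obedient agent filters first and maximizes second.
    """
    permitted = {a: u for a, u in candidates.items() if a not in FORBIDDEN}
    if not permitted:
        return None  # refuse to act rather than violate a constraint
    return max(permitted, key=permitted.get)

# The agent's own math may insist the forbidden option "works out"...
actions = {
    "trigger_volcanic_eruptions": 9000,  # enormous estimated utility
    "deploy_carbon_capture": 750,
}

# ...but the veto supersedes the utility calculation.
print(choose_action(actions))  # -> deploy_carbon_capture
```

The structural point is that the filter comes before the maximization, so no utility estimate, however large, can buy its way past the prohibition. That is exactly the property the story of Saul suggests is being tested for.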


If we accept the parallel, then we should accept exactly this possibility: that something similar might be happening with God. That there may be things we are being commanded not to do, but which seem irrational or even evil. Possibly this is because we are working from a very limited perspective. But it's also possible that we have been given certain commandments which are irrational, or perhaps just silly, and it's not our morality or intelligence being tested, but our obedience. As I just pointed out, a certain level of blind obedience is probably an attribute we want our superintelligence to have. The same situation may exist with respect to God. And it is clear that obedience, above and beyond everything I've said here, is an important topic in religion. The LDS Topical Guide lists 120 scriptures under that heading, and cross-references an additional 25 closely related topics, which probably have a similar number of scriptures attached as well.


Here at last we return to the idea I started this section with. I know there are many things which seem like good ideas. They are rational, and compassionate, and exactly the sort of thing it seems like we should be doing. I mentioned as examples supporting people in "gender confirmation surgery" and pressing for gay marriages to be solemnized in the temple. But if we look at AI Risk in connection with the story of King Saul, we can also see that maybe this is a test of our obedience. I am not wise enough to say whether it is or not, and everyone has to chart their own path, listen to their own conscience, and do the best they can with what they've got. But I will say that I don't think it's unreasonable to draw the conclusion from this comparison that tests of obedience are something we should expect, and that they may not always make as much sense as we would like.


At this point, it’s 6 am the morning of the conference where I’ll be presenting, which basically means that I’m out of time. There were some other ideas I wanted to cover, but I suppose they’ll have to wait for another time.


I'd like to end by relating an analogy I've used before, one which has been particularly clarifying for me when thinking about the issues of transhumanism and where our efforts should be spent.


Imagine that you're 15 and the time has come to start preparing to get your driver's license. Now, you just happen to have access to an auto shop. Most people (now and throughout history) have not had access to such a shop. But with that access you think you might be able to build your own car. Now maybe you can and maybe you can't. Building a car from scratch is probably a lot harder than you think. But if, by some miracle, you are able to build a car, does that also give you the qualifications to drive it? Does building a car give you knowledge of the rules (morality) necessary to safely drive it? No. Studying for and passing the driver's license test is what (hopefully) gives you that. And while I don't think it's bad to study auto mechanics at the same time as studying for your driver's license test, the one may distract from the other, particularly if you're trying to build an entire car, which is a very time-consuming process.


God has an amazing car waiting for us, much better than what we could build ourselves, and I think he's less interested in having us prove we can build our own car than in having us show that we're responsible enough to drive safely.






I really was honored to be invited to present at the MTA conference, and I hope I have not generated any hard feelings with what I’ve written, either now or in the past. Of course, one way to show there are no hard feelings is to donate.


That may have crossed a line. I'm guessing that with that naked cash grab, even if there weren't any hard feelings before, there are now.

19 comments:

  1. Very thought-provoking. I do wonder about your choice of words in: "God decides that making Saul the King was a mistake." Since God is all-knowing, we have to believe he doesn't make mistakes. He had to have known Saul would screw up eventually, just as Christ knew Judas Iscariot would betray Him, yet called him as an apostle anyway. I think it's proof that God can have a use for even deeply flawed servants, though He will never allow them to disrupt His plan.

    Replies
    1. I thought of the same thing when I was writing it. And perhaps there's a better way to phrase it, though if you read the original scriptural account it says, "It repenteth me that I have set up Saul to be king: for he is turned back from following me, and hath not performed my commandments." Which strongly implies second thoughts if nothing else, and maybe God saw that turning point was going to eventually arrive, and we just aren't told that.

      But in any case, thanks for the kind words. And yes, I certainly hope that God has a use even for deeply flawed individuals...

    2. My understanding was that God was perfectly happy to continue the system of judges, but the Israelites wanted a king, since that was fashionable at the time. God said, "okay, but if you do, make sure you do it exactly as I instruct, and even then it'll probably not turn out how you want..."

    3. That is a good point. And does speak to a certain level of foreknowledge.

  2. I don't know where people get this idea that we're anywhere near living forever. I feel like this is a mistake a lot of futurists make: there is a difference between life expectancy and lifespan. Life expectancy is the average number of years a person in a given population lives. This is heavily influenced by infant mortality rates, such that moderate improvements in childhood or neonatal care boost this number significantly.

    Meanwhile, lifespan is how long a person can live. This hasn't really changed with modern medicine, and I'm unaware of any big scientific breakthroughs that could reasonably be expected to do so. I've heard people throw out a lot of wishful thinking on things like telomeres, but that's mostly the kind of theory people make when they've only heard enough to speculate, but don't know enough to realize the hype is all bunk. Kind of like the idea that putting collagen in your shampoo is going to strengthen your hair.

    "But collagen is a strengthening component!" "Yeah, that's not how any of this works."

    All I'm saying is that I'm sceptical that we're anywhere near having to worry about abnormally long lifespans any time within the current technological horizon.

    Replies
    1. They did have a presentation from the CEO of https://bioviva-science.com/ at the conference, and she was claiming that they could get 33 additional years by slowing down aging. And she also basically said that telomeres were NOT the way to do it, that you could have short telomeres for quite a while and still not die. Though she had also undergone gene therapy herself, and one of the things she used to illustrate its effectiveness was the length of her telomeres.

      If, and this is a big if, they can get an additional 33 years, that's not at the level of science fiction, but it is a pretty big chunk nonetheless; it means the 1% would die on average at 120 rather than 87.

    2. I visited the website, and I am not convinced they will extend lifespans by any significant degree. It's good that they're pursuing gene therapy. That's a tough field, given regulatory and incentive hurdles in clinical research and pharmaceuticals. I hope they beat the odds and are wildly successful. But even if they are, they're not going to extend lifespans.

      Here's their strategy, from what I could glean: combine a multiplex assay and big data to generate an "aging" biomarker. Then use that biomarker to see what levers we can pull to affect aging.

      Here's why I don't think this will work:
      1. Biomarkers are bad endpoints. I recommend the book Ending Medical Reversal for a better discussion on this topic. We're constantly getting burned because biomarkers often don't reflect underlying biology as expected. We keep falling for it because the alternative is to admit, "we don't know how to solve this yet."
      2. Aging is multi-causal by nature. This company specified on their website they're targeting monogenic diseases. But the most intractable diseases, and aging in general, are all multi-causal. Their approach does nothing to solve the problem that multi-causal/multigenic is a tough nut to crack. Biological mechanisms are like ridiculous Rube Goldberg machines with multiple interacting pathways, finely tuned. The more general (complex) the target, the more difficult it is to learn anything useful through normal research methods.
      3. Aging is a natural process, even though we often treat it like a disease. Interfering with that natural process is likely to produce cascading unintended consequences. Curing aging is like trying to redesign that Rube Goldberg machine from a device that manufactures cats to one that produces cold fusion energy. Your objectives are not aligned with those of the original inventor, and you'll likely have to tear down a larger swath of the edifice than you originally thought would be necessary. And the machine will start working against your aims.

      Take the immune system, for example. As you get older, you gradually stop making the cells that provide you with adaptive immunity. It's not clear why, other than, "your body wants to get old and die". Whatever the reason, aging progressively breaks your defense against disease, like getting pneumonia from the common cold. So a cure for the common cold, or a better treatment for geriatric pneumonia, won't extend lives. The reason old people die like this isn't the cold; they're dying because their immune system isn't working very well, by design.

      What would happen if we tried to intervene? We'd probably give people lymphoma. Actually, we'd probably start with experiments in mice that show reduced age-related T-cell and B-cell ablation. Then we'd start a Phase I trial, and see changed biomarkers, but all the patients would get lymphoma from the treatment. The company would go bankrupt, since nobody wants to touch a drug that gives you cancer, but people would continue to pump billions of dollars into treatments aimed at the identified biomarkers, on the evidence from that first failed study. Later, scientists would look at the lifespans of people who survived the cancer they got from that study and determine that they still died at the same age as baseline expectations. Nobody would understand why the biomarkers didn't predict the biology. We'd be 50 years into the future from when we started with no improved understanding of the underlying biology despite billions spent on it.

      I don't expect this approach to bear fruit.

      A "solution to aging" is theoretically possible. I'm just saying nobody has a good idea of how to solve it, and it doesn't look like we're even moving in the right direction. I could be wrong, but clinical research and experimental biology are my fields of expertise, so if I'm wrong I'm confidently wrong.

  3. When I first read your piece comparing AI/PoS I was intrigued. Since then something bothered me that I couldn't quite articulate. Then the other day I realized what it was while watching Dr. Strange, of all things. And not the part you might expect. Bear with me here. First, what bothered me was that the difference between AI and us is partly power, but also partly omniscience. We aren't omniscient or able to rapidly calculate at the level of complexity we can imagine for AI.

    Tangent: Wouldn't it be cool if we could create a cloak like the one in Dr. Strange? How would that work exactly? Based on our current level of technology, we seem to be nearing the point where we could create some kind of digital interface, and you could learn to control it with your mind. Okay, but could we do better?

    How about magnetic catches on the brain interface? Then it could snap on or off. Add some kind of moldable fabric, sensitive to current changes, maybe?

    Then I was thinking about your post regarding Fermi's paradox, and how some civilization out there should be millions of years advanced from us. Let's borrow from them and imagine some far-off ways we could eventually design this thing! First, it would be best if we could interface directly with the brain without needing an implant. What if we had some force that could directly interface with the brain? Say it could read thoughts, and take action at a distance. Now we basically have what we need to create the cloak. But is there a "layer" in physical reality we could put this on so it doesn't get in our way?

    Maybe? Until recently, we didn't know neutrinos existed, or that they were so ubiquitous and could so easily permeate everything.

    Now, we want the cloak to protect you without you even thinking about it, like in the movie, so that suggests we need to build an AI into this sub-layer effector system.

    From here, we make one more logical step: expand the AI controlling the universal sub-layer indefinitely, thereby making it all-powerful, and you have 92% of the LDS concept of the Spirit of God.

    Now say some civilization created this AI, and made sure it was sufficiently moral that it won't do bad things. In fact, if bad things are even being contemplated it'll leave. This ensures nobody will be able to use this all-powerful AI for nefarious purposes.

    One problem left: how to train people to tap into and use the AI effectively? Depends on what you want to do with it. Are you looking for the Dr. Strange cloak, or the ability to move mountains, or the ability to create worlds? You'd need to pass different thresholds of moral competency for each of those.

    Replies
    1. First, interestingly enough, I had your cloak idea already. Bear with me for a minute. So the problem with bulletproof vests is that no matter how strong they are, they have a limited amount of space to dissipate all the energy. They have to take the average bullet from supersonic speeds to 0 in the space of at most a few inches. Now imagine we had some sort of nanotechnology where, by running currents through it and whatever else, you could stiffen it, and you built some program that could arbitrarily move the cloak in essentially any fashion you wanted (though in my imagination this wouldn't really require an AI).

      Suddenly you can imagine a cool cyberpunk future where everyone wears Dr. Strange-like capes, which act as protection against bullets. You try to shoot these people, and the personal radar or whatever detects the bullet, and the cloak whips out, catches it, and then has several feet in which to slow it down, or more likely deflect it. You could even have a situation where swords were back in fashion because bullets were ineffective...

      Your idea of a ubiquitous and all encompassing AI with various levels of user permissions and rights is pretty fascinating (I am curious where you came up with 92% rather than 90 or 95% ;) ) and I can totally see where it would make sense. Well done!

      With your permission perhaps I'll use it the next time I give my presentation.

    2. Feel free to use it. The reason for postulating the cloak was as a thought experiment that eventually led to the idea, "well of course any sufficiently advanced society would create this." Which leads to your insight from Fermi's paradox that based on our best understanding and assumptions the math suggests there are sufficiently advanced civilizations out there.

      So we go from, "given a million years we could probably create this" to "therefore someone has", to "and since it exists, is pervasive, and is all-powerful there needs to be some mechanism for granting permissions to use it." And of course assuming the creators of this system have had sufficient time to work on it, that permissions system must clearly already be in place.

      That's a lot of assumptions, but I feel like these sorts of speculations are always built on a mountain of unexpressed assumptions. So at least these are explicit. I think to the extent they're useful, they help us project a different perspective besides our own. For example, the AI Plan of Salvation helps change the perspective from, "of course what God wants is for me to be happy in heaven with him", which is true enough, but severely limiting, to "God has specific stringency standards that are not negotiable because of the specific hazards at play".

      I think the AI Spirit of God angle sharpens the concept to, "God isn't just letting out an intelligence, he's actively deciding to what extent he will grant that intelligent agent power, on a platform he maintains."

      God as a responsible agent is taught in LDS theology, and is covered scripturally, but it's hard to appreciate its significance from inside the self-interested human perspective, and the tendency to ask, "why would a loving God do [X] if all he cares about is me?"

      Because if we're serious about believing God is real, we need more than a caricature understanding of who He is.

    3. I don't have much to add to this except to point out that I love the following phrases:

      "God has specific stringency standards that are not negotiable because of the specific hazards at play"
      "God isn't just letting out an intelligence, he's actively deciding to what extent he will grant that intelligent agent power, on a platform he maintains."
      "Because if we're serious about believing God is real, we need more than a caricature understanding of who He is."

  4. "But in fact this act is where God decides that making Saul the King was a mistake. And when Samuel finally shows up he tells the King:"

    So if God is God because he endured some long period of testing/training to prove he had the morality sufficient to wield God-level power....then what does it mean to say he made a mistake?

    The problem with this training or testing theory is that in real life it doesn't really work...or work reliably. The star HS football player may become a star college player, or he might walk into college with an inflated ego and flame out. Sometimes a complete moral flub is thrust into a position of power and turns out surprisingly well (I think this is the hope that many evangelicals had for Trump, but IMO he's more like a vampire sucking their moral blood but let's go there some other time). Other times the guy who was very good ends up being quite bad when he gets power....perhaps playing by the rules for a long time puts a sense of entitlement into him. Keep telling a chap that he's a saint and you just might get a monster at the end of the process.

    This process also seems to imply there must be some higher ultimate God who has set up this 'God ladder' in such a way as to ensure that it actually works and God like powers are never granted to individuals who turn out morally bad (Or is there an element of gnosticism here where a bad entity could end up with Godlike powers so we can't sit back and assume in the end everything will work out for the best?).

    Do you feel this is the mainstream understanding most Mormons have of their faith? That it's 'Gods all the way down' not unlike the turtles upon turtles vision of the universe?

    Replies
    1. I'm not sure what you're trying to say here. Are you claiming that any project aimed at increasing morality is bound to fail because an otherwise good person is liable to just go on a rampage if given sufficient opportunity? I'll assume, rather, that you're making the argument that some minority of people can fake moral character really well, so normal humans would be likely to screw up and put a madman in charge.

      The only way I can see this being a problem for God is if he's not a very good judge of character. Although I guess if you're judging billions, your stringency has to be high to avoid errors. But then we're just debating, "how good a judge of character is God?"

      In real life, sure, we have to make similar smaller judgements, but then we're not gods. So of course we'll screw it up sometimes.

      As to your last question, I've always understood this in context of Moses 1:30-32, where Moses asks God why he made all the other planets, and God responds by saying, "I'm not here to talk about all that. I'm only here to talk about Earth." So whatever, we don't know because God has other priorities than talking about it.

    2. The model mapped out above is that life is a training/proving ground for morality. If an entity demonstrates morality, it advances in powers until ultimately it gets 'Godlike' power.

      If God is just automatically a 'good judge of character', such testing would be unnecessary. God would know whether or not it was a mistake making Saul King before he made Saul King.

      If this testing is necessary, then how do we know it is reliable? It isn't that a good person is liable to go on a rampage. I admit a person who has never stolen from you before, despite numerous opportunities, will probably not steal from you in the future. But we nonetheless accept he might. Here, if we are talking about giving someone or something 'God like' power, then this 'testing process' seems inadequate. While it may work often, it need only fail once to end up with a God who's not moral. Once that happens, what happens then? Do 'higher up Gods' come down to correct the problem? Does the 'bad God' become a cancer to the system, nurturing bad morality under him?

    3. The test is also a training program, in LDS theology. Thus, it's not a matter of God just picking who is already morally upright over those who are unacceptable. It's a matter of training all those with sufficient potential for different levels of post-mortal responsibilities. I'm not concerned that God will screw up a process he's presumably been doing for millions, if not billions of years. You're not obligated to believe any of that, of course, but it is logically internally consistent.

      I'm really not interested in talking about the "gods all the way down" angle. Maybe it's interesting to you, but for me it's kind of like speculating about dinosaurs in parallel universes. They have no conceivable relationship to any lived human experience, so whatever.

    4. Perhaps I was mistaken but I thought Jeramiah had previously explained 'Gods all the way up/down' as part of Mormon theology as well. Who we think of as God was once a man (or manlike entity?) who was 'trained' by a higher God and ultimately granted his Godlike powers.

      I get that it's a 'training program' of a sort, as well as a 'screening program' in another sense, since those who fail and refuse to correct will presumably never get hold of power. Nonetheless, the premise here seems to be that passing this training program/period/test implies morality from then on. As I tried to point out, that isn't the case. A person who never stole from you is likely not to steal in the future, but ultimately there are people who never have stolen, until they did. Walter White was a good guy until he 'broke bad'.

      Billions of years? Well that's a lot of training but also a lot of time for someone to 'break bad'. Perhaps you trust your student to be good and loyal....but for 20 years? 100 years? Just happened to see The Last Jedi a few weeks ago and it's striking how different old Luke Skywalker is from young. If we're talking millions of years, well that's a lot of opportunities for things to be even worse.

      In contrast, you could say this training is so intense, so serious, that it actually alters the character of the person who successfully undergoes it. Such a person becomes infallible and incorruptible as a result. But this would not be very analogous to training as we know it, and would seem to imply to me that free will is sacrificed.

    5. I've worked in a lab, conducting experiments during a training period. Having done so for a scant five years, I can tell you there are things I would never/always do given my training. For example, I would never forget to use gloves when handling mice. I always scruff a mouse before giving it an injection. I always add acid to aqueous solutions. I still make mistakes, but within a more narrow range.

      Prior to acceptance to the PhD program, I had to pass the GRE, but that didn't mean I'd graduated. I still had training to do. In fact, after joining my thesis lab I still had to take another qualification exam even though my training was incomplete. That exam accepted me as a PhD candidate formally, but not as a full fledged PhD. Finally, after what seemed like an eternity I defended my dissertation successfully.

      There are clearly some parallels between this man made training and vetting program and the one sketched out above. That doesn't make it true any more than it would be false if I failed to imagine a similar program.

      There's a great scene in a fantasy book called Warbreaker I recommend (because it's a fun read, not for this scene). In it, an unbeliever of one of the religions of that fantasy world peppers his friend, who is a priest of said religion, with questions trying to trip him up. The priest patiently answers the progression of objections, explaining how none of this is contradictory, and has been thoroughly considered by the clergy. He explains that this other character doesn't even know enough to ask difficult questions.

      I've found myself similarly situated when discussing the theology of others' faiths. I learn some of the underpinnings, and even some more obscure doctrines of that faith. In my younger days I even won debates with people who didn't know their doctrine very well. It wasn't until later I discovered I really didn't understand the doctrines as well as I thought, and the points I used to "win" on weren't theological problems to those who knew the religion. That the various priests knew all about the so-called problems I thought were too big to ignore, and had no issue with them. In my experience, other people's doctrines are either completely internally logically consistent, or you'll never get to the point where you could discover the inconsistencies without devoting way too much of your life to studying something you don't believe.

    6. I think you are swapping mistakes with morality here. Walter White was technically proficient and able to quickly become the nation's largest and highest quality meth producer and distributor. Before his 'breaking bad' moment he was an excellent science teacher and a great chemist (he sold out early of a partnership that went on to be worth billions).

      His moment of decision came when he decided he would no longer play by society's rules, not accept 'charity' for his health care, not see his family's material well-being contingent on being a good worker. The moment he switched from the moral code before his break to the moral code after his break, all his technical skills were unleashed upon the world in an evil direction. As there were few who could match his skills applied to evil (there was perhaps the libertarian-minded chemist and Gus, the mastermind distributor... but they were both outfoxed by Walter), he was amazingly successful.

      Hence what I'm not clear on here is: given free will, what prohibits a breaking-bad case? The fact that someone has a long track record of morality indicates he is probably likely to be moral in the future, but does not guarantee it. Even small odds have a way of sneaking up on you. On any given day the life insurance company will say you have a small chance of death. Yet no life insurance table says you will be immortal. Given your track record of being a good husband, good father, good citizen, good person, I would bet tomorrow you would probably continue being so... but I wouldn't bet on that going on forever if we had forever to watch you. Smart employers may check the long-term, respected, loyal employees for theft less often than they do the new ones, but they should still check them.

      I think I'm hearing from your Warbreaker example that often these 'problems' have been explored in detail already by serious believers, hence my surface-oriented issues and questions are unlikely to shake much faith. That might be the case, but the really important question is: is that a fault in myself or in the believers? That the believers may have convinced themselves that their odd-sounding beliefs are not technically inconsistent or incoherent is of little value if they are nonetheless false.

    7. I think I understand where you're coming from. Does this sound like what you're asking?

      Q: Does God have free will?
      A: Yes.

      Q: If so, we expect he must be able to choose to do evil, correct?
      A: Yes, but God always chooses to do good.

      Q :What would happen if God chose evil?
      A: He would cease to be God. See for example Mormon 9:19. God knows this, and knows how to avoid doing evil and losing his position.

      Q: Why would He cease to be God?
      A: Because "the powers of heaven cannot be controlled nor handled only upon the principles of righteousness." (D&C 121:36-37) The power by which God governs the universe requires righteousness, or it will withdraw itself, even from God.

      Q: So God could theoretically randomly choose evil, and cease to be God; what would happen then? Would some "higher" God step in?
      A: Nobody knows. God has never chosen evil, for reasons that are tautologically self-evident. Given His track record, it seems unlikely He will choose evil in any meaningful time frame. In this way, belief in the character of God is necessary for life and salvation. (See Joseph Smith, Lectures on Faith.)

      Q: Does this also apply to humanity?
      A: Yes.

      Q: Does it apply only to the earthly experience, or after exaltation?
      A: Both.
      A: Both.

      Q: What happens if a person is exalted and chooses evil?
      A: The same as would happen to God.
      A: The same as would happen to God.

      Q: Isn't that a significant concern given observed human behavior patterns and the number of people involved?
      A: If it happens it would be of concern to those affected, yes. This life is not the total extent of divine training. The whole program is the work of God, and is understood to be much longer than human lifespans, so it requires faith to believe that He knows how to avoid training acolytes who would destroy billions of worlds filled with beloved offspring on a whim, when it gets to that point.

      Q: But it COULD happen, right?
      A: Sure. But there is every reason to believe it won't.
