
Saturday, November 25, 2017

Review of "Rationality: AI to Zombies": Rationality vs. Antifragility

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:



Or download the MP3



I’ve mentioned here and there over the past few months that I’ve been working my way through Rationality: AI to Zombies by Eliezer Yudkowsky, and last week I finally finished that mammoth tome. Okay, it wasn’t an actual tome, it was a Kindle ebook, but using the page estimate on Amazon, had it been a physical book, it would have been 2393 pages. Which may make it the longest book I’ve ever read, surpassing stuff like Les Misérables and War and Peace. (In case you’re wondering, both of those were more enjoyable.) And, given that length, there’s a lot that could be said about it. Consequently, it may take more than one post for me to cover everything I want to. We’ll have to see, but to start off I’d like to focus on the difference between Talebian Antifragility, my preferred framework, and Bayesian Rationality, the framework espoused by Yudkowsky in this book. And why one is better than the other.


As anyone who’s followed me for any length of time could guess, I think antifragility is better than rationality. (See this post, if you need to brush up on what antifragility is.) This is not a conclusion I came to just recently, and in fact I think I covered it pretty well in my prediction post from the beginning of the year. But back then I was reluctant to paint with too broad a brush, particularly since, at the time, I didn’t feel that I had read enough to be confident of accurately representing the rationalists. Over 2000 pages later I no longer have that concern.


To be fair, I don’t think they’re unaware of the ideas of Taleb and antifragility; I’ve seen both mentioned here and there, and late in the book Yudkowsky says:


Truly it is said that “how not to lose” is more broadly applicable information than “how to win.”


This is not a bad summation of the principle of antifragility, but unfortunately insights like these are few and far between. Rather than focusing on how not to lose, or more accurately on how to survive, his focus is on winning, to the point where that is how he defines rationality.


Instrumental rationality, on the other hand, is about steering reality--sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”


So rationality is about forming true beliefs and making winning decisions.


There are a couple of big things wrong with this definition, starting with his focus on winning. And before I do anything else, I should clarify why I have such a problem with it. I mean, isn’t winning good? Doesn’t winning encompass not losing? Yes and yes. But not all “wins” are equal; at a minimum there’s not just the how of winning, but the when. It’s pretty easy to win right this second. If you’re a government, you give the masses exactly what they’re clamoring for, as in Zimbabwe, when Mugabe took most of the land from the white farmers and gave it to his supporters. If you’re a bank, you can win by giving everyone a mortgage, regardless of their credit, just like the now bankrupt Washington Mutual did. And if you’re a heroin addict you “win” by injecting more heroin. Importantly, all of these decisions fit Yudkowsky’s description of choosing actions “that lead to outcomes ranked higher in [their] preferences.” All of them were winning. And one of the easiest things about winning right this second is that the path is clear. You don’t have to predict the future at all. (This will be important later.)


To be clear, the examples above are not meant to be representative of what I think Yudkowsky means when he says that rationality equals winning, but I fear they’re pretty close to the mark of what most people mean by it, and because of this the subtleties that Yudkowsky brings to the debate are lost. Meaning, to the extent that people in a position of power listen to him at all, it’s just one more thing that gets interpreted as “Do whatever it was you were going to do already.”


Given that he contributed a couple of chapters to Global Catastrophic Risks, I don’t think Yudkowsky is unaware of the time frame over which winning has to happen, but I also don’t think he pays nearly enough attention to the trade-offs which may be required. One thing that Taleb points out is that often, to win at the end, we have to do a lot of losing at the beginning. The point of antifragility is to accept small, manageable losses in order to realize large, dramatic wins, and, conversely, to recognize that taking easy wins in the short term can lead to large, dramatic losses. Meaning that however well intentioned and careful Yudkowsky himself is, rationality, as he lays it out, could end up generating lots of meaningless short term victories which further down the road lead to long term catastrophe.
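To make that trade-off concrete, here’s a toy comparison of the two payoff profiles. The probabilities and dollar amounts are entirely invented for illustration; the point is only the shape of the payoffs: the antifragile strategy eats many small losses for an occasional large win, while the fragile one collects small wins until a rare blowup.

```python
import random

random.seed(0)  # make the toy run reproducible

def antifragile_trial() -> int:
    """Small, manageable loss most of the time; rare, dramatic win."""
    return 500 if random.random() < 0.01 else -1

def fragile_trial() -> int:
    """Easy, small win most of the time; rare, dramatic loss."""
    return -500 if random.random() < 0.01 else 1

for trial in (antifragile_trial, fragile_trial):
    total = sum(trial() for _ in range(10_000))
    print(f"{trial.__name__}: {total:+,} after 10,000 trials")
```

Run enough trials and the first strategy looks like a long string of small losses punctuated by survival-sized wins, while the second looks great right up until it doesn’t.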


To be clear, I am fine with conceding that rationalists are not so fixated on short term wins that they are likely to emulate Mugabe, or Washington Mutual, or to shoot up heroin. But in just the last post I covered other, far more subtle ways in which “winning” turned out to have significant amounts of “losing” attached, but in ways that were difficult to detect, and took a long time to manifest. Where “steering the future” ended up being a lot harder than people thought. All of this is to say that while Yudkowsky and the rationalists want to make everything about winning, I want to make everything about surviving. Because, as long as you’re surviving, you’re still in the game. And being in the game is important because there are only two ways out of it: losing, or winning permanently and forever. And guess which is more likely to happen? Thus, as Yudkowsky said, in the game the rationalists are playing it’s more important to not lose than it is to win. But that’s not what the book says.


As an aside, winning the game permanently and forever might be possible, and transhumanists (another group Yudkowsky belongs to) think that just such a win is within their grasp, whether through brain uploading, or a superintelligent, friendly AI, or interstellar colonization, or something equally futuristic. And perhaps this is exactly the win we should all be working towards, but I would also argue that, if history is any guide, it’s more likely that the promise of this ultimate victory will lead us to overextend, with potentially disastrous consequences. As examples of the kind of thing I’m talking about, I would offer up all invasions of Russia, most revolutions (but particularly the communist ones) and every villainous plan from every movie: exactly the sort of overreach that happens when you’re in search of a permanent victory.


I said initially that there were two problems. The first is the overreliance on the idea of winning, and the second is the difficulty of knowing which actions actually lead to a “win”, especially the farther you get from the present. As I mentioned, “steering reality” is fairly straightforward when applied to the immediate future, less straightforward but still mostly worth attempting at the time horizon of a few years, and mostly impossible when you push much beyond that. Meaning that your choice of which actions to take in order to get your high-preference outcomes is less and less consequential the farther out you get, and may eventually end up being no better than acting randomly, in terms of bringing about the future you imagine.


And actually, this is giving the “future predicting business” too much credit. It’s easy to say that since it might at least work initially, even if it eventually ends up being about the same as acting randomly, it’s better than nothing. But in practice, once people are given the power of “steering reality”, and choosing the actions they think will have better long term outcomes, this centralization often leads to far worse outcomes. The list of times this has happened is both extensive and tragic: North Korea, the Irish Potato Famine, the 2007 financial crisis, China’s Great Leap Forward, etc.


However, I’m sure the rationalists don’t see it that way. As a defense against my first criticism, that they are too focused on winning and not focused enough on survival, I am sure they would point out that Yudkowsky does mention the importance of not losing, and further that he is very aware of existential risks, being one of the primary advocates of AI safety. They might also argue that they are more aware of the tradeoffs between short-term winning and long-term winning than I am giving them credit for. That Yudkowsky is only one guy, and however voluminous his book, it is only a small part of the canon. (Please feel free to point me to where this is being discussed.) I’m also sure that they could point out the many mediocre outcomes which might derive from the more minimal, “just don’t lose” standard. All that said, Yudkowsky did have nearly 2400 pages in which to make that case, and to the best of my recollection he didn’t. Also my perception of the community is that there is far more “The future is going to be awesome!” than “We need to be super careful…”


As to my second criticism, the idea that predicting the future is impossible and that their attempts to steer reality are just as likely to have bad outcomes as good ones: they might answer by pointing to the central role identifying and eliminating biases has in rationality, and the idea that it was exactly these sorts of biases that led to all the tragic examples I offered above. And, given that they have identified and corrected for these biases, they are less likely to make the same errors. This is almost certainly true, and I feel confident in saying that if Yudkowsky were made dictator for life we would not have a repeat of the Great Leap Forward, nor would the country turn into North Korea. Though when it comes to the 2007 financial crisis, I am less confident. I think even with Yudkowsky in charge, something very similar still would have happened. In spite of that, they might very reasonably argue that some steering of reality is better than no steering, particularly if you eliminate biases, which they claim to have done.


This is a reasonable argument, but I am still of the opinion that, in certain key respects, it provides only the illusion of understanding and control. And this is because, as far as I can tell, Bayesian Rationality still suffers from one weakness which is greater than all the others when compared to Talebian Antifragility: it does not take into account that all errors are not equal. There are some things where being wrong matters not at all, and other things where being wrong is the difference between surviving and losing forever.


With everything we’ve covered thus far, it could be argued that Yudkowsky and I are on the same page, he just didn’t get around to saying so specifically in his 2000 page book. And, yes, you should be picking up some sarcasm here, but I feel entitled to it, because I had to read that same 2000 page book. However, with respect to this latest criticism even that excuse is unavailable, given that within the book there are several examples of him being more concerned with how wrong something is than with how much it matters.


I want to focus on two particular examples. In the first he devotes an entire section (out of 26 total) to refuting David Chalmers’s philosophical zombie argument. In the second, he begins another section promising to explain quantum mechanics and then spends most of it railing against the Copenhagen Interpretation. The actual substance of both the original ideas and Yudkowsky’s objections is not that important. What’s important is that in both cases the difference between the two positions has no effect on how things actually work, no discernible, experimental differentiation, and except for the tiniest effect on certain, very niche ideologies, the future envisioned by one side is identical to the future envisioned by Yudkowsky. Despite this, in both cases he spent, frankly, a tedious amount of time mounting a thorough refutation. None of this is to say that I disagreed with Yudkowsky’s arguments, in both cases I was certainly convinced, but even if I wasn’t, even if nobody was, what would it have mattered? These may be large errors philosophically, but practically, they’re inconsequential.


In both cases I got the impression that it was far more important to be correct than it was to explore the consequences of being correct. But allow me to give you a more concrete example, one that I used already in my prediction post.


For a complete overview of the argument I would urge you to read that post, but to briefly recap my point: the rational way of predicting the future is to make a clear, easy-to-check prediction and assign it a confidence level (as in, I’m 90% confident this will happen, or I’m 70% confident). Then once the time specified by the prediction has passed, you check to see if it actually happened. If done correctly, then 90% of your 90% confidence predictions should come to pass, 80% of your 80% confidence predictions, and so on. It’s generally better to be underconfident than overconfident, but the ideal with this system is still to match your confidence with reality. It should be said that in general the problem with past methods was not with people being too cautious, but with being too certain.
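To make the bookkeeping concrete, here is a minimal sketch of how such a calibration check might be scored. The code and the example predictions are mine, invented purely for illustration; nothing like it appears in the book:

```python
from collections import defaultdict

# Each prediction is (claimed confidence, did it actually happen?).
# These particular predictions are invented for the example.
predictions = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.8, True), (0.8, False), (0.8, True), (0.8, True),
    (0.7, True), (0.7, False), (0.7, True),
]

buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

# A well-calibrated predictor's 90% bucket comes true ~90% of the time, etc.
for confidence in sorted(buckets, reverse=True):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"{confidence:.0%} claims: {hit_rate:.0%} came true (n={len(outcomes)})")
```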


In any event, I’m reasonably certain this is the sort of prediction Yudkowsky espouses, though it’s yet another thing he doesn’t really get around to in 2000+ pages. (In fact it’s actually surprising how few practical examples there are in the book.) Part of the reason I’m certain is that the system is very Bayesian in character. The confidence level is your initial (prior) probability, and as things change you use the probability implied by those changes to update your prior and establish a new (posterior) probability.
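For readers who want the mechanics, here’s a minimal sketch of that updating rule. This is just textbook Bayes’ theorem with invented numbers, not anything taken from the book:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return the posterior P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    evidence = numerator + p_evidence_if_false * (1 - prior)
    return numerator / evidence

# Invented example: you start out 60% confident in some prediction, then see
# evidence twice as likely if the prediction is true as if it is false.
posterior = bayes_update(prior=0.60, p_evidence_if_true=0.8, p_evidence_if_false=0.4)
print(f"updated confidence: {posterior:.0%}")  # 75%
```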


This is a good system; it’s definitely WAY better than how things worked in the past, which was for experts to make an outrageous prediction, state it with absolute certainty, and be right about as often as dart-throwing chimps. That is a horrible system, and while most of the credit for exposing it and changing it belongs to Philip Tetlock and the Good Judgment Project, to the extent that the rationalists are pushing people toward this methodology, that’s a good thing. But there’s a problem, and the problem is, as I pointed out, not all errors are the same. Or to put it another way, outcomes are asymmetrical.


In my previous post, I didn’t have access to Eliezer Yudkowsky’s predictions, but I did have access to the final results of the 2016 predictions of Scott Alexander of Slate Star Codex, which, if you know anything about this space, is basically the next best thing. Now, before I get into it, I should mention that I have an enormous amount of respect for Alexander, and this exercise is only possible because he was so rigorous in making his predictions using the methodology I described.


Returning to the predictions: as you can imagine, since it was 2016 he made some predictions about the presidential election. He gave Trump an 80% chance of losing, conditional on his winning the Republican nomination, and he gave him a 60% chance of getting that. Which means, if you do the math, that he gave Trump an 88% chance of losing. As we all recall, Trump did not lose, but that’s okay, because even if we round up and count this as one of his 90% predictions (which he did not, he treated it as two separate predictions) Alexander got about 90% of his 90% predictions correct, so the system works, and everything is fine, right?
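For those checking the math, here is where the 88% comes from. The one assumption, implied by Alexander’s framing, is that failing to win the nomination also counts as Trump not becoming president:

```python
p_nomination = 0.60               # chance Trump wins the Republican nomination
p_lose_given_nomination = 0.80    # chance he then loses the general election
p_lose_without_nomination = 1.00  # no nomination means no presidency

p_lose = (p_nomination * p_lose_given_nomination
          + (1 - p_nomination) * p_lose_without_nomination)
print(f"{p_lose:.0%}")  # 88%
```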


Not exactly. Because as I pointed out in the original post, the stuff he was wrong about (Trump and Brexit) was far more consequential than the stuff he was right about. Which is to say that being 90% accurate on your 90% predictions doesn’t make the world 90% the way you expected and 10% different, because generally the stuff you’re wrong about has far more impact than the stuff you’re right about. At the political level (which is where Alexander was predicting) our world isn’t 10% Trump, it’s nearly 100% Trump.


Now, I don’t want to give you the impression that Alexander was egregiously wrong. In fact, given that he made his prediction at the beginning of 2016, he actually did really well. His prediction matched 538’s for January, and he was much better than Sam Wang of the Princeton Election Consortium, who gave Hillary a 99% chance of winning, and Wang was a professional forecaster. No, the point I’m trying to get at is that while Bayesian Rationality, as championed by Yudkowsky, is more aware of its mistakes, and while it offers several small but nevertheless significant improvements to the scientific method, it still falls victim to the hubris of understanding and predictability.


All of which is to say that, as far as I can tell, while I’m sure there is some difference in Yudkowsky’s framework between an event with a 1% probability which has very little impact (say, that the Copenhagen Interpretation turns out to be correct) and an event with a 1% probability which has an enormous impact (say, 50+ nukes going off in a war), given the time he spent in his book on the first versus the second, whatever that difference might be is not nearly great enough. And this is the critical weakness of Yudkowsky’s Bayesian Rationality when compared to Talebian Antifragility: within the 2393 pages of his book there is no system for, or even mention of, dealing with the asymmetry between those two examples.
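A toy illustration of the asymmetry I mean. The impact numbers are invented and the units arbitrary; the point is only that calibration scores these two 1% predictions identically, while any impact-weighted view does not:

```python
# Two events with identical probability but wildly different stakes.
events = {
    "Copenhagen Interpretation turns out correct": (0.01, 1),
    "50+ nukes go off in a war":                   (0.01, 1_000_000),
}

for name, (probability, impact) in events.items():
    print(f"{name}: p = {probability:.0%}, "
          f"probability-weighted impact = {probability * impact:,.2f}")
# A calibration scorer sees two interchangeable 1% predictions; a survival-first
# framework treats the second as the only one worth worrying about.
```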


In closing, you may feel that I have been too critical of the book. Well, if that’s so, then you may want to skip the next week or two, because I’m not done. But also, on this subject, I’m critical because this book is already pretty useful, and it comes close enough to being right that criticism actually has a chance of closing the gap, particularly on the subject of asymmetric outcomes and risk. I suspect (perhaps incorrectly) that if Yudkowsky and I sat down, it would be pretty easy to reach a common ground in this area. However, next week I have no such hopes, because we’re going to be talking about religion, and the strong anti-religious bias of both the book and the larger rationality movement. Though I think you’ll see that the two biases are more closely related than you might imagine.





I have no aspirations to steer reality, but if you’d like to help me steer this blog consider donating.

Saturday, November 18, 2017

How Do We Solve the Problems We Create?

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:



Or download the MP3



If you read many self-help books, or listen to any motivational speakers, or even if you just read the occasional inspirational quote that gets posted by that one friend on Facebook (you know the one I’m talking about), you start to realize that certain stories or analogies get used over and over again. One of the analogies I’ve encountered on several different occasions concerns the problems that arise when you help a chick to hatch. Here’s an example of what I mean from the blog of a licensed clinical social worker:


Some new hatchers assist emerging chicks too soon and/or too thoroughly. Anxiety at this stage is high, especially for first-timers.  They misinterpret the needs of the chick and prematurely intervene, sometimes with dire consequences.  Some of these dire consequences are due directly to the well-intentioned intervention (ex: hemorrhaging due to torn membranes) and some are due to the consequences of the well-intentioned intervention (ex: the chick’s circulation wasn’t allowed to pump hard enough to allow them to warm themselves up once hatched).  The bottom line is, chicks actually need to peck their own way out of their own shell.  Without the strengths developed within their struggle, they are left vulnerable to their environment.


This is true for people too.  Our life experiences (including how we respond to them) are our shells, and figuring out how to navigate them effectively prepares us to effectively navigate our world.


Perhaps you’ve encountered this analogy or maybe you haven’t, but either way, this post is going to be about the necessity of struggle, which is the kind of thing that calls for an analogy, or an inspirational story. But as usual, rather than just starting with the story, I have to explain the whole thing and make it complicated. In fact, as a further complication, now that I’m re-telling it, and in the process lending the enormous credibility of my blog to the whole thing, I feel compelled to see if there’s any truth to it.


A quick search seems to indicate that it is one of those things that’s mostly true, though as with so many things there are caveats. Yes, the general recommendation is that you shouldn’t help the chick hatch. That said, it’s not an automatic death sentence for the chick if you do. It does appear that if you help it hatch it will, more often than not, later die, but that may be less about the struggle giving the chicken the necessary tools to live and more about the fact that a chick too weak to break out of its shell is probably too weak to survive, period. So perhaps this isn’t the best analogy, but I’m too lazy to find another one. Also, if you don’t think a certain amount of struggle is necessary, then I’m honestly not sure what you’re doing here in the first place.


However, in the interests of being thorough, I suppose I could spend a small amount of time trying to convince those on the fence that struggle is, in fact, necessary. Though I would think the chick and the shell thing would be all the proof anyone would need, particularly given how directly and unequivocally I presented it. But I suppose it’s possible it didn’t convince you.


In that case, to understand struggle, let’s start at the highest level: you’re either religious, or you’re not. If you are religious, then struggle is built into basically all religions, both doctrinally and observationally. On the other hand, if you’re not religious, then natural selection is all about the struggle for survival. Outside of that, I suppose there’s a third option, where you believe in some sort of doctrine-free spiritualism which doesn’t include any struggle at all, something along the lines of The Secret, but if that’s the case then let me say, and let there be no mistake about whether I’m serious, because I am: you’re an idiot. And you should go away. Meaning you’re either an idiot (and we can ignore that category because I just told them to leave) or you believe struggle is part of existence.


But wait, you may be saying, you claimed that struggle is necessary. Going from being part of life to being necessary is still a big leap, one which you haven’t made. Very well. For the religious, one has to assume that struggle is necessary on some level or it wouldn’t exist. For the non-religious, non-idiots, it’s a little more complicated, and in fact it is in this area where I’ll be spending most of my time.


From here on out I’m going to assume that we all agree that struggle is part of existence (the idiots having been banished) and that all that’s left is determining whether it’s actually necessary. On this point there are two ways of thinking:


Camp One: These are the people who believe that struggle is so deeply intertwined with how things work, from an evolutionary standpoint, that it would be impossible to eradicate it entirely without consequences that are worse than the initial suffering. Such consequences might include, but are certainly not limited to: bodily atrophy, disease, autoimmune disorders, apathy, depression, lack of offspring, etc.


Camp Two: These are people who believe the opposite, that technology will eventually enable us to eliminate struggling (and presumably also pain and suffering and malaria and auto-play videos on websites). They will admit that perhaps struggle is necessary now, a la the chick and the egg, or needing to exercise to stay healthy, but that it’s on its way out. Yes, we once lived in a world where struggle was necessary to toughen us up, develop immunities, exercise willpower, and so forth, but all of the things which were once “powered” by struggle will eventually be powered some other way, or be done away with entirely.


I think both camps would agree that it’s worthwhile and benevolent to remove unnecessary or counterproductive struggle, and by extension unnecessary pain and suffering. The questions which divide the two are: how cautious do we need to be before we declare that something is unnecessary or counterproductive, and is there some line past which we should not proceed?


At this point you would almost certainly like an example, and one of the best known involves the recent increase in the occurrence of allergies. There are several theories for why this is happening, but almost all of them revolve around allergies being a byproduct of some overzealous attempt to eliminate a form of natural struggle.


The best known of these theories is the hygiene hypothesis. The idea here being that in the “olden days” children were exposed to enough pathogens, parasites and microorganisms that their immune system had plenty of things to keep it occupied, but that now we live in an environment which is so sterile that the immune system, lacking actual pathogens, decides to overreact to things like peanuts. As I said, this is just one theory, but all of the alternative theories also involve the absence of some factor which humanity previously considered a struggle. It is also interesting, speaking of peanuts, that the NIH recently reversed its recommendation, from avoiding peanuts until children were at least three to giving peanuts to kids as soon as they’re ready for solid food (approximately four months old). Which obviously follows from this model.


As I said, I’m not claiming that we know with absolute certainty that allergies are increasing because we’ve eliminated some necessary struggles. Though I will say that if that is the case, most affected people, particularly those with the severest allergies, would trade those allergies in a heartbeat for growing up in a slightly less hygienic environment. Which I suppose makes this a point in the camp one column. This is something where eliminating the struggle was not worth the tradeoff.


If this trend were limited to allergies, then I wouldn’t be writing about it, but we’re also seeing dramatic increases in the diagnosis rate of autism. And while part of this is certainly due to it being diagnosed more, almost no one thinks that this explains 100% of the increase. On top of allergies and autism, if you were following the news over the summer you may have seen a story about sperm counts halving in the last 40 years. This one is less well understood than the allergy problem, but it almost certainly represents something we’re doing to make life easier which has the unforeseen side effect of reducing sperm counts, and by extension fertility.


These first three examples may all be genetic issues, but there are also cultural issues with modernity. For example, the number of suicides and attempted suicides has skyrocketed in recent years, particularly among young people, whom you would expect to be the most impacted by recent cultural changes. Obviously there are lots of people who feel the increase comes because teens are struggling too much, but any sober assessment of historical conditions would have to conclude that this is almost certainly ridiculous. On the contrary, as I have said previously, if you remove struggle from a child’s life then you also remove the reasons why they might be unhappy. And if, after all these things are removed, they are still unhappy, the logical conclusion, since it’s nothing external, is that it has to be internal, and from that conclusion suicide can unfortunately often follow.


You may disagree with this theory, and maybe it is only a temporary blip, unrelated to any of our misguided attempts to make life easier for kids, not evidence of a long term trend. But how sure are you of this, and are you willing to bet the lives of thousands of young people on whether or not you’re right?


On this last point you may be noticing some similarities to a previous post I did about the book Tribe, by Sebastian Junger. As you may or may not remember, stressful situations actually improved mental health, and as wars have become less stressful, mental health appears to be getting worse. If you don’t remember that post, this paragraph from Tribe is worth repeating:


This is not a new phenomenon: decade after decade and war after war, American combat deaths have generally dropped while disability claims have risen. Most disability claims are for medical issues and should decline with casualty rates and combat intensity, but they don’t. They are in an almost inverse relationship with one another. Soldiers in Vietnam suffered one-quarter the mortality rate of troops in World War II, for example, but filed for both physical and psychological disability compensation at a rate that was 50 percent higher… Today’s vets claim three times the number of disabilities that Vietnam vets did, despite...a casualty rate that, thank God is roughly one-third what it was in Vietnam.


As I pointed out back then, if you parse this out, Vietnam vets had a disability-per-casualty rate that was six times higher than World War II vets, and current vets have a disability-per-casualty rate 54 times as high as the World War II vets. All of this is to say that there is significant evidence that making things easier (less of a struggle) doesn’t make things better.
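Spelling out that back-of-the-envelope arithmetic from Junger’s numbers (all figures are ratios relative to World War II, taken from the excerpt above):

```python
# Ratios relative to World War II, per the Tribe excerpt.
vietnam_claims   = 1.5                  # Vietnam vets filed claims at 1.5x the WWII rate
vietnam_casualty = 1 / 4                # at one-quarter the WWII mortality rate
today_claims     = 3 * vietnam_claims   # today's vets claim 3x what Vietnam vets did
today_casualty   = vietnam_casualty / 3 # at roughly one-third Vietnam's casualty rate

print(vietnam_claims / vietnam_casualty)  # 6.0  -> 6x the WWII disability-per-casualty rate
print(today_claims / today_casualty)      # 54.0 -> 54x the WWII rate
```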


For the most extreme view on this problem, let’s turn to a response to the 2016 Edge Question of the Year, “What do you consider the most interesting recent [scientific] news? What makes it important?” This particular response, by John Tooby, was titled “The Race Between Genetic Meltdown and Germline Engineering,” and the gist of it is that previous to the advent of modern medicine most people died, and this was especially true of individuals with harmful genetic mutations. This is no longer the case, and thus humanity is accumulating an “unsustainable increase in genetic diseases”.


The article makes several fascinating points:


  • On the necessity of a certain number of people dying before reproducing:


For a balance to exist between mutation and selection, a critical number of offspring must die before reproduction—die because they carry an excess load of mutations.


  • On how fast this problem can escalate:


Various naturalistic experiments suggest this meltdown can proceed rapidly. (Salmon raised in captivity for only a few generations were strongly outcompeted by wild salmon subject to selection.)


  • He even goes on to say that this may be the explanation for the worldwide decline in birthrates among developed nations:


If humans are equipped with physiological assessment systems to detect when they are in good enough condition to conceive and raise a child, and if each successive generation bears a greater number of micro-impairments that aggregate into, say, stressed exhaustion, then the paradoxical outcome of improving public health for several generations would be ever lower birth rates. One or two children are far too few to shed incoming mutations.


This strikes me as one of those obviously true things that no one wants to think about. But it also dovetails very well with the theme of this post, and brings up an issue central to the claims of the second camp, those who believe all struggle and suffering can be eliminated through technology. In this case we know exactly how to fix the problem, it’s even in the title: we just have to master germline, or more broadly, genetic engineering. Furthermore this isn’t some hypothetical technology with no real world examples. The CRISPR revolution promises that this is something we could do very soon (if not already). The chief difficulty at this point is not in editing the genes, but in knowing which genes to edit. And I don’t want to minimize the difficulties involved in that effort, but there’s definitely nothing about the idea which seems impossible. Nearly all experts would say it’s not a matter of if, but when.
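As a back-of-the-envelope illustration of the meltdown dynamic Tooby describes, here’s a toy simulation. Every parameter is invented, and this is a cartoon rather than a population-genetics model, but it shows the shape of the problem: with selection, the average mutation load plateaus; without it, the load just keeps climbing.

```python
import random

NEW_MUTATIONS = (0, 1, 2)  # new mutations per child (mean ~1; invented)
LETHAL_LOAD = 20           # load above which an individual dies before reproducing
POP, GENERATIONS = 1_000, 100

def mean_load(selection_on: bool) -> float:
    """Average mutation load per person after GENERATIONS generations."""
    loads = [0] * POP
    for _ in range(GENERATIONS):
        if selection_on:
            # Only those under the lethal load reproduce (guard against a crash).
            parents = [l for l in loads if l < LETHAL_LOAD] or [0]
        else:
            parents = loads  # modern medicine: everyone survives to reproduce
        loads = [random.choice(parents) + random.choice(NEW_MUTATIONS)
                 for _ in range(POP)]
    return sum(loads) / POP

print(f"with selection:    {mean_load(True):6.1f}")   # plateaus near LETHAL_LOAD
print(f"without selection: {mean_load(False):6.1f}")  # climbs ~1 per generation
```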


As a matter of fact, mastering genetic and germline engineering would probably help with all of the examples we’ve looked at. Despite what people want to claim, there’s a genetic component to nearly everything, certainly with autism, but probably also with allergies and low sperm counts and even suicide risk. In theory anything that can be treated with a pill could be treated with genetic engineering, and this treatment would probably involve fewer long term side effects. At least health-wise…


So there you have it, the second camp is correct. All we have to do is improve CRISPR to the point where we can genetically modify humans, do some experiments to figure out which genes do what, and the negative mutation load, and the low sperm count, and the allergies and the autism, and possibly even the elevated suicide rate will all go away. Struggle was necessary to healthy development, but once we master the genome it won’t be, at least not for anything that can be fixed with genetics. In other words, as Tooby’s title declares, we’re in a race between genetic meltdown and germline engineering. Obviously we have to win that race, but as long as we do, everything will be fine, right?


Are you sure about that? From where I sit, if we develop genetic and germline engineering of the kind Tooby is talking about, that’s not the end of our problems. It may be the end of certain specific problems, but it’s the beginning of a whole new set of them. (Perhaps you’ve seen the movie Gattaca?)


I know that the current laws on genetic engineering are still embryonic (get it? embryonic?), but it is nevertheless true that most people already recoil at the thought of designer babies, or really anything involving modifying genes much beyond doing it as a means of curing disease. Up until this point I’ve used genetic and germline engineering somewhat interchangeably, but they are different. Germline engineering is the process of making modifications which are heritable. If you use it to make someone exceptionally strong, their children would have a greater chance of being exceptionally strong as well. This is why Tooby specifically talks about a race between germline engineering and genetic meltdown, because whatever fixes you made would have to transfer for them to be of any use. One of the reasons this differentiation is important is that the US has mostly banned germline engineering; beyond this, you can find countless articles debating whether it’s ethical or not.


But despite the ban, and the ethical questions, and people’s distaste at the idea of designer babies, if Tooby is to be believed, we really don’t have any choice in the matter. Which means, along with solving the genetic meltdown problem, we buy ourselves a whole host of new problems. Including:


Greater divisions between rich and poor: This problem is bad enough already, but toss in the ability for the rich to increase their child’s IQ and health and suddenly you’ve got gaps which no amount of affirmative action or protest is going to fix. Also there’s a non-trivial chance that this ends up being a positive feedback loop, with the new, smarter, richer groups discovering additional positive mutations to add to the mutations they already have, at a faster and faster rate.


Racial problems: This is similar to the above, but probably even more radioactive. Radioactive enough that I don’t even want to speculate. (I’ll give you one hint: transracial.) But I’m sure you can imagine several potential scenarios where this technology makes everything a whole lot worse.


Bioweapons: If you can develop positive mutations then you can develop negative mutations, and while the delivery of those would still need to be accomplished, none of this technology makes that problem harder, and it may make it a lot easier. Which takes us to our next point.


Limited genetic diversity: Once people start making modifications they will coalesce around certain mutations, leading to a great number of people whose genetic diversity is significantly less than the “default”. Also, as we know, there are some “bad” mutations which have good side effects (the classic example being sickle cell anemia). If a disease mutated to affect one of these people, it would be equally effective against all of them. And following from the last point, that disease wouldn’t have to be natural.


Different “breeds”: At some point, when this has gone on long enough (and really not even all that long), it’s not inconceivable that you could have various breeds of humans, as different from one another as Great Danes are from toy poodles. How the world deals with something like this is well beyond my ability to predict, but I can’t imagine that it makes things better.


The good news for Tooby, but the bad news for anyone worried about any of the above, is that CRISPR is not the Manhattan Project. It doesn’t take billions of dollars and millions of man hours; it’s something you can do from home. Now, germline engineering is more difficult, but not that much more so. Certainly it’s not the kind of thing the US could keep any other country from doing if it wanted to.


All of this has taken us pretty far from the topic of whether struggle is necessary, and from our two camps. But if nothing else you can begin to see the complexities involved in camp two’s assertion that we can eventually solve everything through technology. Yes, you can help a chick hatch, but most of the time it will die. Yes, you can make war safer and less connected to the rest of life, but PTSD will go way up. Yes, modern medicine can keep people alive who otherwise would have died, but their negative mutations end up in the gene pool. Yes, we can solve that with germline engineering, but that creates a whole new set of problems. Yes, we can make life materially better for everyone by using fossil fuels, but the resultant CO2 causes global warming.


This is a complicated subject, and I am not urging a retreat to some kind of prelapsarian past. But I think we should question the idea that any struggle is bad, that technology and progress have all the answers, and that we can solve all the problems we create.





You should donate to this podcast because if you don’t bad things will happen. (They will probably happen even if you do, but why take the chance?)