
Saturday, June 16, 2018

Things We Cannot Get Wrong

If you prefer to listen rather than read, this blog is available as a podcast here. Or if you want to listen to just this post:



Or download the MP3



Michael Crichton, the author of Jurassic Park and The Andromeda Strain, along with a bunch of other great books, and who, for my money, died too soon at the age of 66 from lymphoma, said many very astute things (and probably some dumb ones as well). I’d like to begin this post by relating something he said about the limits of expertise, what he labeled Gell-Mann Amnesia:


Media carries with it a credibility that is totally undeserved. You have all experienced this, in what I call the Murray Gell-Mann Amnesia effect. (I call it by this name because I once discussed it with Murray Gell-Mann, and by dropping a famous name I imply greater importance to myself, and to the effect, than it would otherwise have.)


Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray’s case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the “wet streets cause rain” stories. Paper’s full of them.


In any case, you read with exasperation or amusement the multiple errors in a story—and then turn the page to national or international affairs, and read with renewed interest as if the rest of the newspaper was somehow more accurate about far-off Palestine than it was about the story you just read. You turn the page, and forget what you know.


That is the Gell-Mann Amnesia effect…


I have certainly experienced this effect and I imagine the rest of my readers have as well, to one extent or another. Crichton’s larger point is about the danger of speculation, but for our purposes the key takeaway is that no matter how authoritative something sounds, there’s a better chance than you think that it’s mostly wrong, and that it may, in fact, advance a point which is the exact opposite of the truth.


I bring this up because we appear to have an example of this happening, and in an area I’m very much interested in. Just recently a Detailed Critique of the “Existential Threats” chapter of Pinker’s Enlightenment Now was posted online. It was written by Phil Torres, a noted scholar of existential risk, and having spent several of the last few posts discussing the book, and in particular Pinker’s dismissal of existential threats (combined with my larger interest in that topic), this seemed right up my alley. The critique is quite long, so I’m mostly going to focus on areas where I have something to add, or where I disagree. In particular, though this may not be an addition or a disagreement, I want to emphasize that of all the subjects Pinker discusses, the one he (and really all of humanity) can least afford to be wrong about is the subject of existential risk. We can survive if the murder rate is much higher than he thinks, or if we got it wrong on same-sex marriage; we can even survive most of the predicted outcomes of climate change. But if we get existential risk wrong, nothing else we got right is ever going to matter.


The key findings of this critique are:


  • Two quotes being used by Pinker in the chapter are in the “wet streets cause rain” category (my words), in that their original meaning is not what Pinker claims and may in fact be “outright contradictory”.


  • The chapter spends most of its time attacking straw men.


  • Pinker’s citations are poorly vetted, and largely non-scholarly, but presented as scholarly.


  • And from those sources, Pinker ignores content which undercuts his arguments, meaning the sources themselves are far more equivocal than Pinker represents.


  • Finally:


Overall, the assessment presented below leads me to conclude that it would be unfortunate if this chapter were to significantly shape the public and academic discussions surrounding “existential risks.” In the harshest terms, the chapter is guilty of misrepresenting ideas, cherry-picking data, misquoting sources, and ignoring contradictory evidence… Because, so far as I can tell, almost every paragraph of the chapter contains at least one misleading claim, problematic quote, false assertion, or selective presentation of the evidence.


Torres then goes on for an additional 20,000 words, and yet, in the end, Pinker’s errors are so dense, at least in this chapter, that he only manages to cover the first third of it. Obviously I have even less space available, so to start with I’d like to focus on the role of pessimism and religion. And to do that I need to identify some of the different groups in this debate.


Pinker opens the chapter by framing things as a battle between those who are entirely optimistic (like himself) and those who are entirely pessimistic. These are the first two categories. What’s interesting is that for all Pinker disparages religion, believers technically don’t fall into the category of those who are entirely pessimistic. To find people who are entirely pessimistic you have to look at people like the antinatalists, whom I recently discussed, or really hard-core environmentalists. Believers are optimistic about the future, particularly over a very long time horizon; they may just be pessimistic in the short term.


Torres takes immediate issue with framing existential risk in this fashion, and he points out that many of the people Pinker talks about are very optimistic about the future:


Pinker’s reference to “pessimists” is quite misleading. Many of the scholars who are the most concerned about existential risks are also pro-technology “transhumanists” and “techno-progressives”—in some cases, even Kurzweilian “singularitarians.”


In other words, these people all firmly believe that a technological utopia is not only possible but likely. They just believe it’s even more likely if we can eliminate potential existential threats.


The fact that Pinker simplifies things in this fashion is emblematic of his entire approach to this subject, and frankly represents some pretty appalling shoddiness on his part, but that’s not the point I want to get at.


As I said I want to identify the various groups, so let’s get back to that.


As I pointed out, the first group is composed of the entirely or mostly optimistic, and I think it’s fair to put Pinker in this category. The group of people who believe that things have never been better and in all probability these improvements are going to continue. To put it in terms of my overarching theme, these are people who believe that technology and progress have definitely saved us.


Our second group is the entirely or mostly pessimistic. I already mentioned the antinatalists, but I would also include people who are convinced that the earth will be made uninhabitable by climate change, or that the Holocene extinction could lead to some sort of global tipping point. Pinker puts people concerned with AI Risk, and those concerned with the possibility that genetic engineering could make it easy to create superbugs, in this category, though Torres argues he probably shouldn’t. That said, there are certainly people who believe that technology has definitely doomed us (and that there is no religious salvation around to mitigate this doom); it’s just not clear how large this group is.


As Torres points out in the quote above, there is a third group, which contains those who experience a mix of optimism and pessimism. Those who feel that the future is incredibly promising, but caution must be exercised. They believe that technology can save us and hopefully will, but that if we’re not careful it could also doom us. Given that the majority of these people don’t expect any religious salvation, you can see why they’re particularly worried about getting it right. For myself, despite not falling into this group, I think it’s clearly superior to group one.


The fourth and final group also contains those who see reasons for both optimism and pessimism, but in this category the optimism is primarily based on faith in a divine being rather than being based on faith in technology and progress. Here, I imagine that Torres and Pinker might set aside their disagreements to declare this group the worst of all. Obviously I disagree with this. And before the end of this post I’ll get around to explaining why.


I suppose it’s not entirely accurate to label that last category as the final group, since there is, of course, the largest group of all: people who never really give much thought as to whether they should be optimistic or pessimistic about the future. Or rather, they may give some degree of thought to their own future, but very little to humanity’s future. The big question is how much influence they wield. Part of the reason why Pinker writes books and why Torres writes rebuttals is the hope that the future will not be determined by this group, that it will be determined by people who have taken the time to read books like Enlightenment Now (and, even better, people who might have read rebuttals like Torres’), though that’s by no means certain. Still, I guess I too will make the same assumption as Pinker and Torres and say no more about this group.


Pinker wants to frame things as a battle between groups one and two, and while I agree that there is a group of hardcore technology pessimists, I don’t think they’re that large. Also, all (or at least the majority) of the people Pinker calls out more accurately belong in group three. Which is another way of saying that this is a good example of the strawmanning Torres is complaining about.


If we set group two aside, both because it’s too small and because of its relative lack of influence, then we end up with most of the attention being focused on the contest between groups one and three. As I said, I think group three is clearly superior to group one, but it’s useful to spend a moment examining why this is.


The first big question is: what are the risks? I mentioned that this is one thing I want to focus on; in fact it’s the point of the title. The risks of being wrong about existential hazards are, from a certain perspective, infinite. If we make a mistake and overlook some risk which wipes out humanity, that’s basically an infinite loss, at least from the standpoint of humans. If you’re not comfortable with calling it an infinite risk, then it’s still an enormous risk, as Torres points out:


...This is not an either/or situation—and this is why Pinker framing the issue as an intellectual battle between optimists and pessimists distorts the “debate” from the start.


(i) given the astronomical potential value of the future (literally trillions and trillions and trillions of humans living worthwhile lives throughout the universe), and (ii) the growing ability for humanity to destroy itself through error, terror, global coordination failures, and so on, (iii) it would be extremely imprudent not to have an ongoing public and academic discussion about the number and nature of existential hazards and the various mechanisms by which we could prevent such risks from occurring. That’s not pessimism! It’s realism combined with the virtues of wisdom and deep-future foresight.


This is the position of group three, and as I said I think it’s pretty solid. The risks of not paying attention to existential hazards are enormous. On the other side, what is the argument for group one? What are the risks of paying too much attention to existential hazards? Here’s Pinker explaining those risks:


But apocalyptic thinking has serious downsides. One is that false alarms to catastrophic risks can themselves be catastrophic. The nuclear arms race of the 1960s, for example, was set off by fears of a mythical “missile gap” with the Soviet Union. The 2003 invasion of Iraq was justified by the uncertain but catastrophic possibility that Saddam Hussein was developing nuclear weapons and planning to use them against the United States. (As George W. Bush put it, “We cannot wait for the final proof—the smoking gun—that could come in the form of a mushroom cloud.”) And as we shall see, one of the reasons the great powers refuse to take the common-sense pledge that they won’t be the first to use nuclear weapons is that they want to reserve the right to use them against other supposed existential threats such as bioterror and cyberattacks. Sowing fear about hypothetical disasters, far from safeguarding the future of humanity, can endanger it.


Honestly this seems like a pretty weak argument. And Torres points out several problems with it, which I’ll briefly recap here:


  1. There’s a big difference between, for example, the warnings about AI provided by Elon Musk and Stephen Hawking and whatever it was Bush was doing.
  2. This overlooks all the times when people warned of catastrophe, and it turns out we should have listened. Exhibit 1 for this is always Hitler, but I’m sure I could come up with half a dozen others.
  3. This also overlooks times when warnings were acted upon, and the problem was fixed, sometimes so well that people now dismiss the idea that there was ever a problem in the first place. Pinker offers the Y2K bug as an example of techno-panic, and Torres goes on to show it really wasn’t. I don’t have time to get into that here, but Pinker seems to assume that warnings of catastrophe are never appropriate and always bad, which is almost certainly not the case.


To this list I’ll add that neither of his examples is particularly good.


  1. The Iraq War was bad, and in hindsight almost certainly a mistake. But it wasn’t a catastrophe, certainly not compared with other potential catastrophes throughout history. Perhaps when considered only from the perspective of the Iraqis themselves it might be, but I’m not sure even then.
  2. If the nuclear arms race had led to World War III, then Pinker would certainly have a point, but it didn’t. However mistaken you think the arms race was, we avoided actual war, despite the fact that many people were pushing for it. (I think the number of people who thought it might be okay went down as the number of nukes went up.) How sure are we, really, that a world where the arms race never happened would be better than the world we currently have?


Pinker brings up other risks, which Torres covers as well, but none of them, when set in the balance, outweigh the colossal risks of potential existential hazards.
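
To put rough numbers on that balance, here is a minimal expected-value sketch. Every figure in it is invented by me purely for illustration; neither Pinker nor Torres commits to any of these values:

\[
V = 10^{16} \ \text{future lives}, \qquad p = 10^{-4} \ \text{chance of extinction}, \qquad c = 10^{6} \ \text{lives lost to a false alarm}
\]
\[
E[\text{ignoring the hazard}] = p \cdot V = 10^{-4} \cdot 10^{16} = 10^{12} \ \text{lives}
\]
\[
E[\text{sounding a false alarm}] = c = 10^{6} \ \text{lives}
\]

Even with the extinction risk set very low and the false alarm made very costly, the expected loss from ignoring the hazard comes out a million times larger; the ledger only balances if you assume the probability of extinction is a million times smaller still.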


Before we move on, there’s another argument Pinker makes that deserves to be mentioned. One of the key points which determines how risky technology has made things is the ease with which an individual or a small group can use it to cause massive harm. Pinker claims that technology’s interconnectedness has made this harder. He makes this claim primarily based on the increase in the number of intersecting technologies, all of which would require separate areas of expertise in order for an individual to cause any harm. He concludes that this makes harm less likely than in the past, and that we have been misled in this respect by the Hollywood image of the lone genius. That, rather, it would take a whole “team of scientists”, and that maybe they wouldn’t be able to do it either.


This certainly doesn’t match my experience of things, and Torres takes serious issue with it as well, going on to provide nine counter-examples of small groups either causing massive harm, or having done all of the work necessary to cause massive harm but stopping in advance of any actual harm. If you read nothing else from the original paper, I would at least review these nine examples. (They start on page 31 and go through page 33.) They’re quite chilling.


The overall feeling I came away with after reading Pinker’s chapter on existential risks, a feeling Torres appears to share, is that Pinker thinks that people who are pessimistic about technology aren’t acting in good faith to prevent some disaster, but rather they’re doing it as part of some strange intellectual exercise, a weird game perhaps. Here’s an example of Pinker expressing this sentiment:


The sentinels for the [old] horsemen [famine, war, etc.] tended to be romantics and Luddites. But those who warn of the [new] higher-tech dangers are often scientists and technologists who have deployed their ingenuity to identify ever more ways in which the world will soon end.


Torres joins me in thinking that this makes it sound like people who are concerned with existential risk are “devising new doomsday scenarios [as] a hobby: something done for the fun of it, for its own sake,” and goes on to state, “That’s not the case.” So far Torres and I are in agreement, but I would venture to say that Torres makes a similar claim about people who are worried about some sort of apocalypse for religious reasons, and in a couple of places in his paper he goes out of his way to put as much distance as possible between his worries about existential risk and the apocalyptic worries of traditional religions. And here’s where we finally turn to group four: the religious pessimists (who nevertheless hope for divine salvation).


I’m not an expert on the status of apocalyptic beliefs among all the world’s religions, but I get the impression that they’re pretty widespread. Certainly the apocalypse is a major element of Christianity, the religion I am the most familiar with. But regardless of how widespread such beliefs are, Torres wants to make sure that his worries about existential risk are not lumped in with the apocalyptic concerns of the religious.


The question is why? Why are the religious fears of an apocalypse different from the fears Torres is defending? Torres objects to the idea that researchers in existential risk are “devising new doomsday scenarios [as] a hobby,” but where does he suppose that religious apocalyptic fears come from? Is he copying Pinker now, and assuming that this was the hobby of early Christians? Something to spend their free time on while undergoing persecution and attempting to spread the gospel?


I suppose that’s possible. That if it was not exactly a hobby, it at least served no useful purpose; that it was just pointless baggage which for some reason accumulated within Christianity and did nothing for either the religion in which these ideas accumulated or for the followers of that religion. As I said, this is possible, but it seems unlikely.


Another possibility is the Talebian one: that there was something in these beliefs which made those who held them less fragile. It’s not hard to imagine how this could be the case. If apocalyptic beliefs led those who held them to be more prepared for eventual (and historically inevitable) disaster, then it’s easy to see where those beliefs came from and how they might have persisted. This possibility would appear to make a lot more sense than the first.


Of course there’s one final possibility: that there is in fact a religious apocalypse on its way, and John of Patmos was actually warning us of something real. But if you reject this possibility, as I assume Torres does, then the second possibility still makes a lot more sense than the first. Which is to say, it could be argued that it’s a huge support for Torres’ point.


The central argument between Torres and Pinker boils down to a question of whether it’s a net negative to worry about future apocalypses (Pinker’s view) or whether it’s a net positive (Torres’ view). To which I would argue that historical evidence suggests that it’s definitely a net positive, because that’s basically what people have been doing for thousands of years, and it’s safest to assume that they had a reason for doing it. Particularly given the fact that, as I’ve been saying, this is one of those things we cannot get wrong.


In closing I have two final thoughts. First, I think one of the benefits of bringing religion into the discussion is that it allows us to tap into thousands of years’ worth of experience. Contrast this with Pinker (and to a lesser extent Torres), who are arguing about a world that has only existed for the last few decades. It’s really difficult to know if we’ve recently reached some new plateau where existential risk is so low that worrying about it causes more harm than good. But if you bring in religion, and tradition more broadly, the answer to that question is “probably not”. We probably haven’t reached some new plateau. There probably is reason for concern. The last 70 or so years are probably an anomaly.


Second, I would be remiss if I didn’t mention that Torres ends his paper by referencing The Great Silence, which is another name for Fermi’s Paradox, and points out, as I have on many occasions, that if we don’t have to worry about existential risk, then where is everyone else? Sure, we all agree that there are lots of potential explanations for the silence, but one of them, to which we have no counterfactual, is that Pinker is horribly, fantastically wrong, and that technology introduces a level of fragility which will ultimately and inevitably lead to our extinction, or in any case will be inadequate to save us.






If you think there’s some point to religion, consider donating. If you think there’s some point to being worried about existential risk, consider donating. If you think Pinker could use more humility consider donating. And finally if you think it’s a tragedy that the Netherlands is not in the World Cup, consider donating. Because it is...

14 comments:

  1. Thought on Gell-Mann Amnesia: Say you are an expert in some specific subject and read a newspaper article about the field. You will most likely pick out dozens of errors. However, even if that is the case, it is nonetheless perfectly rational to trust another article about some totally different topic (say 'far off Palestine').

    Approach this issue by asking just how wrong the errors in the article you read are. Would it be a D- paper in school? That's 60%. A C- is 70%. If you know nothing about 'far off Palestine', a source that's right 60-70% of the time is incredibly helpful.

    Another perspective, the bad apple. If the piece you read is exceptionally misinformed, the key word there is exception. You are more likely to find other articles to be more reliable.

    Yet another perspective: the Dunning–Kruger effect. You may in fact be overestimating just how good you are in a field and misjudging the consensus. Crichton was a successful author who saw a few of his books made into major movies. Was he an expert on all of entertainment? There were bigger stars than him. While he might have thought of himself as a master of the game, perhaps he was less impressive than his self-assessment.

    Replies
    1. The quote directly addresses some of these concerns by defining a bounded hypothetical. It says that the errors lead to opposite conclusions to what expert analysis would come to. Thus, the errors are irredeemable, because however true some parts might be (to what degree we define "what they got right" seems akin to the infinite shoreline paradox; do you count the spelling of certain words, for example?) the existence of those errors ruins the entire story. So a non-expert reading the story would think they have new knowledge, but if that knowledge is good for anything - say, perhaps, contributing toward a national debate on policy - it is only good for confusing the point. In fact, the more factually accurate the other parts of the article, the greater the potential damage! Because then the person who has read this one article - or perhaps a series of them from the same wrong source - feels like they are able to engage in meaningful discourse on the subject, and should they meet an actual expert in the field they may find themselves arguing against the best available evidence with much more confidence than they have any right to.

      The quotation under discussion resonates, in part, because what it describes is not theoretical for me. I don't want to speak to the expertise of M. Crichton. I remember one of his biology books had the science REALLY wrong (don't remember which one at the moment, but it was embarrassingly bad - not JP1, though, that was ... reasonable). But the quotation Jeremiah used was very true of my own experience. There are a few good science reporters out there, but most of the science reporting you read - including nearly everything in NYT or WaPo or any major paper - is complete bunk. And yes, I have found myself arguing with people who have read some article stating something they now "know" must be true, but that is actually diametrically opposed to all available evidence and believed by nobody in the field.

      It's entirely possible for an article to get things mostly right, and introduce errors that are more or less inconsequential. I think any expert who has read a well-written article has experienced this phenomenon. Part of translating complex technical concepts into something of general interest is that you have to oversimplify, and potentially introduce errors. None of the experts I know have a problem with this type of article. They are, honestly, refreshing to read. Because most of the science reporting - and I'm increasingly convinced most reporting in general - is not worth the time it takes to read it.

    2. Certainly everything you list is a possibility, but all the Gell-Mann Amnesia stuff was just me setting things up, throat-clearing if you will. Also you seem to mostly be talking about the average news consumer, where 60-70% accuracy is fine, but Torres is worried about Pinker's stuff being used at a very high level. Having read my post, what grade would you give Pinker's chapter? I don't think it even rates a 60%.

      But to get back to the point about the difference between an average news consumer and a decision maker: certainly a paper on Iraq could have missed the Sunni/Shia divide and still gotten, in your terms, a D-.

      Perhaps it was exactly this sort of paper that Bush read? http://www.politics.ie/forum/foreign-affairs/30170-bush-didnt-know-difference-between-sunnis-shias-2003-a.html

      But it turns out that overlooking that divide ended up being enormously consequential when it came to the invasion of Iraq...

    3. Let's go up a notch. Suppose reporting is only 30% accurate. It doesn't validate Gell-Mann Amnesia. Why?

      1. There's only one (or at most a few) ways to be right, and many ways to be wrong.
      2. If you're running the world's largest military and are thinking of invading Iraq, couldn't you be expected to read more than one article about the country?

      If you read multiple articles, the accurate facts are likely to repeat while the less accurate ones are not. Also, strictly speaking, some types of inaccurate facts are functionally accurate. For example, if someone mixed up Sunnis and Shias and assumed Shias were the minority who held power under Saddam and Sunnis the majority, they would still have the essential concept of a country split by religion where a minority ruled the majority (often not nicely). Of course your two-way split isn't really accurate either. You forget the Kurds are a distinct third group in Iraq.
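
      Here's a toy simulation of that first point (all parameters made up: each article is independently right with 30% probability, and a wrong article picks one of ten possible wrong versions of the fact at random):

      import random
      from collections import Counter

      def read_articles(n_articles, p_correct=0.3, n_wrong=10):
          # Each article independently reports the fact correctly with
          # probability p_correct; otherwise it picks one of n_wrong
          # possible wrong versions at random.
          return ["true" if random.random() < p_correct
                  else f"wrong-{random.randrange(n_wrong)}"
                  for _ in range(n_articles)]

      def majority_correct_rate(n_articles, trials=10_000):
          # Fraction of trials in which the most-repeated version of the
          # fact is the true one. (Ties go to whichever version showed up
          # first; good enough for a toy model.)
          hits = 0
          for _ in range(trials):
              top = Counter(read_articles(n_articles)).most_common(1)[0][0]
              hits += (top == "true")
          return hits / trials

      for n in (1, 5, 10, 20):
          print(f"{n:2d} articles: majority version correct {majority_correct_rate(n):.0%}")

      Even at 30% accuracy per article, the majority version across a dozen or two articles is usually the true one, because the single true version repeats while the errors scatter. (Independence between articles is the load-bearing assumption here.)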

    4. I think there's an implicit assumption here that news articles are a.) written independently from one another, and b.) that the value signaling for news articles favors accuracy to any degree.

      I would strongly disagree with both these assumptions.

    5. 1. True, articles are not independent but tend to be dependent on each other (if there are ten articles about incident X, the odds there will be an eleventh go up).

      2. This does mean false statements can get embedded. For example if ten articles say Saddam 'clearly has WMDs', this tends to get embedded going forward. If we were just talking about a simple model where there's a 30% chance some statement will be wrong...well it's unlikely to see the same wrong statement keep showing up over and over.

      That being the case, you are still better off 'trusting' the media for topics you are not expert on rather than not.

      I would point out that dynamics in the news also push in the opposite direction, from inaccuracy to accuracy. The 'news cycle' means articles will cluster around a single topic (say, taking kids from aliens trying to get asylum). But it is hard to say something new about the same facts and stand out from a dozen other people doing the same thing. Refuting some 'established' fact solves both problems nicely if you can pull it off. Just look at how many times you've seen a headline along the lines of "everything you think you know about X is wrong".

    6. I think you and I have a fundamentally different view of the news media. You seem to assume news reporting is generally operating in good faith. I don't. Look at the recent news scandals showing news organizations:

      1. Colluding with political organizations to help preferred candidates in debates
      2. Running articles by campaigns to ask about editorial (not factual) advice - then changing the editorial angle based on that feedback!
      3. Taking direct suggestions from campaigns for articles to write. "How about ten ways (so-and-so) sucks. I've provided ten suggestions" of which the resultant article used seven.
      4. Direct implication by leaked emails stating, "I'll plant a story about X for the Monday news cycle in NYT or WaPo. Not sure yet which one I want it to come out in." Then the story came out as suggested, with the specific language suggested.
      5. Literal talking points distributed to news organizations, with statements such as "focus reporting on this angle" or "use this language to describe candidate X".
      6. Among many other egregious examples.

      Politics is not the only place they've been caught doing this garbage, either. So when you see news media constantly focusing on one story and using similar language to do so, you seem to assume they're practicing standard reporting practices. I assume much of what makes the front page is planned garbage written by hacks. Or that it is impossible to tell the difference, since we now know they have no problem putting phony reporting in there.

    7. Few thoughts before we lose this thread:

      1. Sometimes errors represent better knowledge rather than worse. Flubbing the Shia-Sunni divide in Iraq at least indicates some knowledge that Iraq is a country demographically divided between two major groups. In a sense getting this wrong (say, thinking Sunnis are the majority) represents a more accurate understanding than the guy who just sees Iraq as 'a bunch of Arabs'.

      2. Imagine you have been kidnapped by North Korea. You are totally isolated and made to do one job. You are to assign and grade research papers given to children of NK elites who are expected to master English. These kids are very spoiled, though. At best they will do lazy research work, and tend to copy from each other. Nonetheless, being the kids of the elites they have full access to the Internet. Likewise you are free to give them any research assignment you please.

      QUESTION: Could you use your position here to keep up to date about what's going on back home? Even though they tend to make a lot of mistakes and even worse tend to copy from each other!? I would say yes you could. I suspect you might find Gell-Mann Amnesia makes complete sense. If you assigned them something you already know well, you'd be distressed at their numerous errors. However if you assigned them stuff you didn't or couldn't know (like keeping up with the latest things happening in the NFL), you'd find yourself reasonably trusting your information source and it would in fact be rational to do so.

      Given this extreme case I don't think any of the usual carping done about the MSM really justifies dismissing it as conservatives are often inclined to do these days.

    8. I don't think your examples really drive home the point you're trying to make. Take Iraq, for example. The problem wasn't just that it was a divided country - so was the US prior to the Civil War. The problem was that there was a fundamental distrust where each side believed the other side wanted to kill the other. They had good historical reasons for believing this was the case. Thus, just knowing the country was divided in 2002 wasn't sufficient to understand that "invading a divided nation and removing an oppressive regime" would inevitably lead to civil war. Indeed, that superficial understanding led us to implement the disastrous policy of removing the old partisans from office, etc., etc. We did things that made the situation worse because we thought we understood a situation but we didn't. Shallow reporting - whether biased or not - isn't going to help you understand a foreign subject. This becomes entirely clear when you see reporting on a subject you know, and more so when you hear people you know make conclusions based off of bad reporting.

      To then insist that it's rational to then view other shallow reporting as somehow useful still doesn't make any sense to me. Because we have more than just personal experience a la Gell-Mann Amnesia. We have the really horrible track record of MSM reporting - both on the Right and the Left! The problem isn't that there's some left-wing bias in MSM. The problem is that everyone is playing to an audience and a narrative and getting paid or tricked into doing so. Yes, there's a clear Left-wing bias in reporting from many main-stream news organizations. Yes, that has caused a bunch of Right-wing outlets to spring up and become just as biased. Whatever. That's not a problem to be solved. If we had two partisan reporting angles (as the Right often likes to claim they provide but they seldom deliver) doing in-depth journalism we'd be better off for it. But they're not actually doing any of that, they're all just crafting baseless propaganda pieces.

      The people who incorrectly predicted Obama was a flash in the pan and wouldn't beat out Hillary in 2008 were the same people out there making straight-faced predictions the Next Day. The same people who literally handed Trump the Republican nomination with a bunch of free media because "he couldn't possibly win" now run around trying to not only predict his future behavior (with even worse accuracy) but also tell us his inner-most thoughts. And this isn't a special Trump phenomenon; the Right-wing media loved to mind-read the most inconsequential Obama policies as all part of the Master Plan To Destroy America. Could reading one of those inane articles help you get a kernel of truth about obscure regulatory policy that was in any way worth the time and distorting impact that came with the rest of the piece? I can't see how.

      These people are provably wrong about all the most important details and predictions time and again, but I'm supposed to try and get something useful out of it? We have a saying in the lab you're probably familiar with: Garbage in = garbage out.

      We live in an incredible age where a huge amount of credible information is literally at our fingertips, packed with links to actual sources. I just don't understand an argument that suggests it's in any way useful to waste time sifting through a mountain of biased, paid-for, shoddy reporting-to-a-predetermined-theme in order to possibly glean some shadow of the Truth. Actual, rare reporting in the last decade has revealed that modern journalism has no soul, no scruples, and no interest in doing the job of journalism. I don't see why we should consider them source material at this point. That's something they have to earn. And they've failed to do so.

  2. A couple of thoughts:

    1. Perhaps the first error here was in following Pinker's lead by categorizing. The world, and people, are driven by multiple motivations and causes simultaneously. I would say the majority of people fall into a group that can be categorized as, "Tangentially aware of issues others characterize as existential threats to them. They then either believe or don't believe in those specific threats. They have some/little/no power to do anything about them. They share their concerns or lack thereof, or they don't." For example, someone might be really interested in AI risk, but less convinced global climate change could become an existential threat - and vice-versa.

    2. Just a quibble here, but it's an important one: existential threats are different from catastrophic ones. Especially in the context of Taleb's ideas in "Skin in the Game". An existential threat is one where humanity wipes itself out entirely. Where we are no longer capable of growth, but either we are destroyed suddenly, or our numbers dwindle past the point where we can perpetuate the species into the future. A catastrophe might set us back a hundred years or more. It might wipe out 99.9% of the human population. But it is different IN KIND from an existential threat. We should still be worried about catastrophic threats, but they require a different type of analysis than existential threats.

    3. I'm highly skeptical of most claims that a threat is existential and not 'merely' catastrophic. For example, the concept of a nuclear winter destroying all life on Earth - or even all human life. Nuclear war would be horrible and catastrophic. I highly doubt it would cause human extinction. This is one of those things I learned in public school that had to be unlearned later by reading about the weakness of the evidence. Mostly, I think nuclear winter is a zombie theory nobody has any interest in refuting because it's a useful zombie theory. But if we're discussing existential vs. catastrophic, it belongs in catastrophic. Same with smallpox specifically, and bioterrorism in general. The idea that we're going to introduce something that will have a 100% kill rate AND will spread throughout the population to infect 100% of humanity is ... unlikely.

    (I had a roommate who once said, "I almost never speak in absolutes, and even then I'll add a qualifying statement." Which sums up how most scientists I know talk. In this case, please read "unlikely" to mean "this goes against all available evidence, theory, and historical experience; as an expert in this area I can't even think of a mechanism whereby, if humanity were strangely motivated to do so, we could pool our resources to create something capable of doing this thing".)

    I honestly can't think of a single human-caused disaster that would be existential and not catastrophic. In my experience, there is a strong tendency to miscategorize catastrophe as existential threat, and by doing so to vastly overestimate the potential damage a catastrophe might have. In theory we should be able to wipe ourselves out. In practice, I don't think we've gotten to that point yet. That doesn't put me on Pinker's side. I think we should avoid catastrophe as well as existential threats - provided we do so with a clear-eyed understanding of how bad things might get. I am cautiously persuaded of the general applicability of the Adams Law of Slow-Moving Disasters, but to the extent it works it's only applicable if the observation doesn't change general human behavior. So ... I'm not going to tell you to stop worrying about future catastrophe. But I'm not losing sleep over the things we currently know are potential threats.

    Replies
    1. I agree that catastrophic threats are different from existential threats, and I'm also skeptical that a threat is existential and not, as you say, 'merely' catastrophic. I even wrote a post about it:
      https://jeremiah820.blogspot.com/2017/07/the-apocalypse-will-not-be-as-cool-or.html

      I was going to include that caveat, but it took things off on a weird tangent, so I left it out. But I think even so most of the points about underestimating downsides still stand.

    2. "Mostly, I think nuclear winter is a zombie theory nobody has any interest in refuting because it's a useful zombie theory. "

      1. Nuclear winter theory, at least as I heard it, would be catastrophic but not existential. It does imply, however, that the casualties from a nuclear war would continue long after the last bomb exploded.

      1.1 I would add Infrastructure Winter as well. Lots of places people live are built upon being constantly supplied via ports, roads, trains and airports. NYC, for example, could not host millions of people by growing veggies on rooftops and using Central Park to graze cows and sheep and there is no easy way to just move out millions of people from a densely populated area. The result I think would be that population would 'downsize' even in areas that might be relatively spared.

      2. I'm not sure why you say it's a 'zombie theory'. There isn't any way to experimentally test it by blowing up a few thousand cities, but massive volcano eruptions do seem like reasonable proxies.
      https://en.wikipedia.org/wiki/Nuclear_winter seems to show a lot of serious looks at it. The fires set by Saddam after the Kuwait War seem to be a partial problem for the theory, but then it was just oil and gas that burned for 8 months, and not cities, so I'm thinking that would be a lot less soot in the air... plus if the war never happened wouldn't all that oil and gas have ended up being burned anyway?

    3. Most, sure, but it's a different analysis, along the lines of Taleb's ideas in Skin in the Game. Existential threats mean there are no bounds to how strenuously you need to work to avoid the threat. Catastrophic means a highly negative outcome might be reasonably weighed against both its probability and its mitigation costs. You might talk of limiting the impact to 10% of its expected downside. You can't get a percentage of a state variable like 'extinct'.

      I only brought it up because, arguably, when catastrophic possibilities are characterized as existential in nature, the historical outcomes have often been poor. Societies tend to overstate impacts, overreact, and create catastrophe in turn. This is similar to Pinker's criticism, but subtly distinct. I'm not arguing there is nothing to worry about, or that worrying is inherently harmful. I'm arguing we need to be cautious about ensuring we worry to the appropriate degree.

      One more thing to worry about, in other words.

  3. 4. Despite my quibbles, I would like to echo the main thrust of this post, which is to point out that there is a tendency of the current generation to believe that new knowledge has supplanted accumulated wisdom (or in the last few decades to believe that it supplants cumulative genetic adaptations). There are lots of good arguments that this is the case. However, anyone who has read enough books from the 19th, 18th, and even 17th centuries will recognize all these arguments as having been recycled from a time when - looking back at where they were at the time - we can now see they were obviously dumb. Of course people in the 1850's weren't sufficiently technologically advanced to ignore cumulative historical/genetic wisdom! To think so at the time now appears obviously naive. And the things they proposed to do based on "modern scientific and philosophical understanding" are not only absurd, but could only end in disaster.

    This time it's different? But the arguments are the same. Prudence suggests we should be wary of our proven tendency toward over-confidence that what we know now is sufficient to make up for what we don't even realize we don't know yet.
