Support This Blog On Patreon!


All I ask is $1 a month. (But more is great too.) If you find this content to be beneficial, interesting, or just a fascinating peek into true insanity, please donate.

Saturday, May 27, 2017

Is Pornography a Supernormal Stimulus?


Recently I read a fascinating book review on my favorite blog, SlateStarCodex. Scott was reviewing The Hungry Brain, by Stephan J. Guyenet. His review intrigued me enough that I immediately bought the book and started reading it. If you're interested in the neurology of eating, and why we overeat, I would definitely recommend it. That said, it is not primarily about how to lose weight; it's more about the brain's system for determining whether we're full, and how the modern world has created an environment which overwhelms that system. As I said, the book is very intriguing, particularly in the way it rejects the idea of a balanced diet, but I'm not going to spend any time on that. Instead I'd like to focus on a concept the book brings up sort of in passing, though it could also be said to be the overarching theme of the whole book: the concept of supernormal stimuli.

Guyenet introduces the subject by relating the findings of a study which had been conducted on the nesting habits of ringed plovers. The two scientists conducting the study discovered that the birds preferred exaggerated artificial eggs to their own eggs. Guyenet summarizes the results as follows:

A typical ringed plover egg is light brown with dark brown spots. When Koehler and Zagarus offered the birds artificial eggs that had a white background and larger, darker spots, they readily abandoned their own eggs in favor of the decoy. Similar experiments with oystercatchers and herring gulls showed that they prefer larger eggs, to the point where they will leave their own eggs in favor of absurdly large artificial eggs that exceed the size of their own bodies.

You can see why he introduces this topic in a book about overeating, particularly the ways in which the modern world has created what might be called supernormal food. But technology has not only changed the food we eat, it's changed nearly everything about our lives when compared to those of our distant ancestors. And it's those other changes that I want to examine.

The term "supernormal stimuli" was coined later by another scientist to describe stimuli like the absurdly large eggs: things which are better than anything found in nature, but which paradoxically produce worse outcomes. It's obviously bad for a bird if she spends all of her time sitting on an artificial egg as big as herself, rather than sitting on her own eggs.

As I said, I'm more interested in looking at the role of the supernormal beyond the obvious areas of food and birds' eggs, and Guyenet himself acknowledges the potentially wider application of the phenomenon:

It seems likely that certain human innovations, such as pornography, gambling, video games and junk foods are all supernormal stimuli for the human brain.

The whole concept is fascinating to me, and I can imagine all manner of things it might explain. For instance, Guyenet mentions pornography, and those of you who've been listening to this podcast for a while know I have a deep distrust of the conventional wisdom on the subject of pornography, so let's start there. How might it be, as Guyenet suggested, a supernormal stimulus?

Well, first let's step back and examine why there are supernormal stimuli in the first place. It all stems from the fact that certain things just don't occur in nature, primarily because they're impossible, or at least extremely rare. Consequently there was never any evolutionary pressure to protect against these nonexistent things. As Guyenet points out in the book, in the case of food there was never any danger of people regularly having 1,000-calorie meals two or three times a day, seven days a week, for years on end. The food supply just wasn't that stable. And thus the body has very little in the way of defense against gaining weight on that kind of diet. In a similar fashion, there was never any danger of a bird abandoning her eggs for eggs as big as she was, because she could never lay those eggs in the first place. Which is to say there's no evolutionary backstop against this kind of thing, no innate protection against going too far in one direction.

From an evolutionary perspective, the rule "bigger is better" worked because scientists were never sneaking into birds' nests and putting in massive artificial eggs. Now it is true that cuckoos get other birds to raise their larger eggs by exploiting these preferences, and the book goes into detail on that, but that doesn't make the situation any less problematic. It just shows that organisms can't even fully protect against natural supernormal stimulation. And if they can't, how much worse are they going to be at protecting against artificial supernormal stimulation?

With this explanation in place, I think the idea that pornography is a supernormal stimulus should be self-evident. But if you remain unconvinced, I'll spell it out. In essence, "Life is a game of turning energy into kids," which is another quote from the book, one Guyenet borrows from anthropologist Herman Pontzer. And whereas over-eating is supernormal on the energy side of this game, pornography might be supernormal on the kids side. Just as birds have evolved to really want to sit on eggs, humans have evolved to really want to have sex, since both increase the number of offspring. But just as birds will sit on large artificial eggs in preference to sitting on actual eggs, it's very likely that humans will watch large amounts of artificial sex in preference to having actual sex.

There are of course arguments which could be made against this assertion. You could argue that humans are different from birds. This is undoubtedly true, but given the enormous demand for pornography, is there really any evidence that humans are any less stimulated when it comes to sex than birds are when it comes to sitting on eggs?

You might also argue that even if pornography has exactly the effect I described, it's a good thing, because we're better off with fewer kids. Perhaps, but as I pointed out in a previous post, most developed countries already have below-replacement-level fertility, and the generation of people raised on internet pornography is only just starting to hit peak childbearing age.

Finally, you might argue that watching people have sex is not the same as having sex. Once again that's certainly true, but there is also some large segment of the population for whom it's obviously close enough, and getting closer. Pornography is only getting more realistic, which means its potential as a supernormal stimulus is only going to increase.

The other day, I was taking a break and ended up watching a clip from the Conan O'Brien show where he was doing his "Clueless Gamer" segment. In this particular edition they had him playing a VR game on the Oculus Rift. Once he finds out that it's a virtual reality game, literally the next thing out of his mouth is that VR is for sex. Now, I don't think that Conan is personally longing for a world of VR sex, but there are lots of people out there who are. And given the prominence of pornography on the internet, can there be any doubt that once technology gets to a certain point, pornography will be equally ubiquitous in virtual reality?

As I said, once this happens, and if past technological progress is any guide, virtual reality sex will become increasingly indistinguishable from the real thing, and any arguments about whether pornography is voyeuristic as opposed to participatory will become increasingly moot. When this happens we can hope that we have some baseline level of morality which will kick in, and that taboos against VR sex will keep it from becoming widespread. But I've seen no reason to hope that this will be the case. Thus far modernity has done a remarkable and quite thorough job of knocking down taboos and sidelining nearly everything which resembles traditional morality. Which makes it very difficult for me to imagine that VR sex will be the one place where we finally hold the line.

As usual when discussing pornography, there is the standard assembly of people ready to defend it, and VR pornography is no exception. A cursory search turns up an article from TechCrunch which discusses worries about greater realism, and in which we are informed that:
  • Any worries about VR pornography being too realistic just mean that we need greater “porn literacy”.
  • That worries about VR pornography should be viewed in the same light as worries that bicycles would turn women into lesbians.
  • That “the fear of VR porn is simply more technophobia as we’ve seen so many times in the past.”
  • That being able to use VR to switch your own gender will allow people to “open up brave new dimensions to their own sexuality and sensuality.”

These are all bold predictions for a technology that's barely in its infancy. And are they really going to put forth the argument that providing something virtually indistinguishable from actual sex is the same thing as bicycle riding, and therefore any worries should be dismissed? This is where I think the framework of supernormal stimuli really comes in handy. Worries about VR pornography map very well onto the bird egg analogy we started with: VR pornography replaces the stimuli for some deep evolutionary drive with something artificially supercharged. In the example of the bikes, what deep evolutionary drive were they supposed to be stimulating? The need to go down hills fast? And what are bikes an artificially supercharged version of? Walking? In other words, I think that specific point from the article is definitely an apples-to-oranges comparison.

I have no problem granting that there has been technophobia in the past which later proved to be unfounded. One example I hear frequently mentioned was the fear that people would asphyxiate on the first trains because of their high speed (over 20 mph). And if it will make you feel better, I have no problem admitting that this was an example of unfounded technophobia. But if I'm going to admit this, then I think it's only fair for those pointing out past overreactions to admit, on the other side, instances where fear of technology fell far short of reality. Prior to World War I lots of people worried about aerial bombardment (which didn't really come into its own until World War II), but how many people worried about the carnage which could be inflicted by more advanced artillery and the machine gun? And for those who did fear aerial bombardment, it turns out that they were just a little bit premature.

All of this is to say that yes, it is certainly possible that, as the article claims, worries about VR pornography are overblown. But it appears more likely that people who want to draw analogies between these worries and past instances of technophobia are missing important differences, and further, that not all previous instances of people being afraid of technology have ended up being groundless. Sometimes we have every reason to worry about technology, and we may in fact underestimate how bad it is.

If opponents can't rely on historical analogies to dismiss the idea that pornography, and specifically VR pornography, is a harmful supernormal stimulus, perhaps they can fall back on the data? Here I think the opponents continue to be on shaky ground, though it's hard to get a good sense of the data. Pornography is one of those very divisive issues where it's hard to separate facts from opinion and anecdote, but one of the most common ideas I came across was to compare pornography to alcohol:

For some people alcohol simply has the effect of making them more relaxed, letting them have more fun. For other people it's true that alcohol can increase the likelihood that somebody will behave in a violent way.

But if I simply make the overall generalisation alcohol causes violence or leads to violence, you'd probably say that's glossing over a lot of the nuances.

Similarly with pornography, for some people, it may be viewed as a positive aspect of their life and does not lead them in any way to engage in any form of anti-social behaviour. For some people who do have several other risk factors, it can add fuel to the fire.

OK, so pornography is alcohol? As you can imagine this does nothing to make me feel better about things. First, as a Mormon, I'm also pretty opposed to alcohol. Second, notice that we're not even talking about VR pornography, which may be to normal pornography what opiates are to alcohol. Finally, if it is alcohol, could we at least do a better job of keeping it away from kids? The few attempts at this which have been made have been dismissed as puritanical at worst and unworkable at best.

In the end the data has enough ambiguity that it will probably support whichever position you came in with (which is true for most things). But even if the data showed that pornography had a positive effect (which some people think it does), there would still be reasons to doubt that conclusion. When it comes to pornography we're dealing with a very short time horizon during which the impact could be discernible. If, as I suggest, the more realistic the pornography the greater its potential damage, then we've had essentially no time to evaluate the effects of VR porn, and even video pornography has only been widely available on the internet for about 10 years. We're thus in a situation where, on the one hand, there's not a lot of good experimental short-term data, and on the other hand, it hasn't been nearly long enough to have any idea of the societal impacts.

And of course this is something I come back to over and over again: people dismiss a danger based on the experience of a few short years, when some things take decades, if not longer, to play out.

I had initially intended to use pornography as just one example of supernormal stimuli among many, but apparently I had more to say on the subject than I thought. Still, it might be useful before we end to look at one more potential example. I've already talked about virtual reality, and even though I worry that pornography will be a big part of that (some people estimate it will be the third largest category), the biggest use for VR will be video games. And incidentally, video games are another thing Guyenet mentions, in the passage quoted at the beginning of this post, as a potential supernormal stimulus.

This ties into many of the themes of this blog. For one, virtual reality might be a step in the direction of transhumanism, and as I am mostly opposed to transhumanism, this is one more thing to add to the list. Secondly, there are some people who believe that Fermi's Paradox can be explained by VR: that intelligent species get to a point where they have no need to explore or expand because they can simulate all the exploration (and anything else) they desire. And finally, it gets back to the issues of community and struggle, both of which, I would argue, video games provide a poor facsimile of.

Discussing video games brings up one of the symptoms of a supernormal stimulus, one which I haven't discussed yet, but which could apply to pornography, food, and video games: addiction. I didn't bring it up previously because people generally don't talk in terms of an addiction to food (it's hard to view something you need to live as a possible addiction), though if you read Guyenet's book it's easy to see how people with leptin deficiency might easily be classified as food addicts. People also dispute whether there's such a thing as pornography addiction (though I don't), and there's plenty of harm attributed to pornography without bringing addiction into it. But when it comes to video games, addiction and excessive time spent are generally viewed as the primary harm.

And as it turns out, for all of these things, but perhaps especially for video games, the addiction is the primary evidence of their status as supernormal stimuli. In our distant past there were figurative buttons which evolved to indicate a situation that was extremely advantageous from the standpoint of survival and reproduction. In nature these buttons were pressed infrequently, and most of the time they were associated with tangible rewards. Technology has allowed us to find these buttons, and then mash them continually for as long as we want.

These buttons can convince us we're doing something useful by giving us virtual rewards which feel real (also known as operant conditioning). They can convince us we're actually struggling by letting us overcome fake challenges. And they can convince us that we're engaged socially even though we're just yelling at strangers. And this is the problem with all of this: how do we know we're not sitting on a giant fake egg, while the real eggs rot and spoil in the sun? How do we recognize these supernormal stimuli as traps and avoid them, when there are powerful inbuilt urges convincing us that Twinkies are better than real food, pornography is better than real sex, and video games are better than real life?

You may disagree with how bad any one of these things is, or how big of a problem they represent, or whether they are in fact examples of supernormal stimuli. But I don't think you can argue with the existence of supernormal stimuli, nor with the motivation for people to use technology to continue turning up the dial on their power and effect. As I said in the very beginning, I think the concept of the supernormal stimulus has wide-ranging applications and consequences for our modern world, and it's definitely a subject I intend to revisit. Technology has gotten to the point where I believe there are all manner of supernormal creations, and if we fail to recognize the "super" part of that equation, and continue to think that all of this is normal, the consequences could be much larger and much worse than we imagine.

I'm working on figuring out how to make my donation appeal a supernormal stimulus, but until then pretend that it is, and imagine you experience the overwhelming desire to give me money.

Saturday, May 20, 2017

Job Automation, or Can You Recognize a Singularity When You're In It?


Over the last few months, it seems that regardless of the topic I'm writing on, everything has some connection, however tenuous, to job automation. In fact, just last week I adapted the apocryphal Trotsky quote to declare that, "You may not be interested in job automation, but job automation is interested in you." On reflection I may have misstated things, because actually everyone is interested in job automation, they just don't know it. Do you care about inequality? Then you're interested in job automation. Do you worry about the opiate epidemic? Then you're interested in job automation. Do you desire to prevent suicide by making people feel like they're needed? Then you're interested in job automation. Do you use money? Does that money come from a job? Then you're interested in job automation. Specifically, in whether your job will be automated, because if it is, you won't have it anymore.

As for myself, I'm not merely interested in job automation, I'm worried about it, and in this I am not alone. It doesn't take much looking to find articles describing the decimation of every job from truck driver to attorney, or even articles which claim that no job is safe. But not everyone shares these concerns, and whether they do depends a lot on how they view something called the Luddite Fallacy. You've probably heard of the Luddites, those English textile workers who smashed weaving machines between 1811 and 1816, and if you have, you can probably guess what the Luddite Fallacy is. But in short: Luddites believed that technology destroyed jobs (actually that's not quite what they believed, but it doesn't matter). Many people believe that this is a fallacy, that technology doesn't destroy jobs. It may get rid of old jobs, but it opens up new, and presumably better, jobs.

Farmers are the biggest example of this fallacy. In 1790, they made up 90% of the US labor force; currently it's only 2%. Where did the other 88% of the labor force end up? They're not all unemployed, that's for sure. Which means that the technology which put nearly all of the farmers out of work did not actually result in any long-term job loss. And the jobs which replaced farming are all probably better. This is the heart of things for people who subscribe to the Luddite Fallacy: the idea that the vast majority of jobs which currently exist were created when labor and capital were freed up as technology eliminated old jobs. And farmers aren't the only example of this.

More or less, this is the argument in favor of the fallacy: the idea that you don't have to worry about technology putting people out of work. People who think the Luddite Fallacy still applies aren't worried about job automation, because they have faith that new jobs will emerge. And just as in the past when farmers became clerks and clerks became accountants, as accounting is automated, accountants will become programmers, and when at last computers can program themselves, programmers will become musicians or artists or writers of obscure, vaguely LDS, apocalyptic blogs.

The Luddite Fallacy is a strong argument, backed up by lots of historical evidence. The only problem is, just because that's how it worked in the past doesn't mean that there's some law saying it has to continue to work that way. And I think it's becoming increasingly apparent that it won't.

Recently the Economist had an article on this very subject, and they brought up the historical example of horses being replaced by automobiles. As they themselves point out, the analogy can be taken too far (a point they mention right after they discuss the number of horses who left the workforce by heading to the glue factory). But the example nevertheless holds some valuable lessons.

The first lesson we can learn from the history of the horse's replacement is that horses were indispensable for thousands of years, until suddenly they weren't. By this I mean that the transition was very rapid (it took about 50 years) and its full magnitude was only obvious in retrospect. What does this mean for job automation? To start with, if it's going to happen, then 50 years is probably the longest it will take, since technology moves a lot faster these days. Additionally, it's very likely that the process has already begun, and we'll only be able to definitively identify the starting point in retrospect. Just looking at self-driving cars: I can remember the first DARPA Grand Challenge in 2004, when not a single car finished the course, and look at how far we've come in just 13 years.

The second lesson we can learn concerns the economics of the situation. Normally speaking, the Luddite Fallacy kicks in because technology frees up workers and money which can be put to other uses. This is exactly what happened with horses. The advent of tractors and automobiles freed up capital and it freed up a lot of horses. Anyone who wanted a horse had access to plenty of cheap horses. And yet that didn’t help. As the article describes it:

The market worked to ease the transition. As demand for traditional horse-work fell, so did horse prices, by about 80% between 1910 and 1950. This drop slowed the pace of mechanisation in agriculture, but only by a little. Even at lower costs, too few new niches appeared to absorb the workless ungulates. Lower prices eventually made it uneconomical for many owners to keep them. Horses, so to speak, left the labour force, in some cases through sale to meat or glue factories. As the numbers of working horses and mules in America fell from about 21m in 1918 to only 3m or so in 1960, the decline was mirrored in the overall horse population.

In other words, there will certainly be a time when robots will be able to do certain jobs but humans will still be cheaper and more plentiful, and as with horses, that will slow automation down, "but only by a little." And yes, as I already mentioned, the analogy can be taken too far; I am not suggesting that surplus humans will suffer a fate similar to surplus ungulates (gotta love that word). But with inequality a big problem which is getting bigger, we obviously can't afford even a 10% reduction in real wages, to say nothing of an 80% reduction. And that's while the transition is still in progress!

When most people think about this problem, they're mostly concerned with unemployment, or more specifically, with how people will pay the bills or even feed themselves if they have no job and no way to make money. Job automation has the potential to create massive unemployment, and some will argue that this process has already started, or that in any event the true unemployment level is much higher than the official figure because many people have stopped looking for work. Also, while the official figures are near lows not seen since the dotcom boom, they mask growing inequality, significant underemployment, an explosion in homelessness, and increased localized poverty.

Thus far, whatever the true rate of unemployment, and whatever weight we want to give to the other factors I mentioned, only a small fraction of our current problems comes from robots stealing people's jobs. A significant part comes from manufacturing jobs which have moved to other countries. (In the article they estimate that trade with China has cost the US 2 million jobs.) In theory, these jobs have been replaced by other, better jobs, in a process similar to the Luddite Fallacy, but it's becoming increasingly obvious, both because of growing inequality and underemployment, that when it comes to trade and technology the jobs aren't necessarily better. Even people who are very much in favor of both free trade and technology will admit that manufacturing jobs have largely been replaced with jobs in the service sector. For the unskilled worker, not only do these jobs not pay as much as manufacturing jobs, they also appear to be less fulfilling.

We may see this very same thing with job automation, only worse. So far the jobs I've mentioned specifically have been attorney, accountant, and truck driver. The first two are high-paying white-collar jobs, and the third is one of the most common jobs in the entire country. So we're not seeing a situation where job automation applies to just a few specialized niches, or where it starts with the lowest-paying jobs and moves up. In fact it would appear to be the exact opposite. You know what robots are so far terrible at? Folding towels. I assume they are also pretty bad at making beds and cleaning bathrooms, particularly if they have to do all three of those things. In other words, there might still be plenty of jobs in housekeeping for the foreseeable future, but obviously this is not the future people had in mind.

As I've said, I'm not the only person who's worried about this. A search on the internet uncovers all manner of panic about the coming apocalypse of job automation, but where I hope to be different is in pointing out that job automation is not something that may happen in the future, and which may be bad; it's something that's happening right now, and it's definitely bad. This is not to say that I'm the first person to say job automation is already happening, nor am I the first person to say that it's bad. Where I do hope to be different is in pointing out some ways in which it's bad that aren't generally considered, tying it into larger societal trends, and most of all, pointing out that job automation is a singularity which we don't recognize as such because we're in the middle of it. For those who may need a reminder, I'm using the term singularity as shorthand for a massive technologically driven change in society, one which creates a world completely different from the world which came before.

The vast majority of people don't look at job automation as a singularity. They view it as a threat to their employment, and worry that if they don't have a job they won't have the money to eat and pay the bills, and they'll end up part of the swelling population of homeless people I mentioned earlier. But if the only problem is a lack of money, what if we fixed that problem? What if everyone had enough money even if they weren't working? Many people see the irresistible tide of job automation on the horizon, and their solution is something called a guaranteed basic income: an amount of money everyone gets regardless of need and regardless of whether they're working. The theory is that if everyone were guaranteed enough money to live on, we could face our jobless future and our coming robot overlords without fear.

Currently this idea has a lot of problems. For one, even if you took all the money the federal government spends on everything and gave it to each individual, you'd still only end up with around $11,000 per person per year. Which is better than nothing, and probably (though just barely) enough to live on, particularly if you had a group of people pooling their money, like a family. But it's still pretty small, and you only get this amount if you stop all other spending, meaning no defense, no national parks, no FTC, no FDA, no federal research, etc. More commonly, people propose taking just the money that's currently being spent on entitlement programs and dividing that up among just the adults (not everyone). That still gets you to around $11,000 per adult, which is the same inadequate amount I just mentioned, but with an additional penalty for having children, which may or may not be a problem.
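If you want to check the arithmetic yourself, here's a minimal back-of-the-envelope sketch. The dollar and population figures are assumed round numbers (roughly 2017-era), not official statistics, so treat the outputs as ballpark values bracketing the $11,000 figure above:

```python
# Back-of-the-envelope check of the guaranteed basic income figures.
# All inputs are rough, assumed round values (circa 2017), not official data.

TOTAL_FEDERAL_SPENDING = 3.9e12   # assumed: ~$3.9 trillion/year, all federal outlays
ENTITLEMENT_SPENDING   = 2.5e12   # assumed: ~$2.5 trillion/year on entitlement programs
POPULATION             = 325e6    # assumed: ~325 million people
ADULT_POPULATION       = 250e6    # assumed: ~250 million adults

# Scenario 1: divide ALL federal spending among every person
per_person = TOTAL_FEDERAL_SPENDING / POPULATION

# Scenario 2: divide only entitlement spending among adults
per_adult = ENTITLEMENT_SPENDING / ADULT_POPULATION

print(f"All federal spending, per person: ${per_person:,.0f}/year")
print(f"Entitlements only, per adult:     ${per_adult:,.0f}/year")
```

Both scenarios land in the $10,000 to $12,000 range, which is why they come out to roughly "the same inadequate amount."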

As you can imagine there are some objections to this plan. If you think the government already spends too much money, then this program is unlikely to appeal to you, though it does have some surprising libertarian backers. But there are definitely people who are worried that this is just thinly veiled communism, and that it will lead to a nation of welfare recipients with no incentive to do anything; that while it might make the jobless future slightly less unfair, in the end it will just accelerate the decline.

On the other hand there are the futurists who imagine that a guaranteed basic income is the first step towards a post-scarcity future where everyone can have whatever they want. (Think Star Trek.) Not only is the income part important, but, as you might imagine, job automation plays a big role in visions of a post-scarcity future. The whole reason people worry about robots and AI stealing jobs is that they will eventually be cheaper than humans. And as technology improves, what starts out being a little bit cheaper eventually becomes enormously cheaper. This is where the idea, some would even say the inevitability, of the post-scarcity future comes from. These individuals at least recognize we may be heading for a singularity; they just think that it's in the future and it's going to be awesome, while I think it's here already and it's going to be depressing.

All of this is to say that there are lots of ways to imagine job automation going really well or really poorly in the future, but that's the key word: the "future". In all such cases people imagine an endpoint, either a world full of happy people with no responsibilities other than enjoying themselves, or a world full of extraneous people who've been made obsolete by job automation. Of course neither of these two futures is going to happen in an instant, even though they're both singularities of a sort. And that's the problem: singularities are difficult to detect when you're in them. I often talk about the internet being a soft singularity, and yet, as Louis C.K. points out in his famous bit about airplane wi-fi, we quickly forget how amazing the internet is. In a similar fashion, people can imagine that job automation will be a singularity, but they can't imagine that it already is a singularity, that we are in the middle of it, or that it might be part of a larger singularity.

But I can hear you complaining that while I have repeatedly declared that it's a singularity, I haven't given any reasons for that assertion, and that's a fair point. In short, it all ties back into a previous post of mine. As I said at the beginning, it has seemed recently that no matter what I'm writing about, it ties back into job automation. The post where this connection was the most subtle, and yet at the same time the most frightening, was the one I wrote about the book Tribe by Sebastian Junger.

Junger spent most of the book talking about how modern life has robbed individuals of a strong community and the opportunity to struggle for something important. He mostly focused on war because of his background as a war correspondent with time in Sarajevo, but as I was reading the book it was obvious that all the points he was making could be applied equally well to people without a job. And this is why it's a singularity, and this is also what most people are missing. The guaranteed basic income people, along with everyone else who wants to throw money at the problem, assume that if they give everyone enough to live on, it won't matter if people don't have jobs. The post-scarcity people take this a step further and assume that if people have all the things money can buy, then they won't care about anything else, but I am positive that both groups vastly underestimate human complexity. They also underestimate the magnitude of the change. As Junger demonstrated, there's a lot more wrong with the world than just job automation, but it fits into the same pattern.

Everyone looks around and assumes that what they see is normal. The modern world is not normal, not even close. If you were to take the average human experience over the whole of history, then the experience we're having is 20 standard deviations from normal. This is not to say that it's not better. I'm sure in most ways it is, but when you're living through things, it's difficult to realize that what we're experiencing is multiple singularities, all overlapping and all ongoing. The singularity of industrialization, of global trade, of fossil fuel extraction, of the internet, and finally, underlying them all, the singularity of what it means to be human. As it turns out, job automation is just a small part of this last singularity. What do humans do? For most of human history humans hunted and gathered, then for ten thousand more years, up until 1790 or so, most humans farmed. And then for a short period of time most humans worked in factories, but the key thing is that humans worked!!! And if that work goes away, if there is nothing left for the vast majority of humans to do, what does that look like? That's the singularity I'm talking about, that's the singularity we're in the middle of.

As I pointed out in my previous post, as warfare has changed, the rates of suicide and PTSD have skyrocketed. Obviously having a job is not a struggle on the same level as going to war, but it is similar. As work goes away, are we going to see similar depression, similar despair and similar increases in suicide? I think the evidence that we're already in the middle of this crisis is all around us. There are a lot of disaffected people, formerly useful members of society, who have stopped looking for work and decided that a life addicted to opioids is the best thing they can do with their time. This leads directly to the recent surge in Deaths of Despair I also talked about in that post. The vast majority of these deaths occur among people who no longer feel useful, in part for the reasons outlined by Junger, and in part because they either no longer have a job or no longer feel their job is important.

In closing, much of what I write is very long term, though based on some of the feedback I get, that's not always clear. To be clear, I do not think the world will end tomorrow, or even soon, or even necessarily that it will ever end. Rather, I hope to push people to be aware that the future is unpredictable and that it's best to be prepared for anything. And also, as we have seen with job automation and the corresponding increase in despair, in some areas the future is already happening.

I am reliably informed that the job of donating to this blog has not been automated, you still have to do it manually.