Culture & Ethics

A Transhumanist Asks, "Why Not Be Superheroes?"


An article at TechCrunch poses the question, “Who’s Afraid of Superhumanity?” Writer Michael Solana argues with “detractors of germline engineering,” who he says

are betrayed by their true opinion: We can probably get the technology safely working, but what about the cultural ramifications?

What’s to stop a very small group of people, now enhanced genetically well beyond the average human in every way imaginable, from winning at the zero sum game of life? They’d just take all the resources, right? The superpeople would be supercapitalists – Donald Trump with a mind twice as formidable as Einstein’s, and we would all be made his slaves. The Trumpettes, he would maybe call us. The Trumpkins. The Trumped.

This is of course insane, and the problem is mostly fiction. From Brave New World to Gattaca, we’ve not heard a story that deals in the genetic enhancement of humans that doesn’t end in gross inequality or disaster. But the inequality piece is the most pernicious. So we’ve arrived at the politics of class, the primary motivation behind most criticism of technology, genetic or otherwise, as one could read plainly from the very start of the now-defunct Valleywag. But Gattaca isn’t real. It’s just a horror story that we’ve told ourselves.

What About a Redraft?

Let’s take the most incorrigible rich people alive. Let’s take the worst Bravo! reality television stars in history (so just, like, any random five I guess), and give them superchildren with superintelligence. Twenty young people are affected at the start, and 20 years later?

With the additional cognitive ability, they’ve cured every known disease and solved the aging problem. They’ve cracked sustainable nuclear fusion, and solved the energy problem. They’ve cracked gravity, and freed us for the heavens.

It doesn’t matter how many superpeople there are. A world with any superpeople in it is a better world.

It’s worth considering. With regard to human enhancement, where should the line be drawn? Well, it’s not entirely clear. We can stipulate for purposes of discussion that envisioned enhancements above our current norm are at least theoretically possible — and, eventually, coming. I mean here taking a normal, healthy human and enhancing qualities like beauty or intelligence. There is enough manipulative science out there, enough fascination, and enough money to push that ball forward eventually.

Now, the case of fixing some defect in the genome — say a propensity for heart disease — is less problematic. But the real driving force behind the Transhumanist idea is actually making, well, a superhuman, just as Solana says. A superhero. This case, I think, shouldn’t be lumped in with discussions about doctors and operating tables, as it muddies the waters by grouping medical intervention for health together with medical intervention for enhancement.

This isn’t the best comparison, but when I was in college (ages ago) I played football, and even at our level, now NCAA Division III, there was talk of steroids in the locker room. Most people feel that taking anabolic steroids is cheating — yet in some sports the majority of athletes are enhancing themselves physically this way. What’s the difference between anabolics and taking, say, protein supplements? One is anathema, the other is smart training. The issue always seemed grey to me, but the fact that steroids have been in sports since at least the 1950s and still meet with general ethical condemnation speaks to the very human feelings people have about better-than-average enhancement. It’s not a Luddite concern. It’s a moral one.

Returning to Transhumanist claims, there’s a book, The Techno-Human Condition, that does an admirable service in explaining the general landscape here. The argument is that Transhumanists tend to view progress in individual terms: if each individual could be made more intelligent, then society in general would be better off. The authors, Braden Allenby and Daniel Sarewitz, call this claim into question as essentially myopic and naïve. For one thing, intelligence itself is notoriously difficult to define, and a simplistic view of human capabilities in terms of “IQ” or some such measure seems destined to treat reality like a comic book.

Allenby and Sarewitz construct a framework of types: Type 1, 2, and 3 technologies. Type 1 technologies are the “things” we see: an airplane, say, which is a marvelous piece of technology. Type 2 technologies are the entire system in which the Type 1 technology is embedded (i.e., “aviation”), which includes layers of law, logistics, and human needs. Type 3 technologies are major movements that have long-term significance for society. Type 3 is slippery but involves considerations of energy use, communication among people, and broad-brush issues that arise when transportation changes with the introduction of a Type 1 technology.

Their argument is that Transhumanists and their agenda are almost exclusively about Type 1. Transhumanists see human progress in terms of individual enhancement, and thus fail miserably to grasp the real conditions for social change and progress. Everyone wants to be a superhero, but there’s a reason they’re the stuff of comics and movies for kids: reality is much more complex.

The real danger of Transhumanism isn’t that we can’t figure out a way to make Jane prettier or Johnny “smarter” on an IQ test. (Especially in the latter case, our ideas of human “smarts” are simplistic, as noted. In fact the intelligence question — really the raison d’être of Transhumanism, along with living forever — is the least likely to bear fruit with technology.) The real issue is that this atomistic focus misses something more profound and necessary.

One needn’t be a Luddite to see this bigger point. Transhumanists miss the forest for the trees. Real progress is likely jeopardized if we spend our spare time trying to enhance a person, rather than improve the conditions of human society generally.

Image: Samson fights with a lion, by Lucas Cranach the Elder [Public domain], via Wikimedia Commons.

Erik J. Larson

Fellow, Technology and Democracy Project
Erik J. Larson is a Fellow of the Technology & Democracy Project at Discovery Institute and author of The Myth of Artificial Intelligence (Harvard University Press, 2021). The book is a finalist for the Media Ecology Association Awards and has been nominated for the Robert K. Merton Book Award. He works on issues in computational technology and intelligence (AI). He is presently writing a book critiquing the overselling of AI. He earned his Ph.D. in Philosophy from The University of Texas at Austin in 2009. His dissertation was a hybrid that combined work in analytic philosophy, computer science, and linguistics and included faculty from all three departments. Larson writes for the Substack Colligo.


Tags

Continuing Series, Health & Wellness, Mind and Technology, Science, technology, Views