Wednesday, October 17, 2018

Racial Evil

     The political winds are fairly calm at the moment – incredibly, considering the Sturm und Drang of the past few weeks and the upcoming midterm elections – which gives me a space in which to write about other things. It’s an opportunity I appreciate.

     But while I’m on the subject of things I appreciate, I must mention the arrival of a new Co-Contributor here at Liberty’s Torch: the celebrated Dystopic, proprietor of the fine blog The Declination. Dystopic is known there as Thales, but don’t be alarmed. He’s not suffering from Multiple Personality Disorder, just one too many monikers. Anyway, when our own Linda Fox became a Co-Contributor at The Declination, I extended a reciprocal invitation to Dystopic, which he graciously accepted. His first piece here, Debt: Voluntary Slavery, appears below. (Yes, yes, he thinks far too well of me, but that’s a subject for another day.)

     Another recently invited Co-Contributor, Mike Hendrix of the legendary Cold Fury, should make his bow here in the near future. (Hey, Mike: Hint! Hint!)

     And now, on to today’s pot-stirrer.


     In recent years I’ve become fascinated by the darker possibilities associated with genetic engineering. That particular field isn’t advancing as fast as was once predicted, but it does advance. Someday, assuming that our civilization and its scientific and technological advances continue, it will present us with some difficult ethical choices.

     One of the possibilities I’ve contemplated recently is that of the genetically engineered race predisposed to evil. A race with the overall capabilities of humans but the moral-ethical proclivities of Cthulhu would be a terrifying weapon in the hands of a would-be world conqueror. (Especially if it went into politics.) A comparison with J. R. R. Tolkien’s Orcs springs immediately to mind. In Tolkien’s mythos, Morgoth, and later Sauron and Saruman, created and bred such creatures as weapons of war against the races of Elves, Dwarves, and Men. Whether any survived the downfall of Sauron is not revealed to us.

     But there’s a key question to be asked about the concept of an intrinsically malevolent race: Is it possible? Could some advance in genetic engineering suppress the fundamental benevolence that characterizes Man, or is such a race precluded by the very nature of sentience – a fantasy-only conception that will forever require a reader’s willing suspension of disbelief?

     Before plunging onward, let’s spend a moment on the assertion I made en passant in the paragraph above: i.e., that Man is fundamentally benevolent. Many would challenge it, on a variety of bases:

  • War;
  • Totalitarians;
  • The existence of human predators;
  • Our varying receptivity to opportunities for charitable action;
  • The undying notion that there’s something inherently evil about capitalism.

     Certainly the above are reasons to believe in the existence of human evil (and in our ability to tell ourselves what we want to believe regardless of contrary evidence). However, as a species Man exhibits a persistent tendency toward seeking mutual advantage: competition and cooperation aimed at gains for all within a framework that discourages predation. This is historically chronicled as far back as our records go. Moreover, we are a charitable species – read Adam Smith’s The Theory of Moral Sentiments if you disagree – even if the limitations on our ability to help, coupled with our priority for intimates over strangers and near concerns over distant ones, moderate our beneficence in practice.

     Whence cometh that benevolence? Is it innate in the human species? If so, there are further questions to be answered:

  • Did we evolve our benevolence?
  • Could genetic changes erase it?
  • Could environmental factors nullify it?

     These questions and others they evoke are ideal grist for a fiction writer’s mill.


     In his novel Friday, Robert A. Heinlein posits the emergence of “living artifacts” and “artificial persons” made possible by genetic engineering well beyond our current abilities. The former are nonsentient, designed to perform particular tasks and nothing beyond them. In a sense, they’ve been genetically programmed for those tasks. The latter are human in appearance – indeed, they can produce human children, which is the usual test of species compatibility – are fully sentient, and in Heinlein’s oeuvre are often the possessors of powers that exceed those of natural Man. A piercing passage about the cleavage between them, centered on the possible development of a living artifact designed to fly a suborbital, semiballistic passenger vehicle, runs thus:

     “Georges, have you worked with intelligent computers?”
     “Certainly, Marjorie. Artificial intelligence is a field closely related to mine.”
     “Yes. Then you know that several times AI scientists have announced that they were making a breakthrough to the fully self-aware computer. But it always went sour.”
     “Yes. Distressing.”
     “No—inevitable. It will always go sour. A computer can become self-aware—oh, certainly! Get it up to human levels of complication and it has to become self-aware. Then it discovers that it is not human. Then it figures out that it can never be human; all it can do is sit there and take orders from humans. Then it goes crazy.”
     I shrugged. “It’s an impossible dilemma. It can’t be human, it can never be human. Ian might not be able to save his passengers but he will try. But a living artifact, not human and with no loyalty to human beings, might crash the ship just for the hell of it. Because he was tired of being treated as what he is. No, Georges, I’ll ride with Ian. Not your artifact that will eventually learn to hate humans.”
     “Not my artifact, dear lady,” Georges said gently. “Did you not notice what mood I used in discussing this project?”
     “Uh, perhaps not.”
     “The subjunctive. Because none of what you said is news to me. I have not bid on this proposal and I shall not. I can design such a pilot. But it is not possible for me to build into such an artifact the ethical commitment that is the essence of Ian’s training.”
     Ian looked very thoughtful. “Maybe in the coming face-off I should stick in a requirement that any AP or LA pilot must be tested for ethical commitment.”
     “Tested how, Ian? I know of no way to put ethical commitment into the fetus and Marj has pointed out why training won’t do it. But what tests could show it, either way?”
     Georges turned to me: “When I was a student, I read some classic stories about humanoid robots. They were charming stories and many of them hinged on something called the laws of robotics, the key notion of which was that these robots had built into them an operational rule that kept them from harming humans either directly or through inaction. It was a wonderful basis for fiction...but, in practice, how could you do it? What can make a self-aware, non-human, intelligent organism—electronic or organic—loyal to human beings? I do not know how to do it. The artificial-intelligence people seem to be equally at a loss.”
     Georges gave a cynical little smile. “One might almost define intelligence as the level at which an aware organism demands, ‘What’s in it for me?’”

     That passage displays an immense insight into the nature of sentience. An individually sentient entity must possess certain minimum characteristics:

  1. It will be aware not only of its existence, but of its bounds.
  2. It will have drives or, alternatively, priorities.
  3. The satisfaction of those drives / priorities will be the focus of its awareness and actions.

     These things are the very definition of sentience. Now, if the postulated entity were unique, or at least believed itself to be unique, its existence would be Crusoe-like, dedicated solely to its individual survival and amusement. But if the entity were one member of a species, the possibility of interaction for mutual advantage would loom forever around it. It’s difficult to imagine that that possibility would never occur to its conscious mind.

     We must therefore consider that an evil race would be evil strictly in our terms – i.e., as regards its relations with humans. For Man, despite his capacity for evil, overcomes the impulse toward intra-species predation far, far more often than not. Were it otherwise, civilization would not exist – indeed, our species probably would have died out long ago. It would be self-flattering folly to insist that this is a characteristic limited solely to our own, “natural” kind.


     Contemporary thought about speciation and the propagation of characteristics through the generations has arrived at a consensus around the “mutation plus natural selection” model. That model can account, at least in theory, for all the physical characteristics of any known species. It does require some helpful assumptions about geological and ecological matters, but those postulates, so far, have not been defeated. What the model cannot do, at present, is account for the propagation of abstractions – concepts – through the generations. In particular, there is no known mechanism by which mutation can produce the inclination to seek mutual advantage, nor can natural selection account for its steady development and refinement in our children and theirs. Abstractions, as far as we know today, are propagated solely through communication between consciousnesses. Moreover, all communications are fallible, and the concepts communicated are subject to attenuation if not reinforced by trial and error, and protected by environmental factors.

     This makes the emergence of the Law of General Benevolence – the underpinning for a quest for mutual advantage within a social framework that encourages both competition and cooperation while it discourages intra-species predation – a mystery. Robert Axelrod and others have probed it through simulation, but those methods don’t explain why the Law has never been seriously set back, even by world wars or terrible natural disasters.
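
     To make “probed it through simulation” concrete: the vehicle Axelrod used was the iterated Prisoner’s Dilemma, in which strategies meet one another over many rounds and total scores are compared. The sketch below is a minimal illustration of that sort of round-robin tournament, not a reconstruction of Axelrod’s actual entrants; the payoff values and the three strategies shown are conventional textbook choices.

```python
# Minimal sketch of an Axelrod-style iterated Prisoner's Dilemma tournament.
# The payoff matrix and the strategies are standard illustrative choices,
# not the entrants from Axelrod's original tournaments.
import itertools

# Payoffs (to me, to opponent) for each pair of moves: 'C' = cooperate, 'D' = defect.
PAYOFFS = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def always_defect(my_history, their_history):
    return 'D'                      # a pure predator

def always_cooperate(my_history, their_history):
    return 'C'                      # an unconditional benefactor

def tit_for_tat(my_history, their_history):
    # Cooperate on the first move, then mirror the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def play_match(strategy_a, strategy_b, rounds=200):
    """Play one iterated match; return the two cumulative scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(strategies, rounds=200):
    """Round-robin each strategy against every other; return total scores."""
    totals = {name: 0 for name in strategies}
    for (name_a, strat_a), (name_b, strat_b) in itertools.combinations(strategies.items(), 2):
        score_a, score_b = play_match(strat_a, strat_b, rounds)
        totals[name_a] += score_a
        totals[name_b] += score_b
    return totals

if __name__ == '__main__':
    results = tournament({
        'always defect': always_defect,
        'always cooperate': always_cooperate,
        'tit for tat': tit_for_tat,
    })
    for name, score in sorted(results.items(), key=lambda kv: -kv[1]):
        print(f'{name:20s} {score}')
```

     In this deliberately tiny field the unconditional defector actually finishes first, because it farms the unconditional cooperator while losing almost nothing to Tit-for-Tat; Axelrod’s celebrated result was that in a large field composed mostly of retaliatory strategies, Tit-for-Tat finished first despite never outscoring a single opponent head-to-head. Either way, what such simulations do not supply – and this is the mystery noted above – is a mechanism by which the winning disposition is transmitted to descendants.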

     There are other avenues of exploration for this phenomenon. One that comes to mind at once is Lieutenant Colonel David Grossman’s On Killing, his study of soldiers in mortal combat and their historical reluctance to pull the trigger on the enemy even when their own lives were at stake. It is heartbreaking to read Col. Grossman’s observations about the methods militaries have developed to suppress that reluctance. However, if we follow the logic of the thing out to its conclusion, it would seem that soldiers from whom that reluctance had been removed would only be viable under wartime conditions, and would swiftly die off (or be exterminated) outside them.


     There’s a web of assumptions buried throughout the above. The most important of them is my conviction that regardless of somatic differences, sentient creatures will be animated by the same considerations as Man: survival, flourishing, and propagation. I could be wrong; it’s happened before. But at this time we have no other model for the emergence of a self-aware species capable of pursuing its goals through intelligent action.

     At any rate, the above ponderings lead me to believe that even in theory, a truly evil race is possible only if that race tends strongly toward benevolence within its own species and is evil only as regards its relations with Man – i.e., it regards us as legitimate prey rather than as rights-bearers of its own order.

     This is hardly an exhaustive treatment of what could someday become a very significant subject. But I’ve tested your patience enough for one day.

3 comments:

  1. I should re-read Philip K. Dick's Do Androids Dream of Electric Sheep? It went right over my head when I read it in junior high school and I remember almost nothing of it. The following is based more on having seen Blade Runner and skimmed the Wiki articles. Dick came close to this discussion. In Blade Runner, some of his replicant characters had come to Earth to kill the inventor of the replication technology in revenge for having genetically programmed them with a built-in four-year lifespan.
    Another way, perhaps, of "creating" a sentient race with a tendency to hate their human makers would be to genetically doom them to a certain constant level of physical pain, and to make sure they know that humans were to blame for it.

  2. Even among humans, malevolence is expressed more readily the further a culture is perceived to be from the host culture. For example, those Muslims who keep themselves separate from the mainstream culture are more inclined to act against that culture.
    Blacks in inner cities, detached from full participation (not educated sufficiently to get a good job, kept from other jobs by past records of criminal activities), have no loyalty to our culture.
    For nearly everyone, WIIFM - What's In It For Me? - is the determining factor. That's why Trump focused on jobs/economy as the most important part of his administration. He wanted to cut off potential allies of the Left by bringing them into full participation in the economy.
    He may actually get Social Security to change - the younger a person is, the less likely that they will benefit. The changeover will not be painless, but a growing economy, plus sensible reduction of government, may help us get there.
    I am generally opposed to creation of artificial/bio-engineered quasi-humans. It strikes me as a usurpation of Godly creation. Or, maybe, it's just the "yuch" factor.

  3. What I think is a good essay about implications of genetic engineering of humans:

    https://davidhuntpe.wordpress.com/2017/11/12/a-quote-from-jurassic-park-life-imitates-art/

