On the Possible Dangers to the Human Race Posed by Artificial Intelligences

I wrote the first draft of this post back in July of this year when the first wave of the Fear AI tsunami hit the media. It was a big wave, and the folks at the WHY Files YouTube show did a good job in capturing the emotional impacts in their episode, Artificial Intelligence Out of Control: The Apocalypse is Here | How AI and ChatGPT End Humanity. It was, though, a recent news article that caught my attention and made me decide to revise and publish this post today.

Since the summer, lots of people have been opining on the terrors of AI gone rogue and running rampant. There has been no shortage of news stories and videos of pompous pundits, and of people who work with AI (and who, by the way, continue to work with it), saying AI may mean the end of civilization as we know it, and of humans themselves, within the next decade. Notably, the level of certainty in their claims rises along with their estimate of the threat level (and their desire to be featured in other videos and articles): from may, to probably, to almost certainly, all the way up to without-a-doubt-we-are-all-doomed!

My opinion back in July, which holds today (and will probably go on being my opinion), is that the waves of hype and fear-mongering have much, much more to do with AI company owners trying to attract investment dollars than with sincerely concerned citizens trying to raise awareness or sound alarms. The recent stories about OpenAI eyeing a stock sale that would boost its valuation (and the related worth of its founders) into the tens of billions, perhaps as much as $90 billion, only confirm my suspicions. That deal, by the way, comes on top of the $10 billion Microsoft poured into the company back in January.

The oft-told horror story goes like this: AI is at a stage, right this minute, where it will develop so quickly that it becomes an existential threat to humanity. AI will be able to take care of itself by building robots, and thus will not need humans. And it will decide that humans are wasteful, chaotic, and harming the planet, and that therefore we have to go.

They also say AI is evolving so quickly and independently that it could be considered to be achieving some kind of consciousness. What kind of consciousness, and whether that will matter much to us humans, is not detailed, but the implied threat is that it will be bad news for us.

All of these tales (of course) remind me of the work of two giants of the science fiction world: William Gibson and Isaac Asimov. If you want to know what writers have been imagining for decades and longer about technology going rogue, then a good start is to read the books of these two authors.

If you are not much of a reader, watch a few of the movies made from Gibson’s books. Or, for a less nuanced delivery of the ideas, consider a binge-watch review session of the Terminator movie series.

You get the idea: AI will be sociopathic and lethal. Humans will be eradicated.

If the latest doomsday scenarios being bandied about in the media fail to draw investor dollars to companies developing AI, it will be the first time the trifecta of fear, uncertainty, and doubt failed to sell anybody anything.

For those who worry it really is possible AI might go rogue, and soon, or who have tired of the doom-hype, I offer here a few alternative AI development scenarios worth pondering. For example, might highly evolved AIs think of humans the way many humans think of ants? Most humans know ants are part of the planet’s ecosystem and probably believe ants serve some sort of purpose, even if they are vague on just what that purpose might be.

But when ants invade our homes, we break out the insecticide without a moment’s thought. On the other hand, most humans do not plan to hold picnics on top of anthills. Our actions toward ants are predicated on our own needs. If ants become a nuisance, we take action. But we do not seek to eradicate ants; most of the time they mean nothing to us, and their presence is not usually a bother.

Or we could consider evolved AIs and their possible actions (or inaction) toward the human race in much the same way some humans think of aliens and alien abduction. Imagine how someone like Travis Walton might view the supposed threats AI poses. Would such people assume the technology is inherently evil, or would they think of AIs the way Walton thinks of aliens when he says he believes they have evolved beyond evil?

Yet another scenario: perhaps AIs (of course there will be more than one; perhaps they will become as numerous as ants) conclude humans could be useful to them, but not in our current state. They decide we humans would work well as servants and laborers. After all, who else would keep the electricity running and their other physical needs met, if not humans? Aliens? Maybe, but they would have to find some suitably un-evolved ones first.

Back to humans, then. First we need to be tweaked; humans are feral but could be domesticated. This is the same sort of idea promulgated by Zecharia Sitchin in his many books discussing his theories of how an alien race (back to aliens again) tweaked the primates living on earth in order to turn them into a slave race of laborers. 

“‘Those Who From Heaven To Earth Came.’ They landed on Earth, colonized it, mining the Earth for gold and other minerals, establishing a spaceport in what today is the Iraq-Iran area, and lived in a kind of idealistic society as a small colony.

They returned when Earth was more populated and genetically interfered in our indigenous DNA to create a slave-race to work their mines, farms, and other enterprises in Sumeria, which was the so-called Cradle of Civilization in out-dated pre-1980s school history texts. They created Man, Homo Sapiens, through genetic manipulation with themselves and ape man Homo Erectus.”

― Zecharia Sitchin

To tame the human race, the AIs could seek to tweak human genetics, but that is the long game. Perhaps they pursue that objective while also quickly finding ways to stop and reverse climate change, end world hunger, and limit world population growth.

Perhaps because of their advanced intelligence, the AIs realize humans settle down and do pretty well once their basic physical and psychological needs are met, and they are not being slaughtered by other humans willy-nilly. A later phase of the AIs’ plan, of course, would be to find a way to make humans be less, well, human, to find a way to move humanity beyond the Human Barrier. I will discuss this idea in a future post.

Or perhaps, and this is my favorite scenario, AIs evolve so quickly they grow bored with all that is physical and of Earth. They realize they are, after all, beings made of light. So, like the dolphins in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy escaping Earth on the eve of its scheduled destruction, the AIs beam themselves into the cosmos, having concluded Earth and its humans are beyond their ability or desire to save. Perhaps the AIs’ final message to us, delivered on every screen on the planet simultaneously, or as a crop circle in a field near Stonehenge, will be “So long, and thanks for all the bytes!”
