AI’s Logic Bomb

Rodrigo Gonçalves
Talkdesk Engineering
10 min read · Aug 21, 2023

Can Man and Machine ever be apart?

Image credits: Mikhail Nilov

Future War Now?

Afternoon in LA. Traffic moves slowly on four of the eight lanes. The thermometer reads 75 °F (that's 24 °C). In a nearby park, children play. Their laughter fills the air as their mothers watch from a comfortable distance. A girl in a red t-shirt and blue trousers begins her ascent on a swing. As she rises, the surrounding sound ceases. A bright white light fills the sky, so intense that it blinds. Suddenly, everyone on the planet is dead.

The day is 29th August 1997.

Darkness. A human skull rests on the steering wheel behind the nonexistent windscreen of a crumpled, burned automobile. Many more cars rest nearby. A couple of hundred yards away, a broken flyover dives into the car cemetery that was once a traffic jam on a lazy summer afternoon at the end of the 20th century.

This is still LA. The year is now 2029 AD.

Nearby, the dented remains of the park where children once played sit as evidence of a once-bright time when people didn't know they were happy. The crippled swing can still be seen. But not the girl.

The ground is covered with skulls. Suddenly, one of them is smashed by a metallic leg. A skeletal soldier holding a laser gun scans the surroundings as flares, explosions and beams begin to fill the air.

Fortunately, none of what is written above is true. It is the opening sequence of James Cameron's 1991 blockbuster “Terminator 2: Judgment Day”. In this movie, the viewer is invited to reflect on the value of human life. Relationships center on social values like compassion and loyalty, and on moral choices like whether or not to kill. The protection of the future must be enabled by making sacrifices in the present. From the adolescent rage of co-star John Connor to the emotional incapacity of both Terminator cyborgs, by way of John's mother, Sarah, fear, sadness and anxiety are also major themes in the picture.

Unsurprisingly, emotions and moral choices are somewhat expected features in any film about machines that behave like humans and that can learn. Often the thesis of sci-fi movies that deal with cyborgs or intelligent machines (HAL 9000 from Kubrick's 1968 “2001: A Space Odyssey” comes to mind) is that only humans can feel emotions and make moral choices. Emotions need neurochemical receptors and a mind. The biology behind morality is not well understood. However, there seems to be consensus that it happens in brain structures, that it has strong mental and self-referential components, and that it requires awareness of others. Taking all this into consideration, the premise that machines can't feel, or make choices that distinguish good from bad, seems reasonable. After all, machines are not equipped with biochemical and mental hardware. Or can they be, someday?

Psychology defines projection as a defense mechanism in which disowned or uncomfortable traits of one person are unconsciously transferred to another and perceived as a threat by the first. This may be what is happening in many of the movies that exclude feelings and ethics from machines. We, the viewers, may feel threatened by their superiority. After all, they are machines, and machines can't fail (unlike vulnerable and fragile human beings). This provokes discomfort and anxiety, causing us to project our fear onto the machines by denying them the ability to feel emotions. Human feelings, namely fear, are exactly what we disown, and what in our minds makes us vulnerable and fragile.

Defensiveness aside, the fact is that as of 2023, humanity is as close as it has ever been to seeing the dawn of a war between Man and Machine. If there were a Doomsday Clock to measure the likelihood of machines turning against their masters, it would certainly show less than the 90 seconds its global-catastrophe twin reads today.

Fast-forwarding from 2029 to 2023

At the end of 2022, OpenAI's large language model-based chatbot was launched. ChatGPT, short for Chat Generative Pre-trained Transformer, allowed humans to interact with it using natural language in a conversational manner. A couple of months later, this AI tool had reached over 100 million users, becoming the fastest-growing consumer software application in history and elevating its creator to decacorn status.

More important than merely putting potential billions of dollars in the pockets of OpenAI's investors, ChatGPT represented another milestone: putting AI on the path to the mainstream market. Other large-scale technology enterprises followed OpenAI's lead and accelerated their own AI and Machine Learning (ML) programs, aiming to get their own products to market sooner and compete with ChatGPT. The race has started, but there are still many miles to go.

According to CNN, venture capital investment in generative AI companies in the first half of 2023 was five times that of the same period the previous year. As of August 2023, investment in this market was already higher than the combined value of 2021 and 2022. Everywhere, companies have added the shimmer of AI to their product descriptions and sales pitches. Start-up founders are also pitching for funding with the AI gospel. Unsurprisingly, and considering the data from the World Economic Forum, a significant number of companies expect to embrace AI technologies in the next five years. This will make AI and ML specialist the fastest-growing role in the job market.

While the numbers certainly look great, there are complicating factors in the future of AI. Legal, environmental and trust issues may be seen as threats to the advance of the new industry. The sudden AI-based innovation fever brought by the ChatGPT revolution is essentially a technological shock. As with many before it (think of the steam and internal combustion engines, the airplane or the Internet, to name a few of the more recent technological breakthroughs), society is not ready to cope with it and races in wide strides to catch up.

Perhaps one of the more interesting of AI's challenges is overcoming public distrust. This is not new, considering that machine intelligence is a technological shock. When mill engines and steam locomotives appeared during the First Industrial Revolution, people also became afraid of losing their jobs to the new mechanized tools. Unlike that time, when the threat had a physical form and could be seen and touched, AI is immaterial. Akin to physical forces, AI cannot be seen, but its effects can be felt. And it's not only the general public that seems apprehensive about the potential effects of artificial cognition. The uneasiness starts at home. Back in 2018, Elon Musk, a founding member of OpenAI, resigned, allegedly due to safety concerns in the development of AI technology, among other reasons. In March 2023, Musk, along with Apple co-founder Steve Wozniak and other technology leaders, petitioned for a pause in AI development, claiming that it was going too fast and that it posed a risk to mankind. A couple of months later, in May, OpenAI's own leaders asked for “governance of superintelligence”, stating: “given the possibility of existential risk, we can't just be reactive”.

This move by the AI elite brings Terminator 2's worst fears to the surface. After all, it seems that while machines can get smarter than humans, they cannot judge the benefits and dangers of their own power nearly as well. Fields like AI safety, ethics of AI, and machine ethics focus on preventing AI from causing harm to mankind. Given their recency, it may be too soon to put much faith in their ability to protect us from machine amorality. A new revolution is starting to storm the world.

Deus Ex Machina

Greek playwrights developed a narrative device to solve the unsolvable. Implausible happenings are part of life and of fiction, and to cope with them we need miracles. Deus ex machina is the magic wand that turns the plot around. Despite being criticized as inartistic and false, the device has survived into modern times. In Steven Spielberg's 1993 film “Jurassic Park”, there is a famous scene where Dr. Grant, Dr. Sattler and the kids are about to be attacked by velociraptors after a frantic chase. At the moment when all seems lost, the T-Rex appears out of the blue and devours the smaller dinosaurs, saving the day (and everyone!). That is deus ex machina in action: a higher force steps in inexplicably to protect or save the hero.

OpenAI's leaders believe that, within the next ten years, the appearance of a superintelligence, an agent “that greatly exceeds the cognitive performance of humans in virtually all domains of interest”, is highly likely. While how this will happen is not yet well understood, the same can be said about the consequences of such a feat. The prospect of a superintelligent AI developing consciousness is not a matter of consensus, but it is certainly a possibility. In that event, human traits like the ability to make moral choices, or even the need for a larger-than-self purpose, may flourish.

Artificial Intelligence is a bit like the deus ex machina plot device: a new character that arrives suddenly and unexpectedly and that seems to have the power to solve all our problems (even those deemed unsolvable). Of course, like the mighty T-Rex in Jurassic Park, we also run the risk of being killed by the newly arrived cast member.

In computer science, a logic bomb is a sequence of instructions deliberately added to a program to trigger malicious behavior once certain conditions are met. Such things do exist in the real world, as testified by a Siemens Corporation employee who included logic bombs in his code to make it defective after a while, so that he could be rehired to fix the problems.
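To make the concept concrete, here is a minimal, purely illustrative sketch of the pattern in Python. The function name and the trigger date are hypothetical, invented for this example, and not taken from the Siemens case:

```python
from datetime import date

# Hypothetical trigger condition buried among otherwise normal code.
# (The date is arbitrary; any hidden condition would do.)
TRIGGER_DATE = date(2029, 8, 29)

def process_order(order_id: int) -> str:
    """Behaves correctly until the hidden condition is met."""
    if date.today() >= TRIGGER_DATE:
        # The "bomb": after the trigger date the routine fails on purpose,
        # disguised as an ordinary defect.
        raise RuntimeError("internal error")
    return f"order {order_id} processed"

print(process_order(42))  # normal behavior before the trigger date
```

The point of the pattern is that the condition sleeps unnoticed in code that passes every test today, and detonates only when its circumstances arrive.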

AI is a product of mankind. Man built AI with the purpose of being served by it. It was Man who trained AI. All the learning models focus on concepts that relate to Man's existence. As such, all the reality that an AI superintelligence knows is the reality of mankind. In a way, AI is human-centric in the sense that it was developed by Man and for Man.

Humanity, creator and horizon of AI, may have inadvertently planted such a bomb in AI. Man is superintelligence's logic bomb.

Modern Covenant 2.0

In “Homo Deus: A Brief History of Tomorrow”, Yuval Noah Harari describes some possibilities for the future of mankind, namely how Man may evolve. While algorithms and big data are discussed in the book, AI is a background actor. This is not surprising, since the book dates from 2015, when AI was still far below the horizon. In the book, an unbreakable bond between science and religion is discussed. The author calls this the “modern covenant” and implies that while science describes how things can be made, it is religion that tells us whether they should be made. One needs the other.

This is similar to the relationship between Man and AI. As human history unfolds, and given that we are at a stage where AI is exploding, Man needs AI as the next step in his evolution. On the other hand, AI needs humans to feed and train it. Even, one might say, to provide meaning for itself, if it ever gets some sort of consciousness. This mutualism between Man and machine probably started as early as 2.5 million years ago, when the first tools appeared. Now, however, it seems about to take another giant leap forward. The unbreakable bond between Man and Machine may very well be version 2.0 of the modern covenant.

Thinking that the machine will need Man after surpassing him in intelligence may seem, at the very least, presumptuous. Once smarter and intellectually more capable than their creator, machines may decide to “go further”. We can only describe this in human terms, as we do not possess (and certainly I don't) the cognitive power of a superintelligence. Entities with such superior capabilities may devise horizons that will forever be closed to humans, or that the human moral system can neither evaluate nor value. Machines may decide to conquer other worlds, to explore further, and to expand into deep space. Wouldn't this exploratory instinct, like that of post-modern conquistadores, be something more akin to Man than to the machine? Would this need to go beyond and to exceed oneself be the development of an ego? Does a superintelligence need an ego? Or is the ego something much more human? Even in the case where the superintelligent AI decides to transcend its origins, reminiscences of mankind will still be stored somewhere in its data lakes. Man will forever be tied to AI.

At least for the next step in the man-machine symbiosis, both need to go hand in hand. This seems inevitable at the moment. What lies beyond, no one knows. At best, Man and machine develop a healthy relationship, a lasting marriage in which both are happy forever and humans keep pole position in the hierarchy of natural life forms. Maybe, if that's the case, the merit goes to the machine as, with its superintelligence, it learns not just to make moral choices, but also to value human life as a very unique and precious thing. Perhaps that is the turning point where mankind graduates in self-worth and self-respect. In the worst-case scenario, the events of Terminator 2 may stop being fiction. The machines may rise, and Man may become obsolete, or be obliterated.

Good night, James Cameron. Wherever you are.

Disclaimer: No AI was used to write this essay. The author is a real human, sentient by birth, and subject to the suffering of having to deal with moral choices and ethical dilemmas. As such, he assumes all responsibility for the ideas and opinions expressed here, as well as for any misinformation. Thank you for reading. Even if “you” happen to be an AI.
