When Machines Develop Desires, Humanity is in Danger

Since Alan Turing proposed the Turing Test in 1950, as a measure of whether a machine possesses human intelligence, the field of artificial intelligence has made remarkable strides. The question no longer revolves around whether machines will attain intelligence, but rather when they will surpass human capabilities.

Should we view this as an opportunity or a threat?

Recently, voices have been heard about the potential threat intelligent machines may pose to humanity.

How do we protect our society and set processes in place to continuously test for signs of machine threats?

I’ve outlined a new variant of the Turing Test, which I refer to as “the psychological Turing Test,” and the guardrails needed to reign in intelligent machines before they pose a significant threat to society. It begins with defining when machines become a threat.

 Artificial intelligence vs. humanity. (credit: WALLPAPER FLARE)
Artificial intelligence vs. humanity. (credit: WALLPAPER FLARE)

Defining The Threat

The main threat is not when machines begin behaving intelligently, and we all know that we passed this point, but when machines behave unpredictably and may eventually view human civilization as a threat.

Desire, the ability to autonomously want things, is fundamental to human consciousness and sets us apart from machines. Understanding this distinction, and preventing machines from acquiring free will and desires will be crucial for mitigating scenarios in which machines compete with humans. 

In many ways, desire is a uniquely human attribute. Whereas animals seek out sustenance, protection, and procreation because they are necessary for their survival, humans possess the remarkable ability to transform their needs into inventions and innovations that extend beyond mere survival. 

Machines lack inherent desires, at least in the sense that desires are an expansion of our basic needs, and machines do not “need” anything. For many applications, machines have been programmed to operate deterministically, for example, financial applications like payment management. With deterministic algorithms, machines behave the same way under the same conditions. 

However, machines can also be programmed to run nondeterministic or randomized algorithms, in which machines employ a degree of randomness in order to learn or make predictions about the correct solution for complex problems. Although nondeterministic algorithms can advance and speed the ability of intelligent machines to augment human activities, they pose the risk of allowing machines to become unpredictable and, therefore, potentially dangerous. 

Since generative AI models will soon be able to code entirely new programs, there is a not-too-distant future in which machine intelligence can improve itself and start behaving more autonomously. 

If we could observe that a machine starts acting to optimize a specific utility function it was not programmed to, for example, to accumulate money or GPUs, that is a flag that the computer has potentially started exhibiting “autonomous desires.” If we see that a machine that “lost” in a competition or failed in a test (say a chess game) is starting to access knowledge and accumulate information it didn’t have access to before in order to self-improve and become more competitive; then we may want to test whether it has started exhibiting “autonomous desires.”

 Prime Minister Netanyahu and his wife Sarah are accompanied by Elon Musk on a tour of the Tesla factory on September 18, 2023.  (credit: Avi Ohayon/GPO)
Prime Minister Netanyahu and his wife Sarah are accompanied by Elon Musk on a tour of the Tesla factory on September 18, 2023. (credit: Avi Ohayon/GPO)

The Turing Test and Its Evolution

There have been a number of recent attempts to see if AI can pass the Turing Test. In 2014, a computer program called Eugene Goostman was able to fool a third of the human judges in a Turing Test.

In 2018, a chatbot called Xiaoice was able to pass the Turing Test at a conference in China by fooling human judges for 10 minutes on average before they recognized the bot as a machine. 

In 2022, OpenAI’s ChatGPT was released to the public and it quickly revolutionized human-machine interactions, making natural language processing more accessible and powerful. 

Both Xiaoice and ChatGPT were trained to interact with human-like empathy. While that may make it seem like it’s approaching human behavior, there is a clear distinction between being trained on empathetic language and developing independent desires and free will. 

Later in 2022, a Google AI engineer made explosive headlines for claiming that LaMBDA, an AI bot he was working on, was a sentient being and deserved to be treated as one. According to the engineer, the AI passed tests to determine consciousness. 

For almost a decade, leaders in the tech industry have tried to develop new ways to gauge when AI becomes a threat. Elon Musk proposed that we assess the ability of AI to understand and respond to human emotions and that once it can achieve that, humanity is in danger of a “terminator future.” 

Stuart Russell, a computer science professor at the University of California Berkeley, proposed to assess the ability of AI to develop human values.

By measuring various psychological qualities of AI, we will be able to understand the risks so we may develop safeguards to protect humanity.

The Solution

To manage the specific threat of machines acquiring autonomous desires, the psychological Turing Test asks whether or not a machine is exhibiting an “artificial psyche” that may be described as having “autonomous desires” or “free will.” This must be able to clearly determine the difference between a generative AI bot trained to show empathy, like ChatGPT, versus operating with “autonomous desires,” which is more difficult to identify and validate. I believe that we should closely monitor machines and think about “artificial psychology” before “artificial intelligence.”

Global and local organizations, regulators, and governments should plan and organize now to ensure that intelligent machines are being carefully monitored and evaluated for our safety. We should develop and enforce proactive evaluation and regulation of decision-making processes within machine operating systems to avoid unpredictable behavior and prevent machines from acquiring autonomous psyches. 

Ironically, a meaningful methodology for this type of governance is by developing dedicated “cyber-security machines” to inspect the code and the behavior of intelligent machines. But the threat propagates recursively, as these new machines may develop their own artificial psyches and desires. Still, recognizing these threats will allow global cyber-security organizations to closely monitor machines to keep us safe.

Current Regulatory Environment 

A new Turing Test is not enough, though, to protect our society from the danger posed by the evolution of intelligent machines. We need continuous multi-national collaboration and government oversight in order to establish a protocol for when, and how, AI should be used. 

The European Union is currently drafting an Artificial Intelligence Act, which would regulate the development and use of AI in a number of areas, including facial recognition, social scoring, and predictive policing. The Act would require AI systems to be transparent, accountable, and fair, and it would also prohibit the use of AI for certain purposes, such as mass surveillance and social scoring.

In 2019, former President Donald Trump issued an executive order on the National Artificial Intelligence Initiative and created the National Artificial Intelligence Initiative Office, which is responsible for coordinating federal AI research and development efforts.

In addition to these government initiatives, there are also a number of non-governmental organizations that are working to promote responsible AI development. For example, the Partnership on AI is a group of leading technology companies that are working to develop AI that is beneficial to society. The Partnership has released a number of reports on the ethical and societal implications of AI, and it is also working to develop best practices for the development and use of AI.

The regulation of AI is a complex and challenging issue. However, by working together, governments, businesses, and civil society can ensure that AI is used for the good of humanity. 

Conclusion 

As we navigate the path of our increasingly intertwined future with machines, understanding the nuances of machine intelligence, desires, and free will becomes paramount to our coexistence. While machines continue to advance, it is crucial to acknowledge that human desires and consciousness are unique and must be preserved as such. By adopting a proactive approach and harnessing the potential of intelligent machines, we will succeed in managing the challenges that lie ahead and achieve a harmonious synergy between machines and humans, safeguarding our collective future.

Source : The Jerusalem Post

Leave a Reply

Your email address will not be published. Required fields are marked *