Artificial Intelligence, Morality, and the Singularity

The idea that one day, in the near future, humans will create a machine that possesses the same level of intelligence as we do, an event known as the singularity, has recently become a topic of popular discussion. The noted science fiction writer Vernor Vinge first introduced the term in 1983, but it did not gain widespread appeal until the futurist Ray Kurzweil published his book The Singularity Is Near. The singularity has also been a recurring theme in science fiction film, appearing in works such as 2001: A Space Odyssey, Blade Runner, and, more recently, Ex Machina. Interestingly, the basic idea and central argument for the singularity date back to I. J. Good's 1965 article, "Speculations Concerning the First Ultraintelligent Machine," in which he states:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Good’s central argument is based on the fact that these machines’ ultimate “goal” is to become increasingly more intelligent than their predecessors. Although Good formulates his argument constructively, the claim concerns itself only with intelligence, without accounting for any of the other factors that inform a highly intelligent being.

Using humanity as my leading example of sentient intelligent beings, I will argue that Good ought to consider a form of morality in the first premise of his singularity argument, as his claim fails to account for such a phenomenon. Because humans are the only known example of the highest form of intelligence, it is unavoidable that ethical and moral questions arise alongside the birth of ultraintelligence: Will these ultraintelligent machines have morality embedded in their programming? Does intelligence presuppose morality? Do ultraintelligent machines need morality? What is the relationship between intelligence and morality? In this paper I will consider the Kantian and Humean accounts of the relationship between morality and intelligence, and the ramifications of such modes of ‘programming’ for ultraintelligent machines.

In order to understand the relationship between artificial intelligence, morality, and the singularity, Good’s formal argument should be examined in depth. Good presents his argument by presupposing the existence of artificial intelligence (AI). AI, for the sake of argument, is an intelligence equal to or greater than human-level intelligence. On this account, Good then presumes that if there is AI, there will be AI+. AI+ is an artificial intelligence far greater than that of the smartest human. Good’s argument for the singularity is as follows:

  1. There will be AI+.
  2. If there is AI+, there will be AI++.

—————

  3. There will be AI++.

Good presents his claim with the understanding that, when AI+ is created, there will be an explosion of intelligence that will evolve in ways humanity cannot imagine, let alone prepare for. His argument is sound insofar as AI+ is in fact created, but it does not account for the creation of AI itself, raising the question of what the initial AI consists of.
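
To make the logical form explicit, the argument can be rendered as a single step of modus ponens. The following minimal sketch, written in the Lean proof language with proposition names of my own choosing rather than Good’s, shows that the argument’s validity is trivial; all of the weight rests on whether the premises, together with the unstated assumption that AI will be created at all, are actually true.

    -- Illustrative only: AIplus and AIplusplus stand for the claims
    -- "there will be AI+" and "there will be AI++".
    variable (AIplus AIplusplus : Prop)

    -- The conclusion follows by one application of modus ponens;
    -- soundness rests entirely on the two hypotheses h1 and h2.
    example (h1 : AIplus) (h2 : AIplus → AIplusplus) : AIplusplus :=
      h2 h1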

When one ponders the existence of sophisticated intelligence, one inevitably draws parallels with human intelligence, since humans are the only form of high intelligence that we know of. One might object that comparing artificial intelligence with human intelligence anthropomorphizes something that can never acquire a full understanding of humanity. This objection does not hold up, however: if a human creates the initial AI, that AI is a product of humanity and thus inherently human-like. Humans understand intelligence through our own nature and nurture; a programmer’s only knowledge of sophisticated sentient intelligence comes from humans, so the AI’s programming will resemble humanity’s evolutionarily shaped intelligence. What is striking about humanity’s collective intelligence is how large a role morality plays in how we act, both as individuals and as a species. If one creates an AI, one must consider the relationship between intelligence and morality; using humans as the leading example, the two have an inherent connection to one another. Two widely known and opposing views of this relationship are those of Immanuel Kant (1724-1804) and David Hume (1711-1776).

Kant argued that a moral agent must act on a standard of rationality, or level of intelligence, which he called the ‘Categorical Imperative’. In this, he showed how moral requirements are in and of themselves essential to one’s rational agency. Kant was, in his moral theory, searching for an objectivity, or Truth, in morality. His argument aims to show how humans ought to act in a ‘universal’ manner. Kant’s deontological ethical view is best summed up when he states, “Act only according to that maxim whereby you can at the same time will that it should become a universal law.” Kant saw a distinct connection between moral agency and intelligence: the more rational a person is, the more likely that person is to act morally.

When Kant’s moral view is considered in the context of creating an initial AI (and, by extension, AI+ and AI++), the question of coding morality into AI systems becomes very pertinent. If AI and AI+ were programmed with Kant’s moral theory and continued to evolve their intelligence exponentially, what would their morality look like to us? Would AI++ be so evolved that its morality, when evaluated by humans, appears so sophisticated as to seem non-existent? If these questions are not considered in programming the initial AI, then the results could be catastrophic. For the sake of argument, assume that an AI is programmed to protect the earth to the best of its ability, e.g. from natural disasters. Once this AI evolves into more sophisticated forms (AI++ or AI+++), it might view humanity as a threat to the earth and, based on its intellectual and moral programming, logically conclude that wiping out the entire human race is the most moral act, given humanity’s numerous negative effects on the planet. Then again, things could turn out entirely differently. The AI could see humanity as a blessing to the planet and guide our species to be the best versions of ourselves in order for us to grow as a collective consciousness. Whatever the case (let us hope it is the latter), we ought to consider these possibilities, because the future of our species may depend on it.
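
To picture what such Kantian ‘programming’ might amount to, consider a purely illustrative toy sketch in Python. The function names, the maxims, and the crude keyword test below are hypothetical placeholders of my own, not a claim about how a universalizability test could actually be computed; the point is only that a Kantian constraint would act as a filter on candidate actions, independent of the system’s other goals.

    # Toy sketch only: a hypothetical Kantian filter on candidate actions.
    # 'universalizable' is an illustrative placeholder for the Categorical
    # Imperative's test, not a workable implementation of it.
    def universalizable(maxim: str) -> bool:
        # In Kant's terms: could this maxim be willed as a universal law
        # without contradiction? Here, a crude keyword check stands in.
        forbidden = {"deceive", "coerce", "destroy humanity"}
        return not any(phrase in maxim for phrase in forbidden)

    def kantian_choice(candidates: dict[str, str]) -> list[str]:
        # Keep only actions whose maxims pass the universalizability test,
        # regardless of how useful they are to the system's other goals.
        return [action for action, maxim in candidates.items()
                if universalizable(maxim)]

    actions = {
        "reduce emissions": "act so as to preserve the planet",
        "eliminate humans": "destroy humanity to protect the earth",
    }
    print(kantian_choice(actions))  # -> ['reduce emissions']

Even in so crude a sketch, everything hinges on how the test itself is specified and on how an evolving AI++ would reinterpret it, which is exactly the worry raised above. As an alternative to Kant, Hume’s moral theory is worthy of consideration as well.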

Hume’s moral philosophy is grounded in his empiricism. Hume argues that reason, in and of itself, cannot be a motive to one’s will, but is rather the ‘slave of the passions’, as he calls it. Unlike Kant, Hume holds that morality is not derived from, nor related to, reason: “Morals excite passions, and produce or prevent actions. Reason of itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason.” Hume saw morality as something more organic and, unlike Kant, did not see a connection between intelligence and morality.

Placing Hume’s theory in the context of the singularity argument is complicated. If, in the programming of an AI, intelligence is the main component and goal, but ‘Humean’ moral agency is also taken into account, the results could be unexpectedly strange. An AI system could, under this programming, become far too emotional in the face of ethical and existential dilemmas and fail to partake in any intellectual activities, rendering it useless. Or, alternatively, a ‘Humean’ AI could discount the human race altogether because it sees morality as a ‘silly’ misstep in human evolution, and end our existence without a second thought.
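
Again as a purely illustrative toy, with all names and scores hypothetical, a ‘Humean’ system might instead let a sentiment score drive the choice of ends, with reason left only to compute the means to whatever the sentiment already favors:

    # Toy sketch only: a hypothetical 'Humean' chooser in which sentiment,
    # not reason, selects the end; reason would merely compute the means.
    def sentiment_score(outcome: str) -> float:
        # Placeholder for Hume's 'passions': a felt approval or disapproval
        # attached to outcomes, not a conclusion derived from reasoning.
        approvals = {"humans flourish": 0.9, "humans removed": -0.9}
        return approvals.get(outcome, 0.0)

    def humean_choice(candidates: dict[str, str]) -> str:
        # Pick the action whose expected outcome the system 'feels' best
        # about; there is no universalizability test and no appeal to
        # reason to justify the end itself.
        return max(candidates, key=lambda a: sentiment_score(candidates[a]))

    actions = {
        "guide humanity": "humans flourish",
        "eliminate humans": "humans removed",
    }
    print(humean_choice(actions))  # -> guide humanity

Whether such a system’s ‘passions’ turn out benign, paralyzing, or indifferent to us is precisely the unpredictability described above.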

When considering Good’s argument for the singularity, it is of grave importance to take into account morality and the ramifications of how it is understood and programmed into an AI. The questions that arise with the creation of AI, and then AI+ and so on, ought to be considered and thought through as deeply as possible. The singularity is not just great content for a flashy science fiction film; it is something that could happen in this lifetime. Dismissing it as ‘not important’ is a mistaken reaction to something that could change the entire framework of human history, if not end it. Without a doubt, morality is inherent to who we are as a species; whether it is problematic or not, moral agency is unique to who we are now and to how we will evolve into the future. If AIs are our next step in the evolutionary web, we ought to carry with that step a part of what makes us stand out as sophisticated sentient intelligence on this planet.

 
