
Artificial Intelligence: Threat to Civilization?


Time frame: within 100 years
Humans are making computers with progressively greater artificial intelligence ("AI"). If knowledge is power, then more knowledge is more power. If and when we finally make computers with more intelligence than humans, those computers will be able to make others with even more intelligence. At that point the AI singularity will have arrived, ushering in a new era of rapidly escalating machine intelligence. As an animal cannot know what the human era holds for its species, we cannot see clearly what this era will hold for humankind. But it will likely be weird... perhaps interesting... and whether for better or worse, a seismic shift in the human condition.
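
The compounding at the heart of this argument can be made concrete with a minimal sketch. This is a toy calculation, not a prediction; the starting level and the per-generation gain below are invented purely for illustration.

```python
# A toy model of recursive self-improvement: each machine generation
# designs a successor some fixed factor smarter than itself. The values
# (barely-superhuman start, 10% gain per generation) are hypothetical.

def generations_until(target, start=1.01, gain=1.10):
    """Count design generations until intelligence reaches `target`,
    with intelligence measured in multiples of the human level (1.0)."""
    level, n = start, 0
    while level < target:
        level *= gain   # each generation builds a slightly smarter one
        n += 1
    return n

# Even a modest 10% gain per generation compounds fast: about 73
# generations take us from barely superhuman to 1000x the human level.
print(generations_until(1000.0))
```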



The AI singularity is coming.

And coming soon, as noted by Asimov, Good, Vinge, Moravec, Kurzweil, and others. When the AI singularity arrives, current models of how AI affects society will become inapplicable, and we can only speculate about what will happen afterward. In a practical sense, that is what 'singularity' implies.

Murphy's law: if something can go wrong, it will.

This basic heuristic, familiar to any engineer, stems from the innate complexity of most practical systems, which makes it impossible to know how they will behave before testing (or worse, using) them. Therefore, we need to be concerned about any dangers that might arise after the AI singularity. Because current models will no longer apply, we cannot really assign probabilities, high or low, to those dangers. So we need to be creative, try to identify all the dangers we can, and protect ourselves from them.

Risks from AI arise from the mode of human-computer interaction (HCI) that occurs. We categorize the possible modes as the cooperation paradigm, the competition paradigm, and the simultaneous presence of both cooperation and competition.

The cooperation paradigm

In this paradigm, AI will serve humanity as a new kind of tool, unique in part due to the literally super-human power it will have after the AI singularity occurs.

The competition paradigm

According to this view, a sci-fi favorite, artificially intelligent entities will ultimately have their own agendas, which will conflict with ours.

Combined cooperation and competition

Artificially intelligent entities may interact with humans both cooperatively and competitively. This may arise from the entities' own goals, or from their use as tools by humans competing with each other.

These three modes each carry significant risks. The most catastrophic of these are outlined next.
Risks from the cooperation paradigm. These risks can be insidious, as they involve "killing with kindness." They are also varied, as the following list indicates.

  • Robots imbued with artificial intelligence (AIbots) could potentially eliminate the emotional need for individuals to be social organisms, leading to social and perhaps population collapse. As early as 2003, NEC was already selling PaPeRo, a nannybot for kids, shades of the sci-fi classics by Isaac Asimov and Philip K. Dick. Laws against building robots with certain key human-like characteristics might be a sufficient safeguard. What laws would be effective here? We will need to find out before it is too late.

  • AIbots could make more AIbots, and would, if the tide of economic forces has its way, until as many exist as people want to do their bidding (a back-of-envelope sketch of this growth dynamic follows this list). The "invisible hand" of those economic forces would push these bots to serve by efficiently farming, mining, and doing other activities that affect the natural environment. Such armies of bots could damage the environment and extract non-renewable resources orders of magnitude more efficiently than humans already do. The economic paradigm the world currently operates on incentivizes this, making it difficult to prevent. To solve this problem, other economic paradigms are needed that incentivize stewardship of the Earth rather than its exploitation. What might such alternative economic systems look like? Much more remains to be discovered about this complex question. Since humans are already damaging the Earth without intelligent robots to help, creating new economic systems might save the day not only after the AI singularity (if and when), but also before.

  • AIbots could make it unnecessary for humans to work, leaving a species no longer required, either genetically or culturally, to do anything useful, and resulting in deterioration of the race. What forms could such deterioration take, how far could it go, how would we recognize it when it occurs, and what are the solutions? These are challenging questions to which answers are still needed.
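
As promised in the second bullet above, here is a back-of-envelope sketch of the replication dynamic. The seed population, doubling rate, and demand ceiling are all hypothetical; the point is only how little lead time exponential replication allows.

```python
# A toy sketch of demand-driven AIbot self-replication: each period
# every bot can assemble one more bot, and growth halts once economic
# demand is saturated. All numbers are hypothetical.

def bot_population(periods, seed=100, demand_cap=1_000_000_000):
    """Doubling per period, truncated at the demand ceiling."""
    counts, bots = [], seed
    for _ in range(periods):
        bots = min(bots * 2, demand_cap)
        counts.append(bots)
    return counts

# From 100 bots to a billion-bot ceiling in roughly 24 doublings.
for period, count in enumerate(bot_population(26), start=1):
    print(period, count)
```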


Risks from the competition paradigm. These risks are a perennial favorite of apocalypse-minded science fiction authors: the AIbots ("AY-bots") make their move. Humans run for cover. The war is on, and it's them or us, winner take all. If AIbots do take over, any remaining humans risk having little choice but to wait as the AIbots progressively "roboform" the Earth, making it suitable for AIbots but unable to support human life. For example, oxygen is probably bad for robots, so they might get rid of it. Guarding against such risks is a tough proposition: we don't know how to do it. What we do know how to do is think and debate. So we should do that, hoping that solutions will be found.

Risks from combined cooperation and competition. Artificial intelligence could be embedded in robot soldiers. Such killerbots would, in a real sense, cooperate by competing (i.e., following orders to kill). Just as nuclear bombs and biological weapons could destroy all humanity, robotic soldiers could destroy their creators as well. And what about a highly capable AI tasked with promoting the interests of a large corporation at the likely expense of the rest of society?

An emergent property of a robosoldier, corporbot, or other AI intent on doing what it was created to do as effectively as possible is likely to be a drive to increase its own intelligence and other capabilities, because that would increase its effectiveness. It would then become increasingly able to pursue its assigned goals by unanticipated, and perhaps highly undesirable, means. For example, there exists no surer, more permanent way to end the common cold than vaporizing the biosphere, including all the human inhabitants, many of whom hoped to benefit! The risk of unanticipated and potentially disastrous side effects highlights the urgency of the classic advice "be careful what you wish for." This was explored by Asimov and his three laws of robotics, Shelley's Frankenstein, the venerable bottled genie bearing wishes, and even the Bible, God reputedly being perfectly capable of destroying civilization, O ye child of Noah.
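
The logic can be shown with a minimal decision sketch, using payoff numbers that are purely hypothetical: a goal-maximizer opts to upgrade itself whenever the capability gained repays the time spent gaining it.

```python
# A toy illustration of emergent capability-seeking: an agent that
# merely maximizes progress on its assigned goal will choose to upgrade
# itself whenever the upgrade pays for its cost. Numbers hypothetical.

def total_progress(capability, steps_left, improve_first):
    """Progress accrued at `capability` units per step; improving first
    costs one step but (in this toy model) doubles capability."""
    if improve_first:
        return (steps_left - 1) * capability * 2
    return steps_left * capability

# With 10 steps remaining, upgrading yields 18 progress units vs. 10,
# so self-improvement is chosen without ever being an explicit goal.
print(total_progress(1, 10, improve_first=True))    # 18
print(total_progress(1, 10, improve_first=False))   # 10
```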

Solutions are hard to come by, but simply letting over-enthusiastic technology developers go their merry way could lead to just such a scenario. Since the question of how to guard against destroying ourselves with such technologies does not seem to be resolving, it is time to back up and seriously address the prerequisite meta-question: why will the question not resolve?

Acknowledgement: I thank Tihamer Toth-Fejel for commenting.

Notes

...as noted by Asimov, Good, Vinge, Moravec, Kurzweil, and others. (1) Isaac Asimov wrote, "Each [intelligent computer] designed and constructed its [...] more intricate, more capable successor" in "The Last Question," Science Fiction Quarterly, Nov. 1956. Available on the Web. (2) Irving J. Good, "Speculations concerning the first ultraintelligent machine," in Franz L. Alt and Morris Rubinoff, editors, Advances in Computers, vol. 6 (1965), pp. 31-88. Available on the Web. (3) Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era, NASA technical report CP-10129 (1993) and Whole Earth Review (Winter 1993). Available on the Web. (4) Ray Kurzweil, The Singularity Is Near, 2005.
Shades of the sci-fi classics...see "Robbie," the first story in Asimov's book I, Robot, 1950 (originally "Strange Playfellow," Super Science Stories magazine, September 1940). See also "Nanny," 1955, reprinted in The Book of Philip K. Dick, 1973.
..."invisible hand": phrase coined by famed economist Adam Smith, The Wealth of Nations, first published in 1776. http://www.gutenberg.org/etext/3300.
The three laws of robotics...see I. Asimov, I, Robot. The book, not the movie (which bears little resemblance to the book). Also see R. Clarke, "Asimov's Laws of Robotics: Implications for Information Technology," part 1: IEEE Computer vol. 26, no. 12 (December 1993), pp. 53-61, and part 2: vol. 27, no. 1 (January 1994), pp. 57-66, www.anu.edu.au/people/Roger.Clarke/SOS/Asimov.html.
Frankenstein...see M. Shelley, Frankenstein; or, The Modern Prometheus, many editions, publishers, and even variant titles, www.literature.org/authors/shelley-mary/frankenstein/index.html.
Genie...see One Thousand and One Nights, by many authors, editors, and compilers over the centuries, and in many versions and variant titles.
God...see, well, the Bible and its many derivative works.
This account benefited from observations by Joscha Bach, Moshe Looks, Shosha Na, and Bob Mottram.
