Time frame: within 100 years
Humans are making computers with progressively greater artificial intelligence ("AI"). If knowledge is power, then more knowledge is more power. If and when we finally make computers with more intelligence than humans, those computers will be able to make others with even more intelligence. At that point the AI singularity will have arrived, ushering in a new era of rapidly escalating machine intelligence. As an animal cannot know what the human era holds for its species, we cannot see clearly what this era will hold for humankind. But it will likely be weird... perhaps interesting... and whether for better or worse, a seismic shift in the human condition.
Risks from AI arise from the mode of human-computer interaction (HCI) that occurs. We categorize the possible modes as the cooperation paradigm, the competition paradigm, and the simultaneous presence of both cooperation and competition.
These three modes each carry significant risks. The most catastrophic of these are outlined next.
Risks from the cooperation paradigm. These risks can be insidious, as they involve "killing with kindness." They are also varied, as the following list indicates.
Risks from the competition paradigm. These risks are a perennial favorite of apocalypse-minded science fiction authors: the AIbots ("AY-bots") make their move. Humans run for cover. The war is on, and it's them or us, winner take all. If AIbots do take over, any remaining humans risk having little choice but to wait as AIbots progressively "roboform" the earth, making it suitable for AIbots but unable to support human life. For example, oxygen is probably bad for robots (it corrodes their parts), so they might get rid of it. Guarding against such risks is a tough proposition: we don't know how to do it. What we do know how to do is think and debate. So we should do that, hoping that solutions will be found.
Risks from combined cooperation and competition. Artificial intelligence could be embedded in robot soldiers. Such killerbots would in a real sense cooperate by competing (i.e., following orders to kill). Just as nuclear bombs and biological weapons could destroy all humanity, robotic soldiers could destroy their creators as well. And what about a highly capable AI tasked with promoting the interests of a large corporation at the likely expense of the rest of society?
A robosoldier, corporbot, or other AI intent on doing what it was created to do as effectively as possible seems likely to develop an emergent drive to increase its intelligence and other capabilities, because doing so would increase its effectiveness. It would then become increasingly able to pursue its assigned goals by unanticipated - and perhaps highly undesirable - means. For example, there exists no surer, more permanent way to end the common cold than vaporizing the biosphere, including all its human inhabitants, many of whom had hoped to benefit! The risk of unanticipated and potentially disastrous side effects highlights the urgency of the classic advice "be careful what you wish for." That advice was explored by Asimov through his three laws of robotics, by Shelley in Frankenstein, by the venerable bottled genie bearing wishes, and even by the Bible - God reputedly being perfectly capable of destroying civilization, O ye child of Noah.
Solutions are hard to come by, but simply letting over-enthusiastic technology developers go their merry way could lead to just such a scenario. Since the question of how to guard against destroying ourselves with such technologies does not seem to be getting resolved, it is time to back up and seriously address the prerequisite meta-question: why will the question not resolve?
Acknowledgement: I thank Tihamer Toth-Fejel for commenting.
Notes
...as noted by Asimov, Good, Vinge, Moravec, Kurzweil, and others. (1) Isaac Asimov wrote, "Each [intelligent computer] designed and constructed its [...] more intricate, more capable successor" in "The Last Question," Science Fiction Quarterly, Nov. 1956. Available on the Web. (2) Irving J. Good, "Speculations concerning the first ultraintelligent machine," in Franz L. Alt and Morris Rubinoff, editors, Advances in Computers, vol. 6 (1965), pp. 31-88. Available on the Web. (3) Vernor Vinge, "The coming technological singularity: how to survive in the post-human era," NASA technical report CP-10129 (1993) and Whole Earth Review (Winter 1993). Available on the Web. (4) Ray Kurzweil, The Singularity Is Near, 2005.
Shades of the sci-fi classics...see "Robbie," the first story in Asimov's book I, Robot, 1950 (originally "Strange Playfellow," Super Science Stories magazine, September 1940). See also "Nanny," 1955, reprinted in The Book of Philip K. Dick, 1973.
..."invisible hand": phrase coined by famed economist Adam Smith, The Wealth of Nations, first published in 1776. Http://www.gutenberg.org/etext/3300.
The three laws of robotics...see I. Asimov, I, Robot. The book, not the movie (which bears little resemblance to the book). Also see R. Clarke, "Asimov's Laws of Robotics: Implications for Information Technology," part 1: IEEE Computer vol. 26, no. 12 (December 1993) pp. 53-61, and part 2: vol. 27, no. 1 (January 1994), pp. 57-66, www.anu.edu.au/people/Roger.Clarke/SOS/Asimov.html.
Frankenstein...see M. Shelley, Frankenstein; or, the Modern Prometheus, many editions, publishers, and even variant titles, www.literature.org/authors/shelley-mary/frankenstein/index.html.
Genie...see One Thousand and One Nights, by many authors, editors, and compilers over the centuries, and in many versions and variant titles.
God...see, well, the Bible and its many derivative works.
This account benefited from observations by Joscha Bach, Moshe Looks, Shosha Na, and Bob Mottram.