What is the artificial intelligence singularity? At its core it is simply the claim that when (if) computers get smart enough to figure out how to make themselves smarter, they will do so. At that point computers will improve by themselves at a high rate, and without human assistance or even permission. Humans will then lose control of the process of making better computing devices. With the world under the sway of entities much smarter than ourselves, the world will be not only very different, it will be unimaginably different, since we can no more understand, outwit, or control an entity much smarter than ourselves than a cow can a person. At least that's the theory. The counterpoint is the claim that for humans to build an intelligent robot is as absurd as for monkeys to reach the moon by climbing a tree pointing toward the sky.
Singularities. A singularity is a situation in which a mathematical model stops working. For example, for what x is the equation x/2=5 true? x=10, because then x/2=5; no singularity there. But what about x/0=5? Then there is no solution because ordinary arithmetic does not define what happens if you divide by zero. Thus the result is said to be "undefined." When whatever you are dividing by, be it money on Connecticut income tax form CT-1040, line 53, the volume of the mass at the center of a black hole, or whatever, happens to be zero, you've encountered a singularity, and there is no answer. The Connecticut tax authorities might take a dim view of the matter, but astrophysicists are concerned: it is thought that the gravity inside a black hole can become stronger than the ability of matter to resist compression, leading it to be squeezed into a point of zero volume. Since calculating density requires dividing by volume (density=mass/volume), the density of matter at the center of a black hole would be undefined, and thus a singularity.
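As a minimal illustration (a Python sketch invented for this point, not part of any physical model), programming languages likewise leave division by zero undefined, surfacing the singularity as an error rather than a value:

```python
# Density = mass / volume; at zero volume the model hits a singularity,
# which Python reports by raising an error rather than returning a number.
def density(mass, volume):
    """Return mass / volume; undefined (a singularity) when volume is 0."""
    if volume == 0:
        raise ZeroDivisionError("density is undefined at zero volume")
    return mass / volume

print(density(10.0, 2.0))   # 5.0 -- no singularity here
try:
    density(10.0, 0.0)      # zero volume: the model breaks down
except ZeroDivisionError as err:
    print("singularity:", err)
```

The point is not that the computer crashes, but that the description (the formula) simply has no answer at that input.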
By analogy, the AI singularity occurs when computers get smart enough to build computers even more intelligent, which in turn build even smarter ones. With no end in sight, that account breaks down by giving no answer to the question of how far the process will go, seemingly predicting an unending spiral further and further toward infinite intelligence.
Of course, infinite intelligence can't happen, any more than a misguided Connecticut resident could have caused the entire state financial infrastructure to go "poof!" by trying to fill out line 53 without having any 2008 Connecticut adjusted gross income. Similarly, there is something strange at the center of a black hole, but it is real and definable even if we don't yet know what it is. (And maybe the density problem is merely that density is a derived, not a basic physical quantity, and should not be applied to black holes.) Singularities are properties of defective descriptions of real phenomena, not of real phenomena themselves.
The AI singularity. For the AI singularity, the real phenomenon involves computers getting steadily more powerful. They will come to outpace human intelligence in more and more ways. Computers have long exceeded our intelligence in speed and reliability of mathematical calculation. They can play chess more intelligently (as measured by being better). Each generation of modern computer can only be designed with computers, and to a greater and greater extent with each new generation. This process will continue but there will never be a moment when computers suddenly become smarter than humans and take over, because (1) intelligence is so complex and undefinable a concept that no single satisfactory measure of it exists, or can exist, and (2) intelligence as a technological and social force is so dependent for its power on interaction with others that, without an embedding social fabric, it would be mostly powerless. Thus we won't wake up one day to find our previously loved machines suddenly informing us, "all your base are belong to us." But the trends do suggest that they are gaining greater and greater intelligence and influence on our lives, with who knows what revolutionary results.
So what is intelligence and how can we tell how much of it computers have? The tricky question of properly defining and measuring intelligence does not seem to be solvable even for people. Nonetheless, everyone knows intelligence exists and that some have more of it. The classical approach to defining when computers have intelligence is the Turing Test, named after British code breaker and war hero Alan Turing, who later committed suicide by eating a poisoned apple (like Snow White) after being convicted of homosexuality, and then "treated" with hormone injections in accordance with the legal process of the time.
The essence of the Turing Test is that, in a keyboard conversation, if one can't tell whether one is texting with a chatbot or a person, and it is a chatbot, the chatbot should be considered intelligent. This is a clever idea, but not perfect. One problem is that it assumes that writing intelligent-seeming text messages actually requires intelligence. Another is that it assumes a person can tell the difference between text messages produced by intelligent and non-intelligent entities. The third major problem is that it ignores the possibility that a computer could be intelligent yet still unable to pass the test, much as a person not fluent in your language, though intelligent, would be unable to pretend fluency.
Turing Test considered harmful. The Turing Test is useful if imperfect, and has inspired a regular contest. Since 1991 the "Loebner Prize" has been awarded yearly to the owner of the chatbot contestant best able to fool a panel of human judges (the first chatbot was Weizenbaum's ELIZA, described in 1967). As an amusing side note, AI pioneer Marvin Minsky is on record as offering cash to anyone who can get Loebner to stop sponsoring the "stupid" prize. For his part, Loebner (a single gentleman and advocate of legalizing prostitution) argues this actually makes Minsky a co-sponsor of the prize, since he would have to give his cash offering to the owner of the first chatbot to fully pass the test, thus winning the Grand Loebner Prize and ending the annual competitions.
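To make concrete how little machinery a superficially convincing chatbot needs, here is a toy ELIZA-style responder (a Python sketch; the patterns and canned replies are invented for illustration and are not Weizenbaum's actual rules):

```python
import re

# A few pattern-to-template rules, tried in order. Reflecting the user's
# own words back produces superficially human replies without any
# understanding, which is why passing a typed conversation test need not
# require real intelligence.
RULES = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "How long have you felt {0}?"),
    (r"\bmy (.*)", "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # noncommittal fallback when nothing matches

print(respond("I am worried about computers"))
# -> Why do you say you are worried about computers?
```

A handful of such rules can sustain a surprisingly long conversation, yet the program clearly understands nothing, which is one reason chatbot builders themselves do not consider their creations intelligent.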
The Turing test is clearly suspect on logical grounds alone (as explained above), and most anyone working on chatbots will confirm that in practice, they don't consider their impressive creations to be truly intelligent. The truth is that intelligence is meaningful mostly socially: someone is intelligent mostly because society is in general agreement that their behavior is intelligent, and behavior is intelligent mostly because it operates on socially constructed problems. Einstein may have worked mostly alone, but he was hardly in isolation. The problems he worked on were determined by the social community of physicists together with the structure of the universe. His solutions are intelligent because he was a physicist and other physicists agree that they are. That is why he thought about the cosmos the way he did, instead of, for example, as Theosophy founder Madame Blavatsky did.
Chatbot performance appears to be improving year after year, so progress is being made. Indeed, the winner's performance in the Loebner Prize competitions over time would appear to be one way to measure progress in computer intelligence. Although not a perfect metric, it is an interesting one. Other metrics, also imperfect but very different from chatbot performance and from each other, also exist.
Computers can do arithmetic, long considered an example of intelligence in humans, but much faster. Computers are getting even faster and more accurate over time as bit widths and FLOPS increase, driven by hardware improvements in clock speed, concurrency, etc.
Progressively increasing computer chess performance resulted in a specially built computer that beat human champion Garry Kasparov as early as 1997. Game playing in general is a fruitful source of potential ways to measure improvements in computer abilities that humans need intelligence for, because games tend to provide a clear context that supports quantifying performance. For example, robots have competed in soccer at the RoboCup games, held yearly since 1997.
A trend toward improvement in a composite of different tasks indicative of computer intelligence is more convincing than improvement in any one metric, in part because intelligence itself is such a complex, composite attribute. A useful approach might be to keep a running count of human games that machines are able to play better than humans. Chess is already there, but soccer is not (hopeful RoboCup organizers have set the goal: "By the year 2050, develop a team of fully autonomous humanoid robots that can win against the human world soccer champion team.").
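The running-count idea could be sketched in a few lines; the games and dates below are illustrative assumptions, not an authoritative scoreboard:

```python
# Year in which machines first outplayed the best humans at each game
# (illustrative entries only).
MACHINE_SUPERIOR = {
    "chess": 1997,     # e.g. Deep Blue vs. Kasparov
    "checkers": 1994,  # e.g. Chinook's match play
}
# Games assumed here to remain human-dominated at this writing.
HUMAN_SUPERIOR = {"soccer", "go"}

def machine_game_count(year):
    """Running count of tracked games machines had surpassed by `year`."""
    return sum(1 for y in MACHINE_SUPERIOR.values() if y <= year)

print(machine_game_count(2000))  # 2, with the illustrative data above
```

Plotted over time, such a count would rise in steps as each game falls, giving one crude but concrete curve of progress.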
What we can do. Relax. The AI singularity will not suddenly rear up, changing your life dramatically for either the worse or, according to radical singularitarianism, the better. Singularitarians are unjustifiably extreme in their optimism, but every age has its messianic movements and its apocalypticists. If there is to be something like an AI singularity, it is happening already and at a fairly steady rate, thereby contradicting the typical definition of "singularity" in favor of steady, if brisk, increase. Computers are already far in advance of human arithmetic intelligence, and society has leveraged that into many benefits. This will continue to happen with other computer capabilities as the number that exceed human intelligence grows progressively.
Movies have long relied on the concept of a human "mad scientist" who creates a robot of great capabilities. That will never happen. It takes a village to raise a child; it takes a sizeable community of skilled humans to create even a pencil: referring to everything from chopping trees for the body to making rubber and metal for the eraser, L. E. Read notes in "I, Pencil," "...not a single person on the face of this earth knows how to make me." Even the simplest computer is far more complex than a pencil. For a robot to create a robot of greater capability than itself would require either large numbers of humans and other computers to help, just as it does now, or a single robot with the intelligence, motor skills, and financial resources of thousands of humans and their computers, factories, banks, etc. How many thousands? No way to know for sure. But consider that human societies of thousands, once isolated, have lost even basic pre-industrial technologies. Tasmania is a well-known example.
An important need is for metrics that can tell us, in practical terms, the rate at which artificial intelligence is permeating society. Arithmetic, the Turing Test, and chess are interesting but do not fit the bill alone, though they could contribute as factors in a richer, composite metric. Another potentially useful factor is the rate at which new AI applications become available over time. Such a metric should be debated and converged upon by society.
Notes
"...money on Connecticut income tax form CT-1040, line 53...": see for example http://www.ct.gov/drs/lib/drs/forms/2008forms/incometax/ct-1040.pdf. If your modified Connecticut adjusted gross income was zero in that year, you would have hit a singularity had you chosen to try to fill out the form.
"all your base are belong to us": a passage in the English translation of the Japanese video game Zero Wing that went viral in the early 21st century. See e.g. http://www.youtube.com/watch?v=5fV_KxVwZjU&feature=fvst
"Turing test": see A. M. Turing, Computing Machinery and Intelligence, Mind, Oct. 1950, vol. LIX, no. 236, pp. 433-460. E.g. http://mind.oxfordjournals.org/cgi/reprint/LIX/236/433.
ELIZA: J. Weizenbaum, Contextual Understanding by Computers, Communications of the ACM, vol. 10, no. 8, August 1967, pp. 474-480. See also J. Weizenbaum, Computer Power and Human Reason, W. H. Freeman and Co., 1976.
"Loebner Prize": see http://www.loebner.net/Prizef/loebner-prize.html.
"As an amusing side note...": see http://loebner.net/Prizef/minsky.html.
"Progressively increasing computer chess performance": e.g. R. Kurzweil, The Singularity is Near, Penguin Books, 2005, pp. 274-278.
"By the year 2050, develop a team of fully autonomous humanoid robots that can win against the human world soccer champion team." http://124.146.198.189/images/goal1.gif, accessed via http://www.robocup.org/ as of this writing.
"It takes a village to raise a child": Hillary Clinton, It Takes a Village: And Other Lessons Children Teach Us, Simon & Schuster, 1996.
"L. E. Read's 'I, Pencil'": The Freeman, December 1958. Also http://en.wikisource.org/wiki/I,_Pencil, etc.
"human societies of thousands, once isolated, have not maintained even basic pre-industrial technologies": G. Clark, A Farewell to Alms: A Brief Economic History of the World, Princeton University Press, 2007.