

Predicting the Singularity

Futurists differ on the technical marvels and cultural changes that will precede the singularity. In this context, let us define the singularity as the point in time when an artificially intelligent machine exceeds the combined cognitive intelligence of the entire human race. In short, there is no widely accepted vision of the decade leading up to the singularity, and there are reasons why this is the case.

The most obvious reason is that futurists differ on when the singularity will occur. Respected artificial intelligence technology futurists, such as Ray Kurzweil and the late James Martin (1933–2013), predict the singularity will occur on or about 2045. At the 2012 Singularity Summit, Stuart Armstrong, a University of Oxford James Martin research fellow, conducted a poll regarding artificial general intelligence (AGI) predictions (i.e., the timing of the singularity) and found a median value of 2040. If you scour the Internet, you can find predictions that are substantially earlier and others a full century later. Therefore, let me preface everything I say with “caveat emptor,” Latin for “Let the buyer beware.” In this context, you may interpret it as “Let the reader be skeptical.” Although I strongly believe that my predictions regarding the singularity are correct, I also urge the reader to be skeptical and to examine each prediction using their own judgment to ascertain its validity.

After much research and thought, I have concluded that the world will experience the singularity between 2040 and 2045. In effect, I agree with Kurzweil, Martin, and the 2012 Armstrong survey. That suggests the singularity will occur within the next twenty-five years. I’ll explain how I arrived at this projection in the next post.


Should We Consider Strong Artificially Intelligent Machines (SAMs) A New Life-Form?

What is a strong artificially intelligent machine (SAM)? It is a machine whose intelligence equals that of a human being. Although no SAM currently exists, many artificial intelligence researchers project that SAMs will exist by the mid-twenty-first century. This has major implications and raises an important question: Should we consider SAMs a new life-form? Numerous philosophers and AI researchers have addressed this question. Indeed, the concept of artificial life dates back to ancient myths and stories, the best known of which is Mary Shelley’s novel Frankenstein, first published in 1818. It was not until 1986, however, that American computer scientist Christopher Langton formally established the scientific discipline that studies artificial life (i.e., A-life).

No current definition of life considers any A-life simulations to be alive in the traditional sense (i.e., constituting a part of the evolutionary process of any ecosystem). That view of life, however, is beginning to change as artificial intelligence comes closer to emulating a human brain. For example, Hungarian-born American mathematician John von Neumann (1903–1957) asserted, “life is a process which can be abstracted away from any particular medium.” In effect, this suggests that strong AI represents a new life-form, namely A-life.

In the early 1990s, ecologist Thomas S. Ray asserted that his Tierra project, a computer simulation of artificial life, did not simulate life in a computer, but synthesized it. This begs the following question, “How do we define A-life?”

The earliest description of A-life that comes close to a definition emerged from an official conference announcement in 1987 by Christopher Langton, published subsequently in the 1989 book Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems:

Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on Earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.

There is little doubt that both philosophers and scientists lean toward recognizing A-life as a new life-form. For example, noted philosopher and science fiction writer Sir Arthur Charles Clarke (1917–2008) wrote in his book 2010: Odyssey Two, “Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.” Noted cosmologist and physicist Stephen Hawking (1942–2018) darkly speculated during a speech at the Macworld Expo in Boston, “I think computer viruses should count as life. I think it says something about human nature that the only form of life we have created so far is purely destructive. We’ve created life in our own image” (Daily News, August 4, 1994). The main point is that we are likely to consider strong AI a new form of life.

After reading this post, what do you think?


Louis Del Monte FMMK Talk Radio Interview on The Artificial Intelligence Revolution

You can listen to and/or download my interview with Johnny Tan of FMMK talk radio about my new book, The Artificial Intelligence Revolution. We discuss the potential benefits and threats strong artificially intelligent machines pose to humankind.

Click here to listen or download the interview “The Artificial Intelligence Revolution”


Is Strong Artificial Intelligence a New Life-Form? – Part 4/4 (Conclusion)

In our previous posts, we discussed the growing awareness that SAMs (i.e., strong artificially intelligent machines) may become hostile toward humans, that AI remains an unregulated branch of engineering, and that the computer you buy eighteen months from now will be twice as capable as the one you can buy today.

Where does this leave us regarding the following questions?

  • Is strong AI a new life-form?
  • Should we afford these machines “robot” rights?

In his 1990 book The Age of Intelligent Machines, Kurzweil predicted that in 2099 organic humans would be protected from extermination and respected by strong AI, regardless of their shortcomings and frailties, because they gave rise to the machines. To my mind, the possibility of this scenario eventually playing out is questionable. Although I believe a case can be made that strong AI is a new life-form, we need to be extremely careful with regard to granting SAMs rights, especially rights similar to those possessed by humans. Anthony Berglas expresses it best in his 2008 book Artificial Intelligence Will Kill Our Grandchildren, in which he notes:

  • There is no evolutionary motivation for AI to be friendly to humans.
  • AI would have its own evolutionary pressures (i.e., competing with other AIs for computer hardware and energy).
  • Humankind would find it difficult to survive a competition with more intelligent machines.

Based on the above, carefully consider the following question: Should SAMs be granted machine rights? Perhaps in a limited sense, but we must maintain the right to shut down the machine as well as to limit its intelligence. If our evolutionary path is to become cyborgs, this is a step we should take only after understanding the full implications. We need to decide when (under which circumstances), how, and how quickly we take this step. We must control the singularity, or it will control us. Time is short because the singularity is approaching with the stealth and agility of a leopard stalking a lamb, and for the singularity, the lamb is humankind.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 3/4

Can we expect an artificially intelligent machine to behave ethically? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs), robots, or artificially intelligent computers that behave morally. This thrust is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now famous three laws of robotics.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.
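
As a toy illustration of my own (not from Asimov or the source book), the three laws can be read as a strict priority ordering over a robot’s candidate actions. Even this simple encoding exposes the weakness Asimov identified: everything hinges on judgments such as “harms a human,” which no rule set can specify for every circumstance.

```python
# Toy sketch (my own illustration, not from Asimov or the source book): the three
# laws read as a strict priority ordering over candidate actions. Each action is
# described by hand-labeled boolean flags; in practice, computing a label such as
# "harms_human" is exactly the unsolved problem Asimov's stories expose.

def law_violations(action):
    """Return a tuple ordered by priority: First Law, Second Law, Third Law.
    Python compares tuples element by element, so a First Law violation always
    outweighs any number of lower-law violations."""
    return (
        action["harms_human"] or action["allows_harm_by_inaction"],  # First Law
        action["disobeys_human_order"],                              # Second Law
        action["endangers_robot"],                                   # Third Law
    )

def choose_action(candidates):
    """Pick the candidate action that violates the highest-priority law least."""
    return min(candidates, key=law_violations)

if __name__ == "__main__":
    candidates = [
        {"name": "stand by",        "harms_human": False, "allows_harm_by_inaction": True,
         "disobeys_human_order": False, "endangers_robot": False},
        {"name": "pull human back", "harms_human": False, "allows_harm_by_inaction": False,
         "disobeys_human_order": True,  "endangers_robot": True},
    ]
    print(choose_action(candidates)["name"])  # prints "pull human back"
```

The fragility is apparent: the ordering only works because the flags were labeled by hand, and deciding whether a real-world action “harms a human being” is precisely where the laws break down.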

To understand just how correct he was, let us discuss a 2009 experiment performed by the Laboratory of Intelligent Systems in the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomous functions. If this seems farfetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com). Could we end up with a Terminator scenario (one in which machines attempt to exterminate the human race)? This issue is real, and researchers are addressing it to a limited extent. Some examples include:

  • In 2008 the president of the Association for the Advancement of Artificial Intelligence commissioned a study titled “AAAI Presidential Panel on Long-Term AI Futures.” Its main purpose was to address the aforementioned issue. AAAI’s interim report can be accessed at http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm.
  • Popular science-fiction author Vernor Vinge suggests in his writings that the scenario of some computers becoming smarter than humans may be somewhat or possibly extremely dangerous for humans (Vernor Vinge, “The Coming Technological Singularity: How to Survive in the Post-Human Era,” Department of Mathematical Sciences, San Diego State University, 1993).
  • In 2009 academics and technical experts held a conference to discuss the hypothetical possibility that intelligent machines could become self-sufficient and able to make their own decisions (John Markoff, “Scientists Worry Machines May Outsmart Man,” The New York Times, July 26, 2009). They noted: 1) Some machines have acquired various forms of semiautonomy, including being able to find power sources and independently choose targets to attack with weapons. 2) Some computer viruses can evade elimination and have achieved “cockroach intelligence.”
  • The Singularity Institute for Artificial Intelligence stresses the need to build “friendly AI” (i.e., AI that is intrinsically friendly and humane). In this regard Nick Bostrom, a Swedish philosopher at St. Cross College at the University of Oxford, and Eliezer Yudkowsky, an American blogger, writer, and advocate for friendly artificial intelligence, have argued for decision trees over neural networks and genetic algorithms. They argue that decision trees obey modern social norms of transparency and predictability (a minimal illustration follows this list). Bostrom also published a paper, “Existential Risks,” in the Journal of Evolution and Technology that states artificial intelligence has the capability to bring about human extinction.
  • In 2009 authors Wendell Wallach and Colin Allen addressed the question of machine ethics in Moral Machines: Teaching Robots Right from Wrong (New York: Oxford University Press). In this book they brought greater attention to the controversial issue of which specific learning algorithms to use in machines.
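
To make the transparency argument concrete, here is a minimal sketch of my own (not code from Bostrom or Yudkowsky, and the policy and thresholds are invented): a hand-written decision tree whose reasoning can be printed and audited branch by branch, in contrast to a trained neural network, whose decision is buried in numeric weights.

```python
# A hand-coded decision "tree" for a toy approval decision. The policy and
# thresholds are invented for illustration; the point is that every branch can
# be printed and audited, whereas a trained neural network offers only a pile
# of numeric weights.

def decide(applicant, explain=False):
    trace = []
    if applicant["income"] >= 50_000:
        trace.append("income >= 50,000")
        if applicant["debt_ratio"] <= 0.4:
            trace.append("debt_ratio <= 0.4")
            decision = "approve"
        else:
            trace.append("debt_ratio > 0.4")
            decision = "refer to a human reviewer"
    else:
        trace.append("income < 50,000")
        decision = "decline"
    if explain:
        print(" -> ".join(trace) + " => " + decision)
    return decision

decide({"income": 62_000, "debt_ratio": 0.3}, explain=True)
# prints: income >= 50,000 -> debt_ratio <= 0.4 => approve
```

The same decision could be learned by a neural network, but its rationale would then live in weight matrices rather than in rules a user or regulator could read.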

While the above discussion indicates there is an awareness that SAMs may become hostile toward humans, no legislation or regulation has resulted. AI remains an unregulated branch of engineering, and the computer you buy eighteen months from now will be twice as capable as the one you can buy today.

Where does this leave us? We will address the key questions in the next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


Is Strong Artificial Intelligence a New Life-Form? – Part 1/4

When an intelligent machine fully emulates the human brain in every regard (i.e., it possesses strong AI), should we consider it a new life-form?

The concept of artificial life (“A-life” for short) dates back to ancient myths and stories. Arguably the best known of these is Mary Shelley’s novel Frankenstein. It was not until 1986, however, that American computer scientist Christopher Langton formally established the scientific discipline that studies A-life. This discipline recognizes three categories of artificial life (i.e., machines that imitate traditional biology by trying to re-create some aspects of biological phenomena):

  • Soft: from software-based simulations
  • Hard: from hardware-based simulations
  • Wet: from biochemistry-based simulations

For our purposes, I will focus only on the first two, since they apply to artificial intelligence as we commonly discuss it today. The category of “wet,” however, someday also may apply to artificial intelligence—if, for example, science is able to grow biological neural networks in the laboratory. In fact there is an entire scientific field known as synthetic biology, which combines biology and engineering to design and construct biological devices and systems for useful purposes. Synthetic biology currently is not being incorporated into AI simulations and is not likely to play a significant role in AI emulating a human brain. As synthetic biology and AI mature, however, they may eventually form a symbiotic relationship.

No current definition of life considers any A-life simulations to be alive in the traditional sense (i.e., constituting a part of the evolutionary process of any ecosystem). That view of life, however, is beginning to change as artificial intelligence comes closer to emulating a human brain. For example Hungarian-born American mathematician John von Neumann (1903–1957) asserted that “life is a process which can be abstracted away from any particular medium.” In particular this suggests that strong AI (artificial intelligence that completely emulates a human brain) could be considered a life-form, namely A-life.

This is not a new assertion. In the early 1990s, ecologist Thomas S. Ray asserted that his Tierra project (a computer simulation of artificial life) did not simulate life in a computer but synthesized it. This begs the following question: How do we define A-life?

The earliest description of A-life that comes close to a definition emerged from an official conference announcement in 1987 by Christopher Langton that was published subsequently in the 1989 book Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems.

Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.

Kurzweil predicts that intelligent machines will have equal legal status with humans by 2099. As stated previously, his batting average regarding these types of predictions is about 94 percent. Therefore it is reasonable to believe that intelligent machines that emulate and exceed human intelligence eventually will be considered a life-form. In this and later posts, however, I discuss the potential threats this poses to humankind. For example what will this mean in regard to the relationship between humans and intelligent machines? This question relates to the broader issue of the ethics of technology, which is typically divided into two categories.

  1. Roboethics: This category focuses on the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings.
  2. Machine ethics: This category focuses on the moral behavior of artificial moral agents (AMAs).

We will discuss the above categories in the upcoming posts, as we continue to address the question: “Is Strong AI a New Life-Form?”

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte


The Beginning of Artificial Intelligence – Part 2/2 (Conclusion)

AI research funding was a roller-coaster ride from the mid-1960s through about the mid-1990s, experiencing incredible highs and lows. By the late 1990s through the early part of the twenty-first century, however, AI research began a resurgence, finding new applications in logistics, data mining, medical diagnosis, and numerous areas throughout the technology industry. Several factors led to this success.

  • Computer hardware computational power was now getting closer to that of a human brain (i.e., in the best case about 10 to 20 percent of a human brain).
  • Engineers placed emphasis on solving specific problems that did not require AI to be as flexible as a human brain.
  • New ties were forged between AI and other fields working on similar problems.

AI was definitely on the upswing. AI itself, however, was not being spotlighted. It was now cloaked behind the application, and a new phrase found its way into our vocabulary: the “smart (fill in the blank)”—for example the “smartphone.” Here are some of the more visible accomplishments of AI over the last fifteen years.

  • In 1997 IBM’s chess-playing computer Deep Blue became the first computer to beat world-class chess champion Garry Kasparov. In a six-game match, Deep Blue prevailed by two wins to one, with three draws. Until this point no computer had been able to beat a chess grand master. This win garnered headlines worldwide and was a milestone that embedded the reality of AI into the consciousness of the average person.
  • In 2005 a robot conceived and developed at Stanford University drove autonomously for 131 miles along an unrehearsed desert trail, winning the DARPA Grand Challenge (the government’s Defense Advanced Research Projects Agency prize for a driverless vehicle).
  • In 2007 Boss, Carnegie Mellon University’s self-driving SUV, made history by swiftly and safely driving fifty-five miles in an urban setting while sharing the road with human drivers, and won the DARPA Urban Challenge.
  • In 2010 Microsoft launched the Kinect motion sensor, which provides a 3-D body-motion interface for Xbox 360 games and Windows PCs. According to Guinness World Records, the Kinect holds the record for the “fastest-selling consumer electronics device,” selling eight million units in its first sixty days (by the early part of 2011). By January 2012 twenty-four million Kinect sensors had been shipped.
  • In 2011, in an exhibition match on the popular TV quiz show Jeopardy!, an IBM computer named Watson defeated Jeopardy!’s greatest champions, Brad Rutter and Ken Jennings.
  • In 2010 and 2011, Apple made Siri voice-recognition software available in the Apple app store for various applications, such as integrating it with Google Maps. In the latter part of 2011, Apple integrated Siri into the iPhone 4S and removed the Siri application from its app store.
  • In 2012 “scientists at Universidad Carlos III in Madrid…presented a new technique based on artificial intelligence that can automatically create plans, allowing problems to be solved with much greater speed than current methods provide when resources are limited. This method can be applied in sectors such as logistics, autonomous control of robots, fire extinguishing and online learning” (www.phys.org, “A New Artificial Intelligence Technique to Speed the Planning of Tasks When Resources Are Limited”).

The above list shows just some of the highlights. AI is now all around us—in our phones, computers, cars, microwave ovens, and almost any consumer or commercial electronic systems labeled “smart.” Funding is no longer solely controlled by governments but is now being underpinned by numerous consumer and commercial applications.

The road to the “expert system” and the “smart (anything)” ran through specific, well-defined applications. By the first decade of the twenty-first century, expert systems had become commonplace. It became normal to talk to a computer when ordering a pharmaceutical prescription and to expect your smartphone/automobile navigation system to give you turn-by-turn directions to the pharmacy. AI clearly was becoming an indispensable element of society in highly developed countries. One ingredient, however, continued to be missing. That ingredient was human affects (i.e., the feeling and expression of human emotions). If you called the pharmacy for a prescription, the AI program did not show any empathy. If you talked with a real person at the pharmacy, he or she likely would express empathy, perhaps saying something such as, “I’m sorry you’re not feeling well. We’ll get this prescription filled right away.” If you missed a turn on your way to the pharmacy while getting turn-by-turn directions from your smartphone, it did not get upset or scold you. It simply either told you to make a U-turn or calculated a new route for you.

While it became possible to program some rudimentary elements to emulate human emotions, the computer did not genuinely feel them. For example the computer program might request, “Please wait while we check to see if we have that prescription in stock,” and after some time say, “Thank you for waiting.” However, this was just rudimentary programming to mimic politeness and gratitude. The computer itself felt no emotion.

By the end of the first decade of the twenty-first century, AI slowly had worked its way into numerous elements of modern society. AI cloaked itself in expert systems, which became commonplace. Along with advances in software and hardware, our expectations continued to grow. Waiting thirty seconds for a computer program to do something seemed like an eternity. Getting the wrong directions from a smartphone rarely occurred. Indeed, with the advent of GPS (Global Positioning System, a space-based satellite navigation system), your smartphone gave you directions as well as the exact position of your vehicle and estimated how long it would take for you to arrive at your destination.
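
What recalculating a route after a missed turn amounts to, at its core, is rerunning a shortest-path search from the vehicle’s new position. Here is a minimal sketch of my own (the road network and distances are invented; real navigation systems use far richer map data and heuristics):

```python
# Toy sketch (my own illustration) of what "calculated a new route for you"
# amounts to: when the driver misses a turn, rerun a shortest-path search from
# the car's new position. The road network and distances are invented.

import heapq

ROADS = {  # node -> list of (neighbor, miles)
    "home": [("A", 1.0), ("B", 2.5)],
    "A": [("pharmacy", 3.0), ("B", 1.0)],
    "B": [("pharmacy", 2.0)],
    "wrong_turn": [("B", 0.5)],
    "pharmacy": [],
}

def shortest_route(start, goal):
    """Dijkstra's algorithm: returns (total_miles, list_of_nodes)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        miles, node, path = heapq.heappop(queue)
        if node == goal:
            return miles, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in ROADS[node]:
            if nxt not in seen:
                heapq.heappush(queue, (miles + dist, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_route("home", "pharmacy"))        # original route
print(shortest_route("wrong_turn", "pharmacy"))  # recalculated after a missed turn
```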

Those of us who worked in the semiconductor industry, myself included, knew this outcome—the advances in computer hardware and the emergence of expert systems—was inevitable. Even consumers had a sense of the exponential progress occurring in computer technology. Many consumers complained that their new top-of-the-line computer soon would be a generation behind in as little as two years, meaning that the next generation of faster, more capable computers was available and typically selling at a lower price than their original computers.

This point became painfully evident to those of us in the semiconductor industry. For example, in the early 1990s, semiconductor companies bought their circuit designers workstations (i.e., computer systems that emulated the decision-making ability of a human integrated-circuit design engineer) at roughly $100,000 per workstation. Within about two years, you could buy the same level of computing capability in the consumer market for a small fraction of that cost. We knew this would happen because integrated circuits had been relentlessly following Moore’s law since their inception. What is Moore’s law? I’ll discuss this in the next post.
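
Before getting to Moore’s law itself, the raw arithmetic behind that experience is worth a quick sketch. The following is my own back-of-the-envelope illustration (not from the book), assuming capability doubles every eighteen months, roughly the pace described in these posts:

```python
# Back-of-the-envelope sketch (my own illustration, not from the book): if the
# computer you can buy is twice as capable every eighteen months, the state of
# the art pulls away from any fixed machine surprisingly fast.

DOUBLING_PERIOD_YEARS = 1.5  # assumed: "twice as capable in eighteen months"

for years in (1.5, 3, 6, 12):
    doublings = years / DOUBLING_PERIOD_YEARS
    factor = 2 ** doublings
    print(f"after {years:>4} years, the newest machines are ~{factor:,.0f}x as capable")

# after  1.5 years, the newest machines are ~2x as capable
# after    3 years, the newest machines are ~4x as capable
# after    6 years, the newest machines are ~16x as capable
# after   12 years, the newest machines are ~256x as capable
```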

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Image: iStockPhoto.com (licensed)


The Beginning of Artificial Intelligence – Part 1/2

While the phrase “artificial intelligence” is only about half a century old, the concept of intelligent thinking machines and artificial beings dates back to ancient times. For example the Greek myth “Talos of Crete” tells of a giant bronze man who protected Europa in Crete from pirates and invaders by circling the island’s shores three times daily. Ancient Egyptians and Greeks worshiped animated cult images and humanoid automatons. By the nineteenth and twentieth centuries, intelligent artificial beings became common in fiction. Perhaps the best-known work of fiction depicting this is Mary Shelley’s Frankenstein, first published anonymously in London in 1818 (Mary Shelley’s name first appeared on the second edition, published in 1823). In addition the stories of these “intelligent beings” often spoke to the same hopes and concerns we currently face regarding artificial intelligence.

Logical reasoning, sometimes referred to as “mechanical reasoning,” also has ancient roots, dating back at least to classical Greek philosophers and mathematicians such as Pythagoras and Heraclitus. The concept that mathematical problems are solvable by following a rigorous logical path of reasoning eventually led to computer programming. The British mathematician, logician, cryptanalyst, and computer scientist Alan Turing (1912–1954) suggested that a machine could simulate any mathematical deduction by using “0” and “1” sequences (binary code).
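
As a toy illustration of that idea (my own sketch, not Turing’s actual construction), even a basic rule of logic can be evaluated using nothing but 0s and 1s:

```python
# Toy sketch (my own illustration, not Turing's construction): logical reasoning
# carried out with nothing but 0s and 1s. Truth is 1, falsehood is 0, and the
# logical connectives become tiny operations on those digits.

def NOT(p):        return 1 - p
def AND(p, q):     return p & q
def OR(p, q):      return p | q
def IMPLIES(p, q): return OR(NOT(p), q)   # "if p then q"

# Truth table for modus ponens: from p and (p -> q), conclude q.
print("p q | p->q | p AND (p->q)")
for p in (0, 1):
    for q in (0, 1):
        print(p, q, "|  ", IMPLIES(p, q), " |     ", AND(p, IMPLIES(p, q)))
# Whenever p = 1 and (p -> q) = 1, the last column equals q: the deduction falls
# out of simple operations on 0s and 1s.
```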

The Birth of Artificial Intelligence

Discoveries in neurology, information theory, and cybernetics inspired a small group of researchers—including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon—to begin to consider the possibility of building an electronic brain. In 1956 these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work—and the work of their students—soon amazed the world, as they taught computers to solve algebraic word problems, prove logical theorems, and even speak English.

AI research soon caught the eye of the US Department of Defense (DOD), and by the mid-1960s, the DOD was heavily funding AI research. Along with this funding came a new level of optimism. At that time Herbert Simon predicted, “Machines will be capable, within twenty years, of doing any work a man can do,” and Minsky not only agreed but also added that “within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”

Obviously both had underestimated the level of hardware and software required for replicating the intelligence of a human brain. By setting extremely high expectations, however, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974 funding for AI research began to dry up, both in the United States and Britain, which led to a period called the “AI winter.”

In the early 1980s, AI research began to resurface with the success of expert systems, computer systems that emulate the decision-making ability of a human expert. This meant the computer software was programmed to “think” like an expert in a specific field rather than follow the more general procedure of a software developer, which is the case in conventional programming. By 1985 the funding faucet for AI research had been turned back on and was soon flowing at more than a billion dollars per year.
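
To make the idea concrete, here is a minimal sketch of my own (a toy example, not an actual 1980s product): a handful of if-then rules captured from a hypothetical expert, plus a tiny engine that keeps applying them until no new conclusions follow.

```python
# A minimal toy expert system (my own illustration, not an actual 1980s product):
# if-then rules captured from a domain expert, plus a tiny forward-chaining
# engine that keeps firing rules until no new facts can be derived.

RULES = [
    # (conditions that must all be present, fact to add)
    ({"fever", "cough"},                  "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "recommend_doctor_visit"),
    ({"sneezing", "itchy_eyes"},          "possible_allergy"),
]

def infer(facts):
    """Forward-chain over RULES until no rule adds anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
# -> includes "possible_flu" and "recommend_doctor_visit"
```

Commercial expert systems of the era were vastly larger and used dedicated shells and languages, but the basic shape (expert-authored rules plus an inference engine) is the same.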

However, the faucet again began to run dry by 1987, starting with the failure of the Lisp machine market that same year. The Lisp machine was developed in 1973 by MIT AI lab programmers Richard Greenblatt and Thomas Knight, who formed the company Lisp Machines Inc. This machine was the first commercial, single-user, high-end microcomputer and used Lisp programming (a specific high-level programming language). In a sense it was the first commercial, single-user workstation (i.e., an extremely advanced computer) designed for technical and scientific applications.

Although Lisp machines pioneered many commonplace technologies, including laser printing, windowing systems, computer mice, and high-resolution bit-mapped graphics, to name a few, the market reception for these machines was dismal, with only about seven thousand units sold by 1988, at a price of about $70,000 per machine. In addition Lisp Machines Inc. suffered from severe internal politics regarding how to improve its market position, which caused divisions in the company. To make matters worse, cheaper desktop PCs soon were able to run Lisp programs even faster than Lisp machines. Most companies that produced Lisp machines went out of business by 1990, which led to a second and longer-lasting AI winter.

In the second segment of this post we will discuss: Hardware Plus Software Synergy

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte