
“The Artificial Intelligence Revolution” Interview Featured On Blog Talk Radio

My interview on Johnny Tan’s program (From My Mama’s Kitchen®) is featured as one of “Today’s Best” on Blog Talk Radio’s home page. This is a great honor. Below is the player from our interview. It displays a slide show of my picture as well as the book cover while it plays the interview.


Louis Del Monte FMMK Talk Radio Interview on The Artificial Intelligence Revolution

You can listen and/or download my interview with Johnny Tan of FMMK talk radio discussing my new book, The Artificial Intelligence Revolution. We discuss and explore the potential benefits and threats strong artificially intelligent machines pose to humankind.

Click here to listen or download the interview “The Artificial Intelligence Revolution”

When Will A Computer Equal a Human Brain?

If we want to view the human brain in terms of a computer, one approach would be to take the number of calculations per second that an average human brain is able to process and compare that with today’s best computers. This is not an exact science. No one really knows how many calculations per second an average human brain is able to process, but some estimates (www.rawstory.com/rs/2012/06/18/earths-supercomputing-power-surpasses-human-brain-three-times-over) suggest it is on the order of 36.8 petaflops (a petaflop is equal to one quadrillion calculations per second). Let us compare the human brain’s processing power with the best current computers on record, listed below by year and processing-power achievement.

  • June 18, 2012: IBM’s Sequoia supercomputer system, based at the US Lawrence Livermore National Laboratory (LLNL), reached sixteen petaflops, setting the world record and claiming first place on the TOP500 list (a ranking of the five hundred fastest computers, as measured by the LINPACK benchmark, which tests their ability to solve a dense system of linear equations).
  • November 12, 2012: The TOP500 list certified Titan, developed by Cray Inc. and housed at the Oak Ridge National Laboratory, as the world’s fastest supercomputer per the LINPACK benchmark, at 17.59 petaflops.
  • June 10, 2013: China’s Tianhe-2 was ranked the world’s fastest supercomputer, with a record of 33.86 petaflops.

Using Moore’s law (in the form that computer processing power doubles every eighteen months), we can extrapolate that in terms of raw processing power (petaflops), computers will meet or exceed the processing power of the human mind by about 2015 to 2017. This does not mean that by 2017 we will have a computer that is equal to the human mind. Software plays a key role in turning raw processing power into anything resembling AI.
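As a back-of-the-envelope check, the extrapolation above can be sketched in a few lines of Python. The 36.8-petaflop brain estimate and the machine figures are the ones cited in this post; the fixed eighteen-month doubling is the post’s stated form of Moore’s law, so treat this as a rough model rather than a precise forecast:

```python
from math import log2

BRAIN_PFLOPS = 36.8     # estimated human-brain processing power (petaflops)
DOUBLING_YEARS = 1.5    # processing power doubles every eighteen months

def years_until_parity(current_pflops: float) -> float:
    """Years for current_pflops, doubling every 18 months, to reach BRAIN_PFLOPS."""
    return DOUBLING_YEARS * log2(BRAIN_PFLOPS / current_pflops)

# Supercomputers cited above: (name, approximate decimal year, petaflops)
for name, year, pflops in [("Sequoia", 2012.5, 16.0),
                           ("Titan", 2012.9, 17.59),
                           ("Tianhe-2", 2013.4, 33.86)]:
    print(f"{name}: brain-scale raw power around {year + years_until_parity(pflops):.1f}")
```

Under these assumptions, each machine reaches brain-scale raw throughput within about two years of its debut; given the large uncertainty in the brain estimate itself, that is broadly consistent with the 2015-to-2017 window.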

To understand the critical role that software plays, we must understand what we are asking AI to accomplish in emulating human intelligence. Here is a thumbnail sketch of the capabilities that researchers consider necessary.

  • Reasoning: step-by-step reasoning that humans use to solve problems or make logical decisions
  • Knowledge: extensive knowledge, similar to what an educated human would possess
  • Planning: the ability to set goals and achieve them
  • Learning: the ability to acquire knowledge through experience and use that knowledge to improve
  • Language: the ability to understand the languages humans speak and write
  • Moving: the ability to move and navigate, including knowing where it is relative to other objects and obstacles
  • Manipulation: the ability to secure and handle an object
  • Vision: the ability to analyze visual input, including facial and object recognition
  • Social intelligence: the ability to recognize, interpret, and process human psychology and emotions and respond appropriately
  • Creativity: the ability to generate outputs that can be considered creative or the ability to identify and assess creativity

This list makes clear that raw computer processing and sensing are only two elements in emulating the human mind. Obviously software is also a critical element. Each of the capabilities delineated above requires a computer program. To emulate a human mind, the computer programs would need to act both independently and interactively, depending on the specific circumstance.

In terms of raw computer processing, with the development of China’s Tianhe-2 computer, we are on the threshold of having a computer with the raw processing power of a human mind. The development of a computer that will emulate a human mind, however, may still be one, two, or even more decades away, due to software and sensing requirements.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Is Strong Artificial Intelligence a New Life-Form? – Part 4/4 (Conclusion)

In our previous posts, we discussed the growing awareness that SAMs (i.e., strong artificially intelligent machines) may become hostile toward humans, and that AI remains an unregulated branch of engineering. Meanwhile, per Moore’s law, the computer you buy eighteen months from now will be roughly twice as capable as the one you can buy today.

Where does this leave us regarding the following questions?

  • Is strong AI a new life-form?
  • Should we afford these machines “robot” rights?

In his 1999 book The Age of Spiritual Machines, Kurzweil predicted that in 2099 organic humans will be protected from extermination and respected by strong AI, regardless of their shortcomings and frailties, because they gave rise to the machines. To my mind the possibility of this scenario eventually playing out is questionable. Although I believe a case can be made that strong AI is a new life-form, we need to be extremely careful with regard to granting SAMs rights, especially rights similar to those possessed by humans. Anthony Berglas expresses it best in his 2008 book Artificial Intelligence Will Kill Our Grandchildren, in which he notes:

  • There is no evolutionary motivation for AI to be friendly to humans.
  • AI would have its own evolutionary pressures (i.e., competing with other AIs for computer hardware and energy).
  • Humankind would find it difficult to survive a competition with more intelligent machines.

Based on the above, carefully consider the following question. Should SAMs be granted machine rights? Perhaps in a limited sense, but we must maintain the right to shut down the machine as well as limit its intelligence. If our evolutionary path is to become cyborgs, this is a step we should take only after understanding the full implications. We need to decide when (under which circumstances), how, and how quickly we take this step. We must control the singularity, or it will control us. Time is short because the singularity is approaching with the stealth and agility of a leopard stalking a lamb, and for the singularity, the lamb is humankind.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Is Strong Artificial Intelligence a New Life-Form? – Part 2/4

In our last post we raised questions regarding the ethics of technology, which is typically divided into two categories.

  1. Roboethics: This category focuses on the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings.
  2. Machine ethics: This category focuses on the moral behavior of artificial moral agents (AMAs).

Let us start by discussing roboethics. In 2002 Italian engineer Gianmarco Veruggio coined the term “roboethics,” which refers to the morality of how humans design, construct, use, and treat robots and other artificially intelligent beings. Specifically, it considers how AI may be used to benefit and/or harm humans. This raises the question of robot rights: what moral obligations does society have toward its artificially intelligent machines? In many ways this question parallels the moral obligations of society toward animals. For computers with strong AI, it may even parallel the concept of human rights, such as the right to life, liberty, freedom of thought and expression, and even equality before the law.

How seriously should we take roboethics? At this point no intelligent machine completely emulates a human brain. Kurzweil, however, predicts that such a machine will exist by 2029. By some accounts he is a pessimist, as bioethicist Glenn McGee predicts that humanoid robots may appear by 2020. Although predictions regarding AI are often optimistic, Kurzweil, as mentioned, has been on target about 94 percent of the time. Therefore it is reasonable to believe that within a decade or two we will have machines that fully emulate a human brain. Based on this, we must take seriously both the concept of robot rights and the implications of granting them. In fact this is beginning to occur: the issue of robot rights has been under consideration by the Institute for the Future and by the UK Department of Trade and Industry (“Robots Could Demand Legal Rights,” BBC News, December 21, 2006).

At first the entire concept of robot rights may seem absurd. Since we do not have machines that emulate a human brain exactly, this possibility does not appear to be in our national consciousness. Let us fast-forward to 2029, however, and assume Kurzweil’s prediction is correct. Suddenly we have artificial intelligence that is on equal footing with human intelligence and appears to exhibit human emotion. Do we, as a nation, concede that we have created a new life-form? Do we grant robots rights under the law? The implications are serious; if we grant strong-AI robots rights equal to human rights, we may be giving up our right to control the singularity. Indeed robot rights eventually may override human rights.

Consider this scenario. As humans, we have inalienable rights, namely the right to life, liberty, and the pursuit of happiness (not all political systems agree with this). In the United States, the Bill of Rights protects our rights. If we give machines with strong AI the same rights, will we be able to control the intelligence explosion once each generation of strong-AI machines (SAMs) designs another generation with even greater intelligences? Will we have the right to control the machines? Will we be able to decide how the singularity unfolds?

We adopted animal rights to protect animals in circumstances in which they are unable to protect themselves. We saw this as humane and necessary. However, animal rights do not parallel human rights. In addition, humankind reserves the right to exterminate any organism (such as the smallpox virus) that threatens humankind’s existence. Intelligent machines pose a threat similar to, and perhaps even more dangerous than, that of extremely harmful pathogens (viruses and bacteria), which makes the entire issue of robot rights all the more important. If machines gain rights equal to those of humans, there is little doubt that eventually the intelligence of SAMs will eclipse that of humans, and no law would prevent it. At that point will machines demand greater rights than humans? Will machines pose a threat to human rights? This brings us to another critical question: What moral obligations (machine ethics) do intelligent machines have toward humankind?

We will address the above question in our next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte