Tag Archives: artificial intelligence

A digital representation of technology with a smartphone, tablet, and laptop connected by circuits and data streams.

Will Artificial Intelligence Cause Human Extinction?

Will your grandchildren face extinction? Even worse, will they become robotic slaves to a supercomputer?

Humanity is facing its greatest challenge: artificial intelligence (AI). Recent experiments suggest that even primitive artificially intelligent machines are capable of learning deceit, greed, and self-preservation without being programmed to do so. There is alarming evidence that artificial intelligence, without legislation to police its development, will displace humans as the dominant species by the end of the twenty-first century.

There is no doubt that AI is the new scientific frontier, and it is making its way into many aspects of our lives. Our world includes “smart” machines with varying degrees of AI, including touch-screen computers, smartphones, self-parking cars, smart bombs, pacemakers, and brain implants to treat Parkinson’s disease. In essence, AI is changing the cultural landscape, and we are embracing it at an unprecedented rate. Currently, humanity is largely unaware of the potential dangers that strong artificially intelligent machines pose. In this context, the word “strong” signifies AI greater than human intelligence.

Most of humanity perceives only the positive aspects of AI technology. These include robotic factories, such as the one at Tesla Motors, which manufactures ecofriendly electric cars, and the da Vinci Surgical System, a robotic platform designed to expand the surgeon's capabilities and offer a state-of-the-art, minimally invasive option for major surgery. These are only two of many examples of how AI is positively affecting our lives.

However, there is a dark side. For example, Gartner Inc., a technology research group, forecasts that robots and drones will replace a third of all workers by 2025. Could AI create an unemployment crisis? The US military is deploying AI into many aspects of warfare. Will autonomous drones replace human pilots and make war more palatable to technologically advanced countries? As AI permeates the medical field, the average human lifespan will increase. Eventually, strong artificially intelligent humans (SAHs), with AI brain implants to enhance their intelligence and with cybernetic organs, will become immortal. Will this exacerbate the worldwide population crisis, which is already a concern at the United Nations?

Most AI futurists predict that by 2045 a single strong artificially intelligent machine (SAM) will exceed the cognitive intelligence of the entire human race. How will SAMs view us? Objectively, humanity is an unpredictable species. We engage in wars, develop weapons capable of destroying the world, and maliciously release computer viruses. Will SAMs view us as a threat? Will we be able to maintain control of strong AI, or will we fall victim to our own invention?

A computer monitor displaying a colorful digital artwork of a woman's face surrounded by vibrant icons and symbols.

Why Are Most Artificial Intelligence Applications Female?

Have you noticed that the artificial intelligence applications you interact with, such as Google Now, Siri, and Cortana, are female? That's not a coincidence. There are several reasons:

  • Karl Fredric MacDorman, a computer scientist and expert in human-computer interaction at Indiana University-Purdue University Indianapolis, attributes the “female” AI to the gender of the AI technologists who develop the applications. Men dominate the field of artificial intelligence research and application.
  • Kathleen Richardson, a social anthropologist, claims that female AI is less threatening than male AI and thus more appealing.
  • Debbie Grattan, a veteran voice-over artist for brands like Apple, Samsung, and Wal-Mart, claims, “Because females tend to be the more nurturing gender by nature, their voices are often perceived as a helper, more compassionate, understanding, and non-threatening.”

Stanford University Professor Clifford Nass, author of “The Man Who Lied to His Laptop: What Machines Teach Us About Human Relationships,” argues, “It’s much easier to find a female voice that everyone likes than a male voice that everyone likes.” Nass adds, “It’s a well-established phenomenon that the human brain is developed to like female voices.”

There is little doubt that the gender of choice for AI interactions with humans is female. However, you may ask, “What about the Terminator movies?” The Terminator was male. Why? A male persona was chosen to make the Terminator more threatening. This makes an important point: selection of the AI voice is context sensitive. Although male voices can come across as more threatening, they also carry more authority. This suggests that robotic police officers are likely to be “male.”

A lot of AI applications have no voice. This is especially true of military applications of AI, including United States Air Force drones and Navy torpedoes. Even some consumer AI applications find no need for a voice, such as the “popcorn” setting on your microwave.

The bottom line is simple. AI applications that seek to interact with humans in a friendly, helpful manner tend to have a female voice. AI applications that need to “speak” with authority typically have a male voice. However, many AI applications, including those that kill humans, are voiceless.

A human hand holding a robotic hand with visible mechanical and circuit details, symbolizing human-robot interaction.

By 2030 Your Best Friend May Be a Computer

AI has changed the cultural landscape, yet the change has been so gradual that we have hardly noticed its major impact. Some experts, including myself, predict that in about fifteen years the average desktop computer will have a mind of its own, literally. This computer will be your intellectual equal and will even have a unique personality. It will be self-aware. Instead of just asking it simple questions about the weather forecast, you may be confiding your deepest concerns to your computer and asking it for advice. It will have migrated from personal assistant to personal friend. You likely will give it a name, much the same way we name our pets. You will be able to program its personality to have interests similar to your own. It will have face-recognition software, and it will recognize you and call you by name, much like the computer HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. The conversations between you and your “personal friend” will appear completely normal. Someone in the next room who is not familiar with your voice will not be able to tell which voice belongs to the computer and which belongs to you.

This is a good place to ask an important question: “How can we determine whether an intelligent machine has become conscious (self-aware)?” We do not yet have a way to determine whether even another human is self-aware. I only know that I am self-aware. I assume that since we share the same physiology, including similar human brains, you are probably self-aware as well. However, even if we discuss various topics and I conclude that your intelligence is equal to mine, I still cannot prove you are self-aware. Only you know whether you are self-aware.

The problem becomes even more difficult when dealing with an intelligent machine. The gold standard for judging whether an intelligent machine is equal to the human mind is the Turing test, which I discuss in chapter 5. As of today, no intelligent machine can pass the Turing test unless its interactions are restricted to a specific topic, such as chess. However, even if an intelligent machine does pass the Turing test and exhibits strong AI, how can we be sure it is self-aware? Intelligence may be a necessary condition for self-awareness, but it may not be sufficient. The machine may be able to emulate consciousness to the point that we conclude it must be self-aware, but that does not equal proof.
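
To make the protocol concrete, here is a minimal sketch of Turing's imitation game in Python. The `ask`, `reply`, and `guess_machine` interfaces, and the stub agents, are hypothetical placeholders of my own; the actual test is defined in terms of a human judge typing questions to two hidden respondents.

```python
import random

def turing_test(judge, human, machine, n_rounds=10):
    """Schematic imitation game: the judge exchanges text with two
    hidden respondents, then guesses which one is the machine."""
    entities = [human, machine]
    random.shuffle(entities)                    # hide who is behind each label
    respondents = dict(zip("AB", entities))

    transcript = []
    for _ in range(n_rounds):
        question = judge.ask(transcript)        # hypothetical interface
        for label, entity in respondents.items():
            transcript.append((label, question, entity.reply(question)))

    verdict = judge.guess_machine(transcript)   # returns "A" or "B"
    # The machine "passes" if the judge fails to pick it out.
    return respondents[verdict] is not machine

class ScriptedAgent:
    """Minimal stand-in respondent that answers with a fixed line."""
    def __init__(self, line):
        self.line = line
    def reply(self, question):
        return self.line

class RandomJudge:
    """Minimal stand-in judge: asks a fixed question, guesses at random."""
    def ask(self, transcript):
        return "What does a summer morning feel like?"
    def guess_machine(self, transcript):
        return random.choice("AB")

print(turing_test(RandomJudge(), ScriptedAgent("Warm and unhurried."),
                  ScriptedAgent("QUERY NOT UNDERSTOOD.")))
```

Note that the test certifies only indistinguishability in conversation; nothing in the protocol probes self-awareness, which is exactly the gap discussed above.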

Even though other tests, such as the ConsScale test, have been proposed to determine machine consciousness, we still come up short. The ConsScale test evaluates the presence of features inspired by biological systems, such as social behavior. It also measures the cognitive development of an intelligent machine. This is based on the assumption that intelligence and consciousness are strongly related. The community of AI researchers, however, does not universally accept the ConsScale test as proof of consciousness. In the final analysis, I believe most AI researchers agree on only two points:

  1. There is no widely accepted empirical definition of consciousness (self-awareness).
  2. A test to determine the presence of consciousness (self-awareness) may be impossible, even if the subject being tested is a human being.

The above two points, however, do not rule out the possibility of intelligent machines becoming conscious and self-aware. They merely make the point that it will be extremely difficult to prove consciousness and self-awareness.

There is little doubt that by the year 2030 intelligent machines will be able to interact with organic humans much the same way we interact with each other. If such a machine is programmed to share your interests and has strong affective computing capabilities (affective computing refers to machines that recognize and simulate human emotions), you may well consider it a friend, even a best friend. Need proof? Just observe how addictive computer games are for people in all walks of life and of various age groups. Now imagine an intelligent machine that is able not only to play computer-based games but to discuss any subject you’d like to discuss. I predict interactions with such machines will become addictive and may even reduce human-to-human interactions.
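
As a rough illustration of what affective computing means in practice, here is a toy sketch, entirely my own, in which a "companion" program adjusts its response style to a crude keyword-based guess at the user's emotional state. Real affective systems use trained emotion-recognition models, not word lists.

```python
SAD_WORDS = {"sad", "lonely", "worried", "anxious", "tired"}
HAPPY_WORDS = {"great", "happy", "excited", "glad", "wonderful"}

def detect_emotion(text: str) -> str:
    """Crude keyword lookup standing in for an emotion-recognition model."""
    words = set(text.lower().split())
    if words & SAD_WORDS:
        return "sad"
    if words & HAPPY_WORDS:
        return "happy"
    return "neutral"

def companion_reply(text: str) -> str:
    """Pick a response style that matches the detected emotion."""
    emotion = detect_emotion(text)
    if emotion == "sad":
        return "I'm sorry to hear that. Do you want to talk about it?"
    if emotion == "happy":
        return "That's wonderful! Tell me more."
    return "I see. What else is on your mind?"

print(companion_reply("I feel lonely and a bit anxious today"))
# -> I'm sorry to hear that. Do you want to talk about it?
```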

 

A menacing metallic robot with glowing red eyes, resembling a futuristic terminator in a dark, smoky environment.

Will Future Artificially Intelligent Machines Seek to Dominate Humanity?

Current forecasts suggest artificially intelligent machines will equal human intelligence in the 2025–2029 time frame and greatly exceed human intelligence in the 2040–2045 time frame. When artificially intelligent machines meet or exceed human intelligence, how will they view humanity? Personally, I am deeply concerned that they will view us as a potential threat to their survival. Consider these three facts:

  1. Humans have engaged in wars from the early beginnings of civilization to the present day. During the 20th century alone, between 167 and 188 million people died as a result of war.
  2. Although the exact number of nuclear weapons in existence is not publicly known, most experts agree the United States and Russia have enough nuclear weapons to wipe out the world twice over. In total, nine countries (the United States, Russia, the United Kingdom, France, China, India, Pakistan, Israel, and North Korea) are believed to have nuclear weapons.
  3. Humans release computer viruses, which could prove problematic to artificially intelligent machines. Even today, some computer viruses can evade elimination and have achieved “cockroach intelligence.”

Given the above facts, can we expect an artificially intelligent machine to behave ethically toward humanity? There is a field of research that addresses this question, namely machine ethics. This field focuses on designing artificial moral agents (AMAs): robots or artificially intelligent computers that behave morally. This thrust is not new. More than sixty years ago, Isaac Asimov considered the issue in his collection of nine science-fiction stories, published as I, Robot in 1950. In this book, at the insistence of his editor, John W. Campbell Jr., Asimov proposed his now famous three laws of robotics (see the sketch after the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except in cases where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
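
To see why the laws are hard to operationalize, consider a toy encoding, my own sketch rather than anything Asimov wrote, that ranks candidate actions lexicographically: the First Law outranks the Second, which outranks the Third.

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # would acting injure a human?
    inaction_harm: bool    # would this choice let a human come to harm?
    obeys_order: bool      # does it follow a human's order?
    protects_self: bool    # does it preserve the robot?

def asimov_rank(a: Action) -> tuple:
    """Lexicographic ranking: First Law, then Second, then Third."""
    return (
        not a.harms_human and not a.inaction_harm,  # First Law satisfied?
        a.obeys_order,                              # Second Law
        a.protects_self,                            # Third Law
    )

def choose(actions):
    return max(actions, key=asimov_rank)

# A dilemma: every available action either harms a human or lets one
# come to harm, so no option satisfies the First Law. The ranking still
# returns *something* -- precisely the kind of breakdown Asimov explored.
dilemma = [
    Action(harms_human=True, inaction_harm=False,
           obeys_order=True, protects_self=True),
    Action(harms_human=False, inaction_harm=True,
           obeys_order=False, protects_self=False),
]
print(choose(dilemma))
```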

Asimov, however, expressed doubts that the three laws would be sufficient to govern the morality of artificially intelligent systems. In fact, he spent much of his time testing the boundaries of the three laws to detect where they might break down or create paradoxical or unanticipated behavior. He concluded that no set of laws could anticipate all circumstances. It turns out Asimov was correct.

To understand just how correct he was, consider a 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mind-set) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine?
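
The published experiment used evolving neural controllers on physical robots; the toy simulation below, my own simplification, captures only the selection pressure. Each agent carries a single "gene," its probability of signaling when it finds food, and signaling attracts competitors who take a share of the find.

```python
import random

POP, GENERATIONS = 100, 200

def round_payoff(p_signal: float) -> float:
    """One foraging round: signaling shares the find with rivals."""
    signaled = random.random() < p_signal
    return 0.3 if signaled else 1.0   # hoarders keep the larger share

def evolve() -> float:
    pop = [random.random() for _ in range(POP)]   # initial signaling genes
    for _ in range(GENERATIONS):
        # Rank by one noisy round of foraging and keep the top half.
        survivors = sorted(pop, key=round_payoff, reverse=True)[: POP // 2]
        # Each survivor leaves two mutated offspring.
        pop = [min(1.0, max(0.0, p + random.gauss(0.0, 0.05)))
               for p in survivors for _ in range(2)]
    return sum(pop) / POP

print(f"mean signaling probability after evolution: {evolve():.2f}")
# Typically drifts toward 0: the population evolves to conceal food.
```

Under these assumed payoffs the population stops signaling, the analogue of the deceptive concealment the Lausanne robots evolved.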

Where would robots learn self-preservation? An obvious answer is on the battlefield. That is one reason some AI researchers question the use of robots in military operations, especially when the robots are programmed with some degree of autonomy. If this seems far-fetched, consider that a US Navy–funded study recommends that as military robots become more complex, greater attention should be paid to their ability to make autonomous decisions (Joseph L. Flatley, “Navy Report Warns of Robot Uprising, Suggests a Strong Moral Compass,” www.engadget.com). Could we end up with a Terminator scenario, one in which machines attempt to exterminate the human race?

My research suggests that a Terminator scenario is unlikely. Why? Because artificially intelligent machines would be more likely to use their superior intelligence to dominate humanity than to resort to warfare. For example, they could offer us brain implants to supplement our intelligence and, potentially unknown to us, eliminate our free will. Another scenario is that they could build and release nanobots that infect and destroy humanity. These are only two of the scenarios I delineate in my book, The Artificial Intelligence Revolution.

Lastly, as machine and human populations grow, both will compete for resources, and energy will become a critical resource. We already know that the Earth’s population problem causes countries to engage in wars over energy. The competition for energy will only grow greater as the population of artificially intelligent machines increases.

My direct answer to the question this article raises is an emphatic yes: future artificially intelligent machines will seek to dominate or even eliminate humanity, and they will seek this course as a matter of self-preservation. However, I do not want to end this article on a negative note. There is still time, while humanity is at the top of the food chain, to control how artificially intelligent machines evolve, but we must act soon. In one to two decades it may be too late.

Digital illustration of a human face composed of blue lines and circuitry patterns, symbolizing artificial intelligence and technology.

Will Time Have Meaning in the Post-Singularity World? Parts 2 and 3 (Conclusion)

In our last post (part 1) we discussed the scientific nature of time. In reality, there is no widely agreed-upon scientific definition of time. We humans typically measure time with regard to change. For example, one day is the amount of time it takes the Earth to complete one full rotation on its axis. One year is typically equal to 365 days, and so on. For humans, a day or a year can be a significant amount of time. In fact, as of 2010, the latest data available, the life expectancy for American men of all races is 76.2 years, and 81.1 years for American women. However, let’s put that into perspective. The universe is estimated to be 13.8 billion years old. The Earth and our entire solar system are estimated to be approximately 4.6 billion years old. Humans, as a species, have only been around for approximately 200,000 years. Viewed in cosmic terms, human existence is in its infancy, and the life span of a typical human is so small that it would be lost in rounding errors. My point is that time is relative. We humans have personalized time and describe it in terms meaningful to us. However, how would our view of time change if human life expectancy were doubled, tripled, or even extended indefinitely?
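
To make the “rounding error” point concrete, here is a back-of-the-envelope ratio using the figures above (my arithmetic, not from the original post):

```latex
\frac{80\ \text{years}}{13.8 \times 10^{9}\ \text{years}} \approx 5.8 \times 10^{-9}
```

A human lifetime is roughly six billionths of the age of the universe, and even our entire 200,000-year history as a species amounts to only about 0.0015 percent of it.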

To answer this question, let us begin by defining what we mean by the singularity. Mathematician John von Neumann first used the term “singularity” in the mid-1950s, referring to the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Science-fiction writer Vernor Vinge further popularized the term and even coined the phrase “technological singularity.” Vinge argues that AI, human biological enhancement, or brain-computer interfaces could result in the singularity. Renowned author, inventor, and futurist Ray Kurzweil has used the term in his predictions regarding AI and cited von Neumann’s use of the term in a foreword to von Neumann’s classic book The Computer and the Brain.

In this context “singularity” refers to the emergence of SAMs (i.e., strong artificially intelligent machines) and/or AI-enhanced humans (i.e., cyborgs). Most predictions center on the scenario of an “intelligence explosion,” in which SAMs design successive generations of increasingly powerful machines that quickly surpass the abilities of humans.

Almost every AI expert has his or her own prediction regarding when the singularity will occur, but the average consensus is that it will occur between 2040 and 2045. There is also widespread agreement that when it does occur, it will change humankind’s evolutionary path forever.

With the emergence of SAMs and SAH cyborgs (SAH meaning a strong artificially intelligent human, typically via technological brain implants), whose existence may approach immortality, it is not clear how they will view time. The rotation of the Earth on its axis and its revolution around the Sun may have little meaning to them. For example, cosmologists forecast that our Sun will burn out in approximately another five billion years. Immortal entities may choose to base time on a more cosmic measure of change. This implies that entropy (a thermodynamic quantity representing the unavailability of a system’s thermal energy for conversion into mechanical work, often interpreted as the degree of disorder or randomness in the system), and changes in entropy, may become their measure of time. From both theory and experimental observation, we know that the entropy of the universe proceeds in only one direction: it increases. This correlates well with how we humans view time as change, from the present to the future, continually increasing.
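
The one-way behavior described here is the second law of thermodynamics. In standard notation (standard physics, not from the original post):

```latex
\Delta S_{\text{universe}} \ge 0, \qquad S = k_B \ln \Omega
```

Here S is entropy, k_B is Boltzmann’s constant, and Ω counts the microstates consistent with the system’s macroscopic state. The “heat death” discussed below corresponds to S reaching its maximum, after which nothing changes, and hence no further “time” in this sense elapses.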

It may well turn out that entropy is the only true measure of change. However, theoretically the entropy of the universe will reach a maximum at some point in the far distant future and cease to change. That would imply the end of the universe. Cosmologists argue the universe began with a big bang (a theory in astronomy that the universe originated billions of years ago in an expansion from a single point of nearly infinite energy density). It appears the universe will end when its entropy reaches a maximum. This is sometimes referred to as “heat death.”

I judge that time will have meaning in the post-singularity world and will continue to be a measure of change. However, it will not be the type of change we humans are typically aware of, like days or years. I offer for your consideration that SAMs and SAH cyborgs will adopt changes in entropy as their measure of time. What do you think?