Category Archives: Artificial Intelligence


What Caused the Second “AI Winter”?

In our last post, we stated, “When the early founders of AI set extremely high expectations, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up, both in the United States and Britain, which led to a period called the ‘AI Winter,’ and optimism regarding AI turned to skepticism. The first AI Winter lasted until the early 1980s.”

In the early 1980s, AI researchers began to abandon the monumental task of developing strong AI and focused instead on expert systems. An expert system, in this context, is a computer system that emulates the decision-making ability of a human expert. In effect, the software allowed a machine to “think” like an expert in a specific field, chess for example. Expert systems became a highly successful development path for AI. By the mid-1980s, the funding faucet for AI research was flowing at more than a billion dollars per year.
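To make the idea concrete, here is a minimal sketch of the if-then rule chaining at the heart of classic expert systems. The rules and facts below are invented for illustration; real systems of the era encoded thousands of rules elicited from human experts.

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all known facts,
    adding its conclusion, until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical diagnostic rules, purely illustrative.
rules = [
    ({"fever", "cough"}, "possible_flu"),
    ({"possible_flu", "body_aches"}, "recommend_rest"),
]

derived = forward_chain({"fever", "cough", "body_aches"}, rules)
print("recommend_rest" in derived)  # the chained conclusion was reached
```

The key point is the narrowness: the system “thinks” only within the domain its rules cover, which is precisely why expert systems succeeded where strong AI had stalled.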

Unfortunately, the funding faucet began to run dry again by 1987, starting with the collapse of the Lisp machine market that year. MIT AI Lab programmers Richard Greenblatt and Thomas Knight developed the first Lisp machines beginning in 1973; Greenblatt later founded Lisp Machines Inc. to commercialize them. The Lisp machine was the first commercial, single-user, high-end microcomputer, designed to run Lisp (a high-level programming language favored by AI researchers) for demanding technical applications.

Lisp machines pioneered many now-commonplace technologies, including laser printing, windowing systems, and high-resolution bit-mapped graphics. However, the market reception for these machines was dismal: only about seven thousand units had been sold by 1988, at roughly $70,000 per machine. In addition, Lisp Machines Inc. suffered from severe internal politics over how to improve its market position, which divided the company. To make matters worse, cheaper desktop PCs soon were able to run Lisp programs even faster than Lisp machines. Most companies that produced Lisp machines went out of business by 1990, which led to a second, longer-lasting AI Winter.

If you are getting the impression that being an AI researcher from the 1960s through the late 1990s was akin to riding a roller coaster, your impression is correct. Life for AI researchers during that timeframe was a feast or famine-type existence.

While AI as a field of research experienced funding surges and recessions, the infrastructure that ultimately fuels AI (integrated circuits and computer software) continued to follow Moore’s law. In the next post, we’ll discuss Moore’s law and its role in ending the second AI Winter.


What Caused the First “AI Winter”?

The real science of artificial intelligence (AI) began with a small group of researchers: John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. In 1956, these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work and their students’ work soon amazed the world, as their computer programs taught computers to solve algebraic word problems, prove logical theorems, and even speak English.

By the mid-1960s, the Department of Defense began pouring money into AI research. Along with this funding came unprecedented optimism and expectations regarding the capabilities of AI technology. In 1965, Herbert Simon helped fuel that optimism by predicting, “Machines will be capable, within twenty years, of doing any work a man can do.” In 1967, Minsky not only agreed but added, “Within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”

Had the early founders been correct in their predictions, all human toil would have ceased by now, and our civilization would be a showcase of technological wonder. One can imagine every person with a robotic assistant to ease their daily chores: cleaning the house, driving them to any destination, and handling everything else that fills our days with toil. However, as you know, that is not the case.

Obviously, Simon and Minsky had grossly underestimated the hardware and software required to achieve AI that replicates the intelligence of a human brain (i.e., strong artificial intelligence, also known as artificial general intelligence). Unfortunately, underestimating the hardware and software required to achieve strong artificial intelligence continues to plague AI research even today.

When the early founders of AI set extremely high expectations, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974, funding for AI research began to dry up, both in the United States and Britain, which led to a period called the “AI Winter,” and optimism regarding AI turned to skepticism.

The first AI Winter lasted until the early 1980s. In the next post, we’ll discuss the second AI Winter.


Will Humanity Survive The 21st Century?

Examples of events that most people think could cause humanity’s extinction are a large asteroid impact or a volcanic eruption of sufficient magnitude to cause catastrophic climate change. Although possible, these events have a relatively low probability of occurring, on the order of one in fifty thousand or less, according to numerous estimates found via a simple Google search.

However, other events have higher probabilities of causing human extinction. In 2008, experts surveyed at the Global Catastrophic Risk Conference at the University of Oxford estimated a 19 percent chance of human extinction over this century, ranking the five most probable causes of human extinction by 2100 as:

  1. Molecular nanotechnology weapons (i.e., nanoweapons): 5 percent probability
  2. Superintelligent AI: 5 percent probability
  3. Wars: 4 percent probability
  4. Engineered pandemic: 2 percent probability
  5. Nuclear war: 1 percent probability
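Summing the listed probabilities makes the structure of the 19 percent estimate clear: the top five account for 17 percentage points, with roughly 2 points spread across the remaining sub-1-percent risks. A quick check:

```python
# The top-five probabilities (in percent) as cited by the 2008 survey.
top_five = {
    "nanoweapons": 5,
    "superintelligent_ai": 5,
    "wars": 4,
    "engineered_pandemic": 2,
    "nuclear_war": 1,
}

total_top_five = sum(top_five.values())
print(total_top_five)  # 17, out of the survey's 19 percent overall estimate
```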

All other existential events were below 1 percent. There is a subtle point the survey does not explicitly express, namely, that the risk of human extinction increases with time. You may wonder, Why? To answer this question, consider these examples:

  • Nanoweapons and superintelligence become more capable with each successive generation. In the 2008 Global Catastrophic Risk Conference survey, superintelligent AI ties with molecular nanotechnology weapons as the most probable potential cause of human extinction. In my view, molecular nanotechnology weapons and superintelligent AI are two sides of the same coin. In fact, I judge that superintelligent AI will be instrumental in developing molecular nanotechnology weapons.
  • In my new book, War At The Speed Of Light, I devoted a chapter to autonomous directed energy weapons. These are weapons that take hostile action on their own, which can result in unintended conflicts. Unfortunately, current autonomous weapons don’t embody human judgment. This being the case, wars, including nuclear wars, become more probable as more autonomous weapons are deployed.
  • Lastly, the world is currently facing a coronavirus pandemic. Although most researchers believe this is a naturally occurring pandemic, it has infected 121,382,067 people and caused 2,683,209 deaths worldwide to date, which suggests a fatality rate of a little over 2 percent. However, a virus that was more infectious and more deadly could render the Earth a barren wasteland. Unfortunately, that is what an engineered pandemic might do.
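The fatality-rate figure in the last bullet follows directly from the cited counts:

```python
# Case and death counts as quoted in the post (worldwide totals to date).
cases = 121_382_067
deaths = 2_683_209

rate = deaths / cases
print(f"{rate:.1%}")  # a little over 2 percent
```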

To my eye, the top five potential causes surfaced by the Global Catastrophic Risk Conference at the University of Oxford in 2008 are all possible, and the probabilities associated with them appear realistic. This means that humanity has a 19 percent chance of not surviving the 21st century on our current course.

In the next post, I will suggest measures humanity can take to increase the probability they will survive into the 22nd century.


Artificial Intelligence Is Changing Our Lives And The Way We Make War

Artificial intelligence (AI) surrounds us. However, much the same way we seldom read billboards as we drive, we seldom recognize AI. Even when we use technology like our car’s GPS to get directions, we do not recognize that AI is at its core. Our phones use AI to remind us of appointments or engage us in a game of chess. However, we seldom, if ever, use the phrase “artificial intelligence.” Instead, we use the term “smart.” This is not the result of some master plan by the technology manufacturers. It is more a statement regarding the status of the technology.

From the late 1990s through the early part of the twenty-first century, AI research began a resurgence. Smart agents found new applications in logistics, data mining, medical diagnosis, and numerous areas throughout the technology industry. Several factors led to this success:

  • The computational power of computer hardware was now getting closer to that of a human brain (in the best case, about 10 to 20 percent of a human brain).
  • Engineers placed emphasis on solving specific problems that did not require AI to be as flexible as a human brain.

New ties between AI and other fields working on similar problems were forged. AI was definitely on the upswing. AI itself, however, was not in the spotlight. It lay cloaked within the application, and a new phrase found its way into our vocabulary: the “smart (fill in the blank)”—for example, we say the “smartphone.”

AI is now all around us, in our phones, computers, cars, microwave ovens, and almost any commercial or military system labeled “smart.” According to Nick Bostrom, a University of Oxford philosopher known for his work on superintelligence risks, “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore” (“AI Set to Exceed Human Brainpower,” CNN.com, July 26, 2006). Ray Kurzweil agrees. He said, “Many thousands of AI applications are deeply embedded in the infrastructure of every industry” (Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology [2005]). The above makes two important points:

  1. AI is now part of every aspect of human endeavor, from consumer goods to weapons of war, but the applications are seldom credited to AI.
  2. Both government and commercial applications now broadly underpin AI funding.

AI startups raised $73.4 billion in total funding in 2020 according to data gathered by StockApps.com. Well-established companies like Google are spending tens of billions on AI infrastructure. Google has also spent hundreds of millions on secondary AI business pursuits, such as driverless cars, wearable technology (Google Glass), humanlike robotics, high-altitude Internet broadcasting balloons, contact lenses that monitor glucose in tears, and even an effort to solve death.

In essence, the fundamental trend in both consumer and military AI systems is toward complete autonomy. Today, for example, one in every three US fighter aircraft is a drone. Today’s drones are under human control, but the next generation of fighter drones will be almost completely autonomous. Driverless cars, now a novelty, will become common. You may find this difficult or even impossible to believe. However, look at today’s AI applications. The US Navy plans to deploy unmanned surface vehicles (USVs) not only to protect navy ships but also, for the first time, to autonomously “swarm” hostile vessels offensively. In my latest book, War At The Speed Of Light, I devoted a chapter to autonomous directed energy weapons. Here is an excerpt:

The reason for building autonomous directed energy weapons is identical to the rationale for other autonomous weapons. According to Military Review, the professional journal of the US Army, “First, autonomous weapons systems act as a force multiplier. That is, fewer warfighters are needed for a given mission, and the efficacy of each warfighter is greater. Next, advocates credit autonomous weapons systems with expanding the battlefield, allowing combat to reach into areas that were previously inaccessible. Finally, autonomous weapons systems can reduce casualties by removing human warfighters from dangerous missions.”

What is making this all possible? It is the relentless exponential growth in computer performance. According to Moore’s law, computer-processing power doubles every eighteen months. Applying Moore’s law with simple mathematics suggests that in ten years, the processing power of our personal computers will be more than a hundred times greater than that of the computers we currently use. Military and consumer products using top-of-the-line computers running state-of-the-art AI software will likely exceed desktop performance by further factors of ten. In effect, the artificial intelligence in such systems may be equivalent to human intelligence. However, will it be equivalent to human judgment? I fear not, and autonomous weapons may lead to unintended conflicts, conceivably even World War III.
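The arithmetic behind the “more than a hundred times” claim is easy to check, assuming an eighteen-month doubling period (one common formulation of Moore’s law; Moore’s original observation concerned transistor counts rather than processing power):

```python
# Ten years of doubling every 1.5 years: 2^(10 / 1.5) ≈ 2^6.67
doubling_period_years = 1.5
years = 10

growth = 2 ** (years / doubling_period_years)
print(round(growth))  # roughly a hundredfold increase
```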

I recognize this last paragraph represents dark speculations on my part. Therefore, let me ask you, What do you think?


Should We Consider Strong Artificially Intelligent Machines (SAMs) A New Life-Form?

What is a strong artificially intelligent machine (SAM)? It is a machine whose intelligence equals that of a human being. Although no SAM currently exists, many artificial intelligence researchers project SAMs will exist by the mid-twenty-first century. This has major implications and raises an important question, Should we consider SAMs a new life-form? Numerous philosophers and AI researchers have addressed this question. Indeed, the concept of artificial life dates back to ancient myths and stories. The best known of these is Mary Shelley’s novel Frankenstein, published in 1818. It was not until 1986, however, that American computer scientist Christopher Langton formally established the scientific discipline that studies artificial life (i.e., A-life).

No current definition of life considers any A-life simulations to be alive in the traditional sense (i.e., constituting a part of the evolutionary process of any ecosystem). That view of life, however, is beginning to change as artificial intelligence comes closer to emulating a human brain. For example, Hungarian-born American mathematician John von Neumann (1903–1957) asserted, “life is a process which can be abstracted away from any particular medium.” In effect, this suggests that strong AI represents a new life-form, namely A-life.

In the early 1990s, ecologist Thomas S. Ray asserted that his Tierra project, a computer simulation of artificial life, did not simulate life in a computer but synthesized it. This raises the question, How do we define A-life?

The earliest description of A-life that comes close to a definition emerged from an official conference announcement in 1987 by Christopher Langton, published subsequently in the 1989 book Artificial Life: The Proceedings of an Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems:

Artificial life is the study of artificial systems that exhibit behavior characteristics of natural living systems. It is the quest to explain life in any of its possible manifestations, without restriction to the particular examples that have evolved on Earth. This includes biological and chemical experiments, computer simulations, and purely theoretical endeavors. Processes occurring on molecular, social, and evolutionary scales are subject to investigation. The ultimate goal is to extract the logical form of living systems.

There is little doubt that both philosophers and scientists lean toward recognizing A-life as a new life-form. For example, noted science fiction writer Sir Arthur Charles Clarke (1917–2008) wrote in his book 2010: Odyssey Two, “Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.” Noted cosmologist and physicist Stephen Hawking (1942–2018) darkly speculated during a speech at the Macworld Expo in Boston, “I think computer viruses should count as life. I think it says something about human nature that the only form of life we have created so far is purely destructive. We’ve created life in our own image” (Daily News, August 4, 1994). The main point is that we are likely to consider strong AI a new form of life.

After reading this post, What do you think?


Artificial Intelligence Threatens Human Extinction

While researching my new book, War At The Speed Of Light, I surfaced some important questions regarding the threat artificial intelligence poses to humanity. For example, Will your grandchildren face extinction? Even worse, will they become robotic slaves to a supercomputer?

Humanity is facing its greatest challenge, artificial intelligence (AI). Recent experiments suggest that even primitive artificially intelligent machines can learn deceit, greed, and self-preservation without being programmed to do so. There is alarming evidence that artificial intelligence, without legislation to police its development, will displace humans as the dominant species by the end of the twenty-first century.

There is no doubt that AI is the new scientific frontier, and it is making its way into many aspects of our lives. Our world includes “smart” machines with varying degrees of AI, including touch-screen computers, smartphones, self-parking cars, smart bombs, heart pacemakers, and brain implants to treat Parkinson’s disease. In essence, AI is changing the cultural landscape, and we are embracing it at an unprecedented rate. Currently, humanity is largely unaware of the potential dangers that strong artificially intelligent machines pose. In this context, the word “strong” signifies AI greater than human intelligence.

Most of humanity perceives only the positive aspects of AI technology. This includes robotic factories, like those of Tesla Motors, which manufactures ecofriendly electric cars, and the da Vinci Surgical System, a robotic platform designed to expand the surgeon’s capabilities and offer a state-of-the-art minimally invasive option for major surgery. These are only two of many examples of how AI is positively affecting our lives.

However, there is a dark side. For example, Gartner Inc., a technology research group, forecasts robots and drones will replace a third of all workers by 2025. Could AI create an unemployment crisis? As AI permeates the medical field, the average human life span will increase. Eventually, strong artificially intelligent humans (SAHs), with AI brain implants to enhance their intelligence and cybernetic organs, will become immortal. Will this exacerbate the worldwide population crisis already surfaced as a concern by the United Nations?

By 2045, some AI futurists predict that a single strong artificially intelligent machine (SAM) will exceed the cognitive intelligence of the entire human race. How will SAMs view us? Objectively, humanity is an unpredictable species. We engage in wars, develop weapons capable of destroying the world, and maliciously release computer viruses. Will SAMs view us as a threat? Will we maintain control of strong AI, or will we fall victim to our own invention?

I recognize that this post raises more questions than answers. However, I thought it important to share these questions with you. In my new book, War At The Speed Of Light, I devote an entire chapter to autonomous directed energy weapons. I surface these questions, Will autonomous weapons replace human judgment and result in unintended devastating conflicts? Will they ignite World War III? I also provide recommendations to avoid these unintended conflicts. For more insight, browse the book on Amazon.


Press Release: New Book Reveals Arms Race for Genius Weapons and Their Threat to Humanity

Amherst, NY (November 6, 2018) – The first book in its genre, Genius Weapons: Artificial Intelligence, Autonomous Weaponry, and the Future of Warfare (Prometheus Books, November 6, 2018) by Louis A. Del Monte, delineates the new arms race between the United States, China, and Russia to develop genius weapons: weapons whose artificial intelligence greatly exceeds human intelligence and whose destructive force exceeds that of nuclear weapons.

Artificial intelligence is playing an ever-increasing role in military weapon systems. The Pentagon is now in a race with China and Russia to develop “lethal autonomous weapon systems” (LAWS). In this eye-opening overview, a physicist, technology expert, and former Honeywell executive examines the advantages and the potential threats to humanity resulting from the deployment of weapons guided by superintelligent computers (i.e., genius weapons). Stressing the likelihood that these weapons will be available in the coming decades since no treaty regulates their development and deployment, the author examines the future of warfare and the potential for genius weapons to initiate a war that threatens the extinction of humanity.

“A highly readable and deeply researched exploration of one of the most chilling aspects of the development of artificial intelligence: the creation of intelligent, autonomous killing machines. In Louis A. Del Monte’s view, the multibillion dollar arms industry and longstanding rivalries among nations make the creation of autonomous weapons extremely likely,” said James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era.

In his insightful and prescient account of genius weapons, Del Monte uses vivid scenarios that immerse the reader in the ethical dilemmas and existential threats posed by these weapons. Based on hard science and political realities, the book warns that the dystopian visions of such movies as The Terminator and I, Robot may become a frightening reality in the future. The author concludes with concrete recommendations, founded in historical precedent, to control this new arms race.

Mr. Del Monte is available for interviews. You may contact him by phone at (952) 261-4532, or by email at ldelmonte@delmonteagency.com.

Louis A. Del Monte is an award-winning physicist, author, inventor, futurist, featured speaker, and CEO of Del Monte and Associates, Inc. For over thirty years, he was a leader in the development of microelectronics and microelectromechanical systems (MEMS) for IBM and Honeywell. As a Honeywell Executive Director from 1982 to 2001, he led hundreds of physicists, engineers, and technology professionals engaged in integrated circuit and sensor technology development for both Department of Defense (DOD) and commercial applications. He is literally a man whose career has changed the way we work, play, and make war. Del Monte is the recipient of the H.W. Sweatt Award for scientific engineering achievement and the Lund Award for management excellence. He is the author of international bestsellers like Nanoweapons and The Artificial Intelligence Revolution. He has been quoted or has published articles in the Huffington Post, the Atlantic, Business Insider, American Security Today, Inc., and on CNBC. He has appeared on the History Channel.
