

Will Humanity Survive the 21st Century?

In my last post, I stated, “In making the above predictions [about the singularity], I made one critical assumption. I assumed that humankind would continue the ‘status quo.’ I am ruling out world-altering events, such as large asteroids striking Earth, leading to human extinction, or a nuclear exchange that renders civilization impossible. Is assuming the ‘status quo’ reasonable? We’ll discuss that in the next post.”

Let’s now discuss if humanity will survive the 21st century.

The events most people think of as causes of human extinction, such as a large asteroid impact or a volcanic eruption of sufficient magnitude to cause catastrophic climate change, actually have a relatively low probability of occurring, on the order of 1 in 50,000 or less, according to numerous estimates found via a simple Google search. In 2008, experts surveyed at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction over the next century, ranking the five most probable causes of human extinction by 2100 as:

  1. Molecular nanotechnology weapons – 5% probability
  2. Super-intelligent AI – 5% probability
  3. Wars – 4% probability
  4. Engineered pandemic – 2% probability
  5. Nuclear war – 1% probability

All other existential events were below 1%. Again, a simple Google search may turn up different estimates from different “experts.” If we take the above survey at face value, it suggests that the risk of an existential event accumulates with time: the 19% figure applies to the entire century, so the risk over any shorter window, such as the next 30 years, is considerably smaller. This has led me to the conclusion that human survival over the next 30 years is highly probable. A rough way to quantify that reasoning follows below.
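
The sketch below, in Python, makes that reasoning concrete. It is only a rough illustration under an assumption of my own, not something stated in the survey: that the 19% century-level risk corresponds to a constant annual hazard rate, so shorter horizons carry proportionally less risk.

```python
# Rough illustration only. Assumption (mine, not from the survey): the 19%
# extinction risk over the next century comes from a constant annual hazard,
# so survival over t years is (century survival)^(t/100).

CENTURY_EXTINCTION_RISK = 0.19            # 2008 Oxford survey figure
century_survival = 1 - CENTURY_EXTINCTION_RISK

annual_hazard = 1 - century_survival ** (1 / 100)   # ~0.21% per year
survival_30yr = century_survival ** (30 / 100)      # ~93.9%

print(f"Implied annual hazard: ~{annual_hazard:.2%}")
print(f"Implied 30-year survival probability: ~{survival_30yr:.1%}")
```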

It is interesting to note that in the 2008 Global Catastrophic Risk Conference survey, super-intelligent AI ties with molecular nanotechnology weapons for the number-one spot. In my view, molecular nanotechnology weapons and super-intelligent AI are two sides of the same coin. In fact, I judge that super-intelligent AI will be instrumental in developing molecular nanotechnology weapons. I also predict that humanity, in some form, will survive until the year 2100. However, I predict that it will include both humans with strong artificially intelligent brain implants and organic humans (i.e., those with no brain implants to enhance their intelligence), though members of either group may have some artificially intelligent body parts.

Let me summarize. Based on the above information, it is reasonable to judge that humanity will survive through the 21st century.


The Inevitability Of A Computer Smarter Than Humanity

In my last post, I predicted that the world would experience the singularity, the emergence of an artificially intelligent machine that exceeds the combined cognitive intelligence of the entire human race, between 2040 and 2045. In this post, I will delineate my predictions leading to the singularity. Please note their simplicity. I have worked hard to strip away all non-essential elements and focus only on those that represent the crucial steps leading to the singularity. I will state my rationale, and you can judge whether to accept or reject each prediction. Here are my predictions:

Prediction 1: Computer hardware with computational power greater than that of a human brain (estimated at 36.8 petaflops) will be in the hands of governments and wealthy companies by the early 2030s.

Rationale: My reasoning is straightforward. We are already at the point where governments operate computers whose computational power approaches that of the human brain: IBM’s Sequoia (16.32 petaflops), Cray’s Titan (17.59 petaflops), and China’s Tianhe-2 (33.86 petaflops). Given the state of current computer technology, we can use Moore’s law to reach the conclusion that by the early 2030s, governments and wealthy companies will own supercomputers with computational capability greater than that of a human brain.
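
To make the extrapolation concrete, here is a minimal sketch in Python. The two-year doubling period and the use of Tianhe-2’s 2013 figure as the starting point are illustrative assumptions of mine, not figures from the sources above.

```python
# Minimal sketch of the Moore's-law extrapolation behind Prediction 1.
# Assumptions (mine): performance doubles roughly every two years,
# starting from Tianhe-2's 33.86 petaflops in 2013.

BRAIN_PFLOPS = 36.8                      # human-brain estimate cited above
START_YEAR, START_PFLOPS = 2013, 33.86   # Tianhe-2

def projected_pflops(year: int, doubling_years: float = 2.0) -> float:
    """Project peak supercomputer performance for a given year."""
    return START_PFLOPS * 2 ** ((year - START_YEAR) / doubling_years)

for year in (2020, 2025, 2030, 2035):
    ratio = projected_pflops(year) / BRAIN_PFLOPS
    print(f"{year}: ~{projected_pflops(year):,.0f} petaflops (~{ratio:,.0f}x the brain estimate)")
```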

Prediction 2: Software will exist that not only emulates but also exceeds the cognitive processes of the human brain by the early 2040s.

Rationale: Although no computer-software combination has passed the Turing test (i.e., essentially, conversing with the computer is indistinguishable from conversing with another human), several have come close. For example, in 2014, a program called Eugene convinced 10 of 30 judges in a test held at the Royal Society that it was human. Given Moore’s law, by 2025 computer processing power will have increased more than a hundredfold over the machines cited above. I view Moore’s law as applicable in a larger context than raw computer processing power; I believe it is an observation about the trend of human creativity as it applies to technology.

However, is Moore’s law applicable to software improvement? Historically, software development has not followed Moore’s law, and the reason was funding. Computer hardware costs dominated the budgets of most organizations, so software traditionally took a backseat to hardware. That trend is changing. With the advent of ubiquitous, cost-effective computer hardware, there is more focus on producing high-quality software. This emphasis led to the development of software engineering, which since the early 1980s has become widely recognized as a profession on par with other engineering disciplines, and numerous companies and government agencies now employ highly educated software engineers. As a result, state-of-the-art computer software is closing the gap and becoming a near-follower of state-of-the-art computer hardware.

How near? Based on my judgment, which I offer only as a rough estimate, software prowess is approximately one decade behind computer processing power. My rationale is straightforward: even if computer hardware and software receive equal funding, hardware will still lead software, simply because the hardware must exist before more sophisticated software can run on it. Is my estimate that software lags hardware by ten years correct? If anything, I think it is conservative. If you agree, it is reasonable to accept that vastly more capable computer software will follow the vastly increased computer processing power within a decade. On that basis, it is not a stretch to judge that one or more computers will pass the Turing test by 2025-2030. Even if software development progresses on a linear trend, as opposed to the exponential trend predicted by Moore’s law, we can expect computer software to improve tenfold from 2030 to 2040. In my judgment, this will be sufficient to exceed the cognitive processes of the human brain. A back-of-the-envelope version of this arithmetic follows below.
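
The sketch below runs the numbers, again in Python, under my own illustrative assumption of a two-year Moore’s-law doubling period counted from the early-2010s machines cited earlier.

```python
# Back-of-the-envelope check of the figures in Prediction 2's rationale.
# Assumption (mine): Moore's law as a doubling of processing power every
# two years, counted from roughly 2011.

def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years` of Moore's-law doubling."""
    return 2 ** (years / doubling_period)

# "More than a hundredfold by 2025": ~14 years of doubling gives ~128x.
print(f"2011 -> 2025: ~{moores_law_factor(2025 - 2011):.0f}x")

# Exponential growth over the 2030-2040 decade would be ~32x; the rationale
# above conservatively assumes software improves only tenfold in that decade.
print(f"Exponential gain over one decade: ~{moores_law_factor(10):.0f}x")
print("Assumed linear software gain, 2030-2040: 10x")
```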

Prediction 3: A computer will be developed in the 2040-2045 timeframe that exceeds the cognitive intelligence of all humans on Earth.

Rationale: This last prediction is, in effect, predicting the timeframe of the singularity. It requires that predictions 1 and 2 be correct, and that a database representing all human knowledge be available to store in a computer’s memory. To understand this last point, consider a hypothetical question: will there be a digital database by the early 2040s equivalent to all knowledge known to humanity? In my view, the answer is yes. Databases like this almost exist today. For example, consider the data that Google has indexed. In addition to indexing online content, Google began an ambitious project in 2004 to scan and index the world’s paper books and make them searchable online. If we assume Google completes this task by 2040, its database would contain all the information in books up to that point as well as all online information. Would that be all the knowledge of humanity? Perhaps! There is no way of knowing whether Google alone will be the digital repository of all human knowledge in 2040.

The crucial point is that there are likely to be digital databases in 2040 that, if integrated, represent the totality of human knowledge; Google’s may be only one of them. These databases can be stored in a computer’s memory. With early-2040s state-of-the-art software, a supercomputer of that era will be able to access those databases and cognitively exceed the intelligence of the entire human race, which is, by definition, the point of the singularity.

Many contemporary futurists predict numerous details leading to the singularity and attempt to attach a timeframe to each detail. I have set that approach aside, since those details, such as computer brain implants, nanotech-based manufacturing, and a laundry list of other technological marvels, are not needed to predict the singularity itself. I think predicting the singularity requires accurately predicting only the three events delineated above. As simple as they appear, they satisfy two crucial requirements: one, they are necessary, and two, they are sufficient to predict the singularity.

In making the above predictions, I made one critical assumption. I assumed that humankind would continue the “status quo.” I am ruling out world-altering events, such as large asteroids striking Earth, leading to human extinction, or a nuclear exchange that renders civilization impossible. Is assuming the “status quo” reasonable? We’ll discuss that in the next post.


Predicting the Singularity

Futurists differ on the technical marvels and cultural changes that will precede the singularity. In this context, let us define the singularity as a point in time when an artificially intelligent machine exceeds the combined cognitive intelligence of the entire human race. In effect, there is no widely accepted vision of the decade leading to the singularity. There are reasons why this is the case.

The most obvious reason is that futurists differ on when the singularity will occur. Respected artificial intelligence futurists, like Ray Kurzweil and the late James Martin (1933–2013), predict the singularity will occur in or around 2045. At the 2012 Singularity Summit, Stuart Armstrong, a University of Oxford James Martin research fellow, conducted a poll regarding artificial general intelligence (AGI) predictions (i.e., the timing of the singularity) and found a median value of 2040. If you scour the Internet, you can find predictions that are substantially earlier and a century later. Therefore, let me preface everything I say with “caveat emptor,” Latin for “Let the buyer beware.” In this context, you may interpret it as “Let the reader be skeptical.” Although I strongly believe that my predictions regarding the singularity are correct, I also urge readers to be skeptical and to examine each prediction using their own judgment to ascertain its validity.

After much research and thought, I have concluded that the world will experience the singularity between 2040 and 2045. In effect, I agree with Kurzweil, Martin, and the 2012 Armstrong survey. That suggests the singularity will occur within the next twenty-five years. In the next post, I’ll explain how I arrived at this projection.


China’s Laser Weapons

This is an edited excerpt from my new book, War At The Speed Of Light.

Significant evidence indicates that China is developing laser weapons. Jane’s 360 reported, “Chinese media have reported that a prototype laser weapon is being tested by the People’s Liberation Army Navy (PLAN). An article published on 5 April [2019] on the Sina news website contains several screengrabs taken from footage broadcast by China Central Television (CCTV) showing a trainable optical device mounted on a mobile chassis with a large main lens.”

China’s laser weapon appeared in a promotional video broadcast by the state-run channel CCTV. The footage shows it in a ground-based, vehicle-mounted application. According to Sina.com, China intends both land and sea deployment, including aboard its destroyers, as an alternative to its short-range surface-to-air missiles. This last statement implies the weapon has a range of about three miles. Beyond talking about potential applications, China provides no evidence of the laser’s capabilities.

China is using espionage to obtain any information it can on the US Navy’s developments. The Maritime Executive, a source for breaking maritime and marine news, reported, “[The] U.S. Navy has uncovered evidence of widespread and persistent hacking by Chinese actors targeting naval technology. According to a recent internal review ordered by Navy Secretary Richard Spencer, the service’s broader R&D ecosystem is ‘under cyber siege,’ primarily by Chinese hacking teams.”

My view is that China is doing all within its capability to develop laser weapons. Given its tenacity in hacking its way into the most critical US intelligence, combined with its government’s funding of advanced weapons, it is only a matter of time before China weaponizes lasers. Indeed, according to ZeeNews, “The Indian and US satellites are vulnerable to China’s ground-based lasers as according to some analysts China has acquired the full capability to destroy the enemy’s satellite sensors through its lasers. China can cause great damage to Indian and US satellites during wartime.” If this last statement is true, it means China has become a laser power.


How Will We Know If Artificial Intelligence Equals Human Intelligence?

Today, we find many different opinions regarding what constitutes human intelligence. There is no one widely accepted answer. Here are two definitions that have found some acceptance among the scientific community.

  1. “A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—‘catching on,’ ‘making sense of things,’ or ‘figuring out’ what to do” (“Mainstream Science on Intelligence,” an editorial statement by fifty-two researchers, The Wall Street Journal, December 13, 1994).
  2. “Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by thinking. Although these individual differences can be substantial, they are never entirely consistent: a given person’s intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of ‘intelligence’ attempt to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions.” (“Intelligence: Knowns and Unknowns,” a report published by the Board of Scientific Affairs of the American Psychological Association, 1995).

Now that we have some basis for defining human intelligence, let us attempt to define a test that we could use to assert that artificial intelligence emulates human intelligence.

Alan Turing is widely considered the father of theoretical computer science and artificial intelligence. He became prominent for his pivotal role in developing the electromechanical machines that cracked the daily settings of the Enigma machine, Germany’s technology for coding messages during World War II. This breakthrough allowed the Allies to defeat the Nazis in many crucial engagements, and some credit Turing’s work as a cryptanalyst with shortening the war in Europe by as much as two to four years.

After World War II, in 1950, Turing turned his attention to artificial intelligence and proposed the now-famous Turing test, a methodology for testing the intelligence of a computer. The Turing test requires a human “judge” to engage both a human and a computer with strong AI in a natural-language conversation, with none of the participants able to see one another. If the judge cannot distinguish the human from the strong-AI computer, the computer passes the Turing test and its intelligence is deemed equivalent to a human’s. The test does not require that the answers be correct, just indistinguishable. Passing the Turing test requires nearly all the major capabilities associated with strong AI to be equivalent to those of a human brain. It is a challenging test, and to date, no intelligent agent has passed it. Over the years, however, there have been numerous attempts, with associated claims of success. Here is a summary of the major ones:

  • In 1966, Joseph Weizenbaum created the ELIZA program, which examined a user’s typed comments for keywords. If the program found a keyword, its algorithm applied a rule to return a reply. Although Weizenbaum and others claimed success, the claim is highly contentious. In effect, this is the same type of algorithm (i.e., a set of rules a computer follows in problem-solving operations) that early search engines used to rank search returns before Google’s use of “link popularity” (i.e., the number of links pointing to a website using an embedded keyword) to improve search-return relevance. A minimal sketch of this kind of keyword-rule matching appears after this list.
  • In 1972, Kenneth Colby created PARRY, which was characterized as “ELIZA with attitude.” The PARRY program took the ELIZA algorithm and additionally modeled the behavior of a paranoid schizophrenic. Once again, the results were disappointing. It was not able to consistently convince professional psychiatrists that it was a real patient.
  • In 2014, the developers of a program called Eugene claimed it had passed the Turing test. However, their claim turned out to be bogus. Eugene convinced 10 of 30 judges in a test held at the Royal Society that it was human, but although the result is still argued, there is a strong consensus, based on the test conditions and results, that Eugene did not pass the Turing test.
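
To make the keyword-rule idea concrete, here is a toy sketch in Python of ELIZA-style matching. The patterns and canned replies are my own illustrative stand-ins, not Weizenbaum’s original script.

```python
# Toy sketch of ELIZA-style keyword-rule matching, as described above.
# The patterns and canned replies are illustrative inventions, not
# Weizenbaum's original script.
import re

RULES = [
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
]
DEFAULT_REPLY = "Please go on."

def reply(user_input: str) -> str:
    """Return the canned reply for the first keyword rule that matches."""
    for pattern, template in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(reply("I feel anxious about the future"))  # Why do you feel anxious about the future?
print(reply("Nothing much happened today"))      # Please go on.
```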

Although other tests claim to go beyond the Turing test, no new test has gained wide support in the scientific community. Therefore, even today, the Turing test remains the gold standard for judging whether an AI machine emulates human intelligence. Despite recent claims to the contrary, no AI machine has been able to pass it.