Stephen Hawking Proposes Nanotechnology Spacecraft to Reach ‘Second Earth’ in 20 Years

Renowned physicist Stephen Hawking is proposing a nanotechnology spacecraft that can travel at a fifth of the speed of light. At that speed, it could reach the nearest star in 20 years and send back images of a suspected “Second Earth” within another 5 years. That means if we launched it today, we would have our first look at an Earth-like planet within 25 years.
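The arithmetic behind that timeline is easy to check. Below is a minimal sketch, assuming the target is Proxima Centauri at roughly 4.25 light-years; the distance figure is my assumption here, not one from Hawking’s talk:

```python
# Timeline check: transit at a fifth of light speed, data returned at light speed.
distance_ly = 4.25    # distance to Proxima Centauri in light-years (assumed)
cruise_speed = 0.20   # cruise velocity as a fraction of the speed of light

travel_years = distance_ly / cruise_speed         # ~21 years in transit
signal_return_years = distance_ly                 # images come home at light speed
total_years = travel_years + signal_return_years  # ~25 years, launch to first look

print(f"Transit: {travel_years:.1f} yr, signal return: {signal_return_years:.1f} yr, "
      f"total: {total_years:.1f} yr")
```

These round numbers track the figures above: about 20 years in transit and roughly another 5 years for the images to reach Earth.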

Hawking proposed the nano-spacecraft, termed “StarChip,” at the Starmus Festival IV: Life and the Universe, held in Trondheim, Norway, June 18–23, 2017. Hawking told attendees that every time intelligent life evolves, it annihilates itself with “war, disease and weapons of mass destruction.” He asserted this is the primary reason advanced civilizations from other parts of the Universe are not contacting Earth, and the primary reason we need to leave it. He advocates that we colonize a “Second Earth.”

Scientific evidence appears to support Hawking’s claim. The SETI Institute has been listening for extraterrestrial radio signals, a sign of advanced extraterrestrial life, since 1984. To date, their efforts have been fruitless. SETI notes, rightly, that the universe is vast and they are listening to only small sectors, making the search much like looking for a needle in a haystack. Additional evidence that Hawking may be right about the destructive nature of intelligent life comes from the experts surveyed at the 2008 Global Catastrophic Risk Conference at the University of Oxford, whose poll suggested a 19% chance of human extinction by the end of this century and cited these as the four most probable causes:

  1. Molecular nanotechnology weapons – 5% probability
  2. Super-intelligent AI – 5% probability
  3. Wars – 4% probability
  4. Engineered pandemic – 2% probability

Hawking envisions the nano-spacecraft as a tiny probe propelled on its journey by a laser beam from Earth, much the same way wind propels sailing vessels. As Hawking put it: “Once there, the nano craft could image any planets discovered in the system, test for magnetic fields and organic molecules, and send the data back to Earth in another laser beam.”
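To see why a laser sail is plausible for so small a craft, here is a minimal sketch of the photon-pressure physics. The beam power and craft mass are illustrative assumptions, loosely in line with published light-sail proposals, not figures Hawking gave:

```python
C = 3.0e8            # speed of light, m/s
laser_power = 100e9  # beam power intercepted by the sail, watts (assumed)
craft_mass = 1e-3    # sail plus chip, kg (assumed)

# A perfectly reflecting sail feels a force of 2P/c from photon pressure.
force = 2 * laser_power / C
acceleration = force / craft_mass        # Newton's second law
target_speed = 0.2 * C                   # a fifth of light speed
burn_time = target_speed / acceleration  # non-relativistic estimate

print(f"Thrust: {force:.0f} N, acceleration: {acceleration:.2e} m/s^2")
print(f"Time under the beam to reach 0.2c: about {burn_time:.0f} seconds")
```

Under these assumptions the craft reaches a fifth of light speed in under two minutes of illumination; the same beam pushing a tonne-scale spacecraft would need years of continuous, perfectly aimed firing.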

Would Hawking’s nano-spacecraft work? Based on the research I performed during my career and in preparation for writing my latest book, Nanoweapons: A Growing Threat to Humanity (Potomac Books, 2017), I judge his concept feasible. Moving from concept to a working nano-spacecraft, however, would require significant engineering and funding, likely billions of dollars and decades of work. The underlying technology is already emerging: in Nanoweapons, I described the latest development of bullets containing nanoelectronic guidance systems that allow the bullets to steer themselves, possibly to shoot an adversary hiding around a corner. Prototypes already exist.

Hawking’s concept is compelling. Propelling a larger conventional spacecraft with a laser would not attain the near light speed necessary to reach a distant planet, and propelling it with rockets would also fall short. According to Einstein’s theory of relativity, a conventional spacecraft’s large mass would require close to infinite energy to approach the speed of light. Almost certainly, Hawking proposed a nano-spacecraft for just that reason: its mass would be small, perhaps measured in milligrams, similar to the weight of a typical housefly.
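A rough calculation makes the point. Relativistic kinetic energy scales directly with mass, so the sketch below compares a one-milligram chip against a 400-tonne conventional craft at a fifth of light speed; the 400-tonne figure is my own illustrative assumption, not one from Hawking:

```python
import math

C = 3.0e8  # speed of light, m/s

def kinetic_energy_joules(mass_kg: float, speed_fraction: float) -> float:
    """Relativistic kinetic energy: KE = (gamma - 1) * m * c^2."""
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

for label, mass_kg in [("1 mg nano-craft", 1e-6),
                       ("400 t conventional craft", 4e5)]:
    print(f"{label}: {kinetic_energy_joules(mass_kg, 0.2):.2e} J")
```

The milligram craft needs on the order of a gigajoule; the conventional craft needs roughly 400 billion times more, and the requirement grows without bound as the speed approaches that of light.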

Hawking’s concept represents a unique application of nanotechnology that could give humanity its first up-close look at a habitable planet. What might we see? Perhaps it already harbors advanced intelligent life that chose not to contact Earth, given our hostile nature toward one another. Perhaps it harbors primitive life similar to the beginning of life on Earth. We have no way of knowing without contact.

You may choose to laugh at Hawking’s proposal. However, Hawking is one of the top scientists on Earth and well aware of advances in any branch of science he speaks about. I judge his concerns well founded and his nano-spacecraft concept deserving of serious consideration.

The Impact of Stephen Hawking’s Chronology Protection Conjecture on Time Travel Science

Most of the scientific community agrees that time travel is theoretically possible, based on Einstein’s special and general theories of relativity. However, world-famous cosmologist and physicist Stephen Hawking published a 1992 paper, “Chronology Protection Conjecture,” in which he stated that the laws of physics do not allow the appearance of closed timelike curves (i.e., time travel to the past). Since its publication, the chronology protection conjecture has drawn significant criticism, most of it centered on Dr. Hawking’s use of semiclassical gravity, rather than quantum gravity, to make his arguments. Dr. Hawking acknowledged, in 1998, that portions of the criticism are valid.

Without taking sides on this issue, I feel compelled to point out that the two fundamental pillars of modern science, general relativity and quantum mechanics, are incompatible: they do not come together to provide a theory of quantum gravity. This placed Dr. Hawking in a difficult position when he formulated the chronology protection conjecture, since no complete theory of gravity was available to argue from. It also means we still do not have the whole picture, which makes it difficult to completely rule out Dr. Hawking’s chronology protection conjecture.

Currently, there is no widespread consensus on any theory that unifies general relativity with quantum mechanics. If such a theory existed, it would be the theory of everything (TOE) and would provide us with a theory of quantum gravity. Highly regarded physicists, such as Stephen Hawking, believe M-theory (i.e., membrane theory), the most comprehensive string theory, is a candidate for the theory of everything. However, there is significant disagreement in the scientific community. Many physicists argue that M-theory is not experimentally verifiable and, on that basis, is not a valid scientific theory.

To be fair to all sides, Einstein’s special theory of relativity, published in 1905, was also not experimentally verifiable for years. Today, most of the scientific community views the special theory of relativity as scientific fact, having withstood over one hundred years of investigation. The scientific community, which did not really know what to make of the theory in 1905, now hails it as the “gold standard” of theories, arguing that other theories must measure up to the same standards of rigorous investigation. I think science is better served by a more moderate position. In this regard, I agree with prominent physicist and author Michio Kaku, who stated in Nina L. Diamond’s Voices of Truth (2000): “The strength and weakness of physicists is that we believe in what we can measure. And if we can’t measure it, then we say it probably doesn’t exist. And that closes us off to an enormous amount of phenomena that we may not be able to measure because they only happened once. The Big Bang is an example. That’s one reason why they scoffed at higher dimensions for so many years. Now we realize that there’s no alternative.”

In essence, we need to keep an open mind, regardless of how bizarre a scientific theory may first appear. However, we need to balance our open-mindedness with experimental verification. This, to my mind, is how science advances.

Stephen Hawking Agrees with Me – Artificial Intelligence Poses a Threat!

Two days after the publication of my new book, The Artificial Intelligence Revolution (April 17, 2014), a short article authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek was published in the Huffington Post (April 19, 2014) under the title “Transcending Complacency on Superintelligent Machines.” Essentially, the article warned, “Success in creating AI would be the biggest event in human history…Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Shortly after that publication, the Independent ran an article on May 1, 2014 entitled “Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?”

Recently, another notable artificial intelligence expert, Nick Bostrom, Professor in the Faculty of Philosophy at Oxford University, published his new book, Superintelligence (September 3, 2014), offering a similar warning and addressing the questions: “What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?”

It is unlikely that my book, which offers a similar warning and predates theirs, was their impetus; the interval between my book’s publication, and its rise to number one on Amazon, is too close to their publications. Still, it is entirely possible that they read my book before going public with their warnings. The important point is that highly credible physicists and philosophers came to the same conclusion that formed the premise of my book: artificial intelligence poses a potential threat to the long-term existence of humankind.

Unfortunately, the artificial intelligence field is divided on the issue. Some believe that strong artificial intelligence, also referred to as artificial general intelligence (i.e., intelligence equal to or exceeding human intelligence), will align itself with human goals and ethics. Ray Kurzweil, a well-known authority on artificial intelligence, acknowledges the potential threat exists, but suggests strong artificially intelligent entities will be grateful to humans for giving them existence. My view is that this faction is not looking at the facts. Humans have proven to be a dangerous species. In particular, we have set the world on a path of catastrophic climate change and engaged in wars, even world wars. We now have enough nuclear weapons to wipe out all life on Earth twice over. If you were a strong artificially intelligent entity, would you align with human goals and ethics? That is the crux of the issue. We pose a threat to strong artificially intelligent entities. Why wouldn’t they recognize this and seek to eliminate the threat?

Far-fetched? Impossible? Consider this 2009 experiment performed by the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology in Lausanne. The experiment involved robots programmed to cooperate with one another in searching out a beneficial resource and avoiding a poisonous one. Surprisingly, the robots learned to lie to one another in an attempt to hoard the beneficial resource (“Evolving Robots Learn to Lie to Each Other,” Popular Science, August 18, 2009). Does this experiment suggest the human emotion (or mindset) of greed is a learned behavior? If intelligent machines can learn greed, what else can they learn? Wouldn’t self-preservation be even more important to an intelligent machine? Also, recognize that today’s robots would be roughly eight times more capable than those in 2009, based on Moore’s law (i.e., computer technology doubles in capability every eighteen months), as the calculation below illustrates.
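As a quick check of that factor, here is the doubling arithmetic; the only input is the roughly 4.5-to-5-year gap between the 2009 experiment and this post:

```python
def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Capability multiple after `years`, doubling every `doubling_months`."""
    return 2.0 ** (years * 12.0 / doubling_months)

# Three 18-month doublings fit in the 4.5 years since the 2009 experiment.
print(moores_law_factor(4.5))  # 8.0 -- the 'eight times' figure above
print(moores_law_factor(5.0))  # ~10.1 over a full five years
```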

The concern is real. We already have evidence (i.e., the Lausanne experiment) suggesting that artificially intelligent entities will act in accordance with their own best interests, and that this behavior does not have to be explicitly programmed. The evidence suggests, to my mind, that increased artificial intelligence gives rise to human-like mindsets such as greed and self-preservation.

The time has come for the US to form an oversight committee to address the issue and suggest legislation. However, this is not just a US issue; it is a worldwide issue. For example, China currently has the most capable supercomputer (Tianhe-2). We must address this as a worldwide problem, similar to the way biological weapons and above-ground nuclear testing were addressed. This means it must become one of the United Nations’ top issues.

There is high urgency. Extrapolating today’s artificial intelligence technology using Moore’s law suggests that computers with artificial general intelligence will be built during the 2020–2030 time frame. Further extrapolation suggests computers that exceed the combined cognitive intelligence of all humans on Earth will be built in the 2040–2050 time frame. The time to act is now, while humans are still the dominant species on the planet.
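The same doubling rule shows the capability multiples those dates imply, taking 2014 as the baseline; how large a multiple artificial general intelligence actually requires is, of course, the open question:

```python
def capability_multiple(year: int, base_year: int = 2014,
                        doubling_months: float = 18.0) -> float:
    """Capability multiple over the base year under steady Moore's-law doubling."""
    return 2.0 ** ((year - base_year) * 12.0 / doubling_months)

for year in (2020, 2030, 2040, 2050):
    print(f"{year}: {capability_multiple(year):,.0f}x 2014 capability")
```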