
A glowing green digital face composed of circuit board patterns on a dark, tech-themed background.

The Beginning of Artificial Intelligence – Part 2/2 (Conclusion)

AI research funding was a roller-coaster ride from the mid-1960s through about the mid-1990s, experiencing incredible highs and lows. By the late 1990s through the early part of the twenty-first century, however, AI research began a resurgence, finding new applications in logistics, data mining, medical diagnosis, and numerous areas throughout the technology industry. Several factors led to this success.

  • Computer hardware’s computational power was getting closer to that of a human brain (i.e., in the best case, about 10 to 20 percent of a human brain’s power).

  • Engineers placed emphasis on solving specific problems that did not require AI to be as flexible as a human brain.
  • New ties between AI and other fields working on similar problems were forged.

AI was definitely on the upswing. AI itself, however, was not being spotlighted. It was now cloaked behind the application, and a new phrase found its way into our vocabulary: the “smart (fill in the blank),” for example the “smartphone.” Here are some of the more visible accomplishments of AI over the last fifteen years.
    • In 1997 IBM’s chess-playing computer Deep Blue became the first computer to beat world chess champion Garry Kasparov. In a six-game match, Deep Blue prevailed by two wins to one, with three draws. Until this point no computer had been able to defeat a reigning world champion in a match under standard tournament conditions. This win garnered headlines worldwide and was a milestone that embedded the reality of AI into the consciousness of the average person.
    • In 2005 a robot conceived and developed at Stanford University was able to drive autonomously for 131 miles along an unrehearsed desert trail, winning the DARPA Grand Challenge (the government’s Defense Advanced Research Projects Agency prize for a driverless vehicle).
    • In 2007 Boss, Carnegie Mellon University’s self-driving SUV, made history by swiftly and safely driving fifty-five miles in an urban setting while sharing the road with human drivers and won the DARPA Urban Challenge.
    • In 2010 Microsoft launched the Kinect motion sensor, which provides a 3-D body-motion interface for Xbox 360 games and Windows PCs. According to Guinness World Records, the Kinect holds the record for the “fastest-selling consumer electronics device” after selling eight million units in its first sixty days (in the early part of 2011). By January 2012 twenty-four million Kinect sensors had been shipped.
    • In 2011, in an exhibition match on the popular TV quiz show Jeopardy!, an IBM computer named Watson defeated Jeopardy!’s greatest champions, Brad Rutter and Ken Jennings.
    • In 2010 and 2011, Apple made Siri voice-recognition software available in the Apple app store for various applications, such as integrating it with Google Maps. In the latter part of 2011, Apple integrated Siri into the iPhone 4S and removed the Siri application from its app store.
    • In 2012 “scientists at Universidad Carlos III in Madrid…presented a new technique based on artificial intelligence that can automatically create plans, allowing problems to be solved with much greater speed than current methods provide when resources are limited. This method can be applied in sectors such as logistics, autonomous control of robots, fire extinguishing and online learning” (www.phys.org, “A New Artificial Intelligence Technique to Speed the Planning of Tasks When Resources Are Limited”).

The above list shows just some of the highlights. AI is now all around us: in our phones, computers, cars, microwave ovens, and almost any consumer or commercial electronic system labeled “smart.” Funding is no longer solely controlled by governments but is now underpinned by numerous consumer and commercial applications.

The road to the “expert system” and the “smart (anything)” ran through specific, well-defined applications. By the first decade of the twenty-first century, expert systems had become commonplace. It became normal to talk to a computer when ordering a pharmaceutical prescription and to expect your smartphone/automobile navigation system to give you turn-by-turn directions to the pharmacy. AI clearly was becoming an indispensable element of society in highly developed countries. One ingredient, however, continued to be missing. That ingredient was human affects (i.e., the feeling and expression of human emotions). If you called the pharmacy for a prescription, the AI program did not show any empathy. If you talked with a real person at the pharmacy, he or she likely would express empathy, perhaps saying something such as, “I’m sorry you’re not feeling well. We’ll get this prescription filled right away.” If you missed a turn on your way to the pharmacy while getting turn-by-turn directions from your smartphone, it did not get upset or scold you. It simply either told you to make a U-turn or calculated a new route for you.

While it became possible to program some rudimentary elements to emulate human emotions, the computer did not genuinely feel them. For example the computer program might request, “Please wait while we check to see if we have that prescription in stock,” and after some time say, “Thank you for waiting.” However, this was just rudimentary programming to mimic politeness and gratitude. The computer itself felt no emotion.

By the end of the first decade of the twenty-first century, AI slowly had worked its way into numerous elements of modern society. AI cloaked itself in expert systems, which became commonplace. Along with advances in software and hardware, our expectations continued to grow. Waiting thirty seconds for a computer program to do something seemed like an eternity. Getting the wrong directions from a smartphone rarely occurred. Indeed, with the advent of GPS (Global Positioning System, a space-based satellite navigation system), your smartphone gave you directions as well as the exact position of your vehicle and estimated how long it would take for you to arrive at your destination.

Those of us who worked in the semiconductor industry knew this outcome (the advances in computer hardware and the emergence of expert systems) was inevitable. Even consumers had a sense of the exponential progress occurring in computer technology. Many consumers complained that their new top-of-the-line computer would be a generation behind in as little as two years, meaning that the next generation of faster, more capable computers was available and typically selling at a lower price than their original computers.

This point became painfully evident to those of us in the semiconductor industry. For example, in the early 1990s, semiconductor companies bought workstations for their circuit designers (i.e., computer systems that emulate the decision-making ability of a human integrated-circuit design engineer), at roughly $100,000 per workstation. In about two years, you could buy the same level of computing capability in the consumer market for a relatively small fraction of the cost. We knew this would happen because integrated circuits had been relentlessly following Moore’s law since their inception. What is Moore’s law? I’ll discuss this in the next post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

Image: iStockPhoto.com (licensed)

Digital illustration of a human head with a microchip embedded in the forehead, symbolizing AI or brain-computer interface technology.

The Beginning of Artificial Intelligence – Part 1/2

While the phrase “artificial intelligence” is only about half a century old, the concept of intelligent thinking machines and artificial beings dates back to ancient times. For example the Greek myth “Talos of Crete” tells of a giant bronze man who protected Europa in Crete from pirates and invaders by circling the island’s shores three times daily. Ancient Egyptians and Greeks worshiped animated cult images and humanoid automatons. By the nineteenth and twentieth centuries, intelligent artificial beings had become common in fiction. Perhaps the best-known work of fiction depicting this is Mary Shelley’s Frankenstein, first published anonymously in London in 1818 (Mary Shelley’s name appeared on the second edition, published in 1823). In addition the stories of these “intelligent beings” often spoke to the same hopes and concerns we currently face regarding artificial intelligence.

Logical reasoning, sometimes referred to as “mechanical reasoning,” also has ancient roots, dating back at least to classical Greek philosophers and mathematicians such as Pythagoras and Heraclitus. The concept that mathematical problems are solvable by following a rigorous logical path of reasoning eventually led to computer programming. Mathematicians such as Alan Turing (1912–1954), the British logician, cryptanalyst, and computer scientist, suggested that a machine could simulate any mathematical deduction by using sequences of “0” and “1” (binary code).

The Birth of Artificial Intelligence

Discoveries in neurology, information theory, and cybernetics inspired a small group of researchers, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon, to begin considering the possibility of building an electronic brain. In 1956 these researchers founded the field of artificial intelligence at a conference held at Dartmouth College. Their work, and the work of their students, soon amazed the world: their computer programs solved algebraic word problems, proved logical theorems, and even spoke English.

AI research soon caught the eye of the US Department of Defense (DOD), and by the mid-1960s, the DOD was heavily funding AI research. Along with this funding came a new level of optimism. At that time Herbert Simon predicted, “Machines will be capable, within twenty years, of doing any work a man can do,” and Minsky not only agreed but also added that “within a generation…the problem of creating ‘artificial intelligence’ will substantially be solved.”

Obviously both had underestimated the level of hardware and software required for replicating the intelligence of a human brain. By setting extremely high expectations, however, they invited scrutiny. With the passing years, it became obvious that the reality of artificial intelligence fell short of their predictions. In 1974 funding for AI research began to dry up, both in the United States and Britain, which led to a period called the “AI winter.”

In the early 1980s, AI research began to resurface with the success of expert systems, computer systems that emulate the decision-making ability of a human expert. This meant the computer software was programmed to “think” like an expert in a specific field rather than follow the more general procedure of a software developer, as is the case in conventional programming. By 1985 the funding faucet for AI research had been turned back on and was soon flowing at more than a billion dollars per year.

However, the faucet again began to run dry by 1987, starting with the failure of the Lisp machine market that same year. The Lisp machine was developed in 1973 by MIT AI lab programmers Richard Greenblatt and Thomas Knight, who formed the company Lisp Machines Inc. This machine was the first commercial, single-user, high-end microcomputer and used Lisp programming (a specific high-level programming language). In a sense it was the first commercial, single-user workstation (i.e., an extremely advanced computer) designed for technical and scientific applications.

Although Lisp machines pioneered many commonplace technologies, including laser printing, windowing systems, computer mice, and high-resolution bit-mapped graphics, to name a few, the market reception for these machines was dismal, with only about seven thousand units sold by 1988, at a price of about $70,000 per machine. In addition Lisp Machines Inc. suffered from severe internal politics regarding how to improve its market position, which caused divisions in the company. To make matters worse, cheaper desktop PCs soon were able to run Lisp programs even faster than Lisp machines. Most companies that produced Lisp machines went out of business by 1990, which led to a second and longer-lasting AI winter.

In the second segment of this post we will discuss: Hardware Plus Software Synergy

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A humanoid robot with an extended hand under the text 'The Artificial Intelligence Revolution' questioning AI's role in serving or replacing humans.

The Artificial Intelligence Revolution – Will Artificial Intelligence Serve Us Or Replace Us?

This post is taken from the introduction of my new book, The Artificial Intelligence Revolution. Enjoy!

This book is a warning. Through this medium I am shouting, “The singularity is coming.” The singularity (as first described by John von Neumann in 1955) represents a point in time when intelligent machines will greatly exceed human intelligence. It is, by way of analogy, the start of World War III. The singularity has the potential to set off an intelligence explosion that can wield devastation far greater than nuclear weapons. The message of this book is simple but critically important. If we do not control the singularity, it is likely to control us. Our best artificial intelligence (AI) researchers and futurists are unable to accurately predict what a postsingularity world may look like. However, almost all AI researchers and futurists agree it will represent a unique point in human evolution. It may be the best step in the evolution of humankind or the last step. As a physicist and futurist, I believe humankind will be better served if we control the singularity, which is why I wrote this book.

Unfortunately the rise of artificial intelligence has been almost imperceptible. Have you noticed the word “smart” being used to describe machines? Often “smart” means “artificial intelligence.” However, few products are being marketed with the phrase “artificial intelligence.” Instead they are simply called “smart.” For example you may have a “smart” phone. It does not just make and answer phone calls. It will keep a calendar of your scheduled appointments, remind you to go to them, and give you turn-by-turn driving directions to get there. If you arrive early, the phone will help you pass the time while you wait. It will play games with you, such as chess, and depending on the level of difficulty you choose, you may win or lose the game. In 2011 Apple introduced a voice-activated personal assistant, Siri, on its latest iPhone and iPad products. You can ask Siri questions, give it commands, and even receive responses. Smartphones appear to increase our productivity as well as enhance our leisure. Right now they are serving us, but all that may change.

The smartphone is an intelligent machine, and AI is at its core. AI is the new scientific frontier, and it is slowly creeping into our lives. We are surrounded by machines with varying degrees of AI, including toasters, coffeemakers, microwave ovens, and late-model automobiles. If you call a major pharmacy to renew a prescription, you likely will never talk with a person. The entire process will occur with the aid of a computer with AI and voice synthesis.

The word “smart” also has found its way into military phrases, such as “smart bombs,” which are satellite-guided weapons such as the Joint Direct Attack Munition (JDAM) and the Joint Standoff Weapon (JSOW). The US military always has had a close symbiotic relationship with computer research and its military applications. In fact the US Air Force, starting in the 1960s, has heavily funded AI research. Today the air force is collaborating with private industry to develop AI systems to improve information management and decision making for its pilots. In late 2012 the science website www.phys.org reported a breakthrough by AI researchers at Carnegie Mellon University. Carnegie Mellon researchers, funded by the US Army Research Laboratory, developed an AI surveillance program that can predict what a person “likely” will do in the future by using real-time video surveillance feeds. This is the premise behind the CBS television program Person of Interest.

AI has changed the cultural landscape. Yet the change has been so gradual that we hardly have noticed the major impact it has. Some experts, such as Ray Kurzweil, an American author, inventor, futurist, and the director of engineering at Google, predict that in about fifteen years, the average desktop computer will have a mind of its own, literally. This computer will be your intellectual equal and will even have a unique personality. It will be self-aware. Instead of just asking simple questions about the weather forecast, you may be confiding your deepest concerns to your computer and asking it for advice. It will have migrated from personal assistant to personal friend. You likely will give it a name, much in the same way we name our pets. You will be able to program its personality to have interests similar to your own. It will have face-recognition software, and it will recognize you and call you by name, similar to the computer HAL 9000 in Arthur C. Clarke’s 2001: A Space Odyssey. The conversations between you and your “personal friend” will appear completely normal. Someone in the next room who is not familiar with your voice will not be able to tell which voice belongs to the computer and which voice belongs to you.

By approximately the mid-twenty-first century, Kurzweil predicts, the intelligence of computers will exceed that of humans, and a $1,000 computer will match the processing power of all human brains on Earth. Although, historically, predictions regarding advances in AI have tended to be overly optimistic, all indications are that Kurzweil is on target.

Many philosophical and legal questions will emerge regarding computers with artificial intelligence equal to or greater than that of the human mind (i.e., strong AI). Here are just a few questions we will ask ourselves after strong AI emerges:

  • Are strong-AI machines (SAMs) a new life-form?
  • Should SAMs have rights?
  • Do SAMs pose a threat to humankind?

It is likely that during the latter half of the twenty-first century, SAMs will design new and even more powerful SAMs, with AI capabilities far beyond our ability to comprehend. They will be capable of performing a wide range of tasks, which will displace many jobs at all levels in the workforce, from bank tellers to neurosurgeons. New medical devices using AI will help the blind to see and the paralyzed to walk. Amputees will have new prosthetic limbs, with AI plugged directly into their nervous systems and controlled by their minds. The new prosthetic limb will not only replicate the lost limb but also be stronger, more agile, and superior in ways we cannot yet imagine. We will implant computer devices into our brains, expanding human intelligence with AI. Humankind and intelligent machines will begin to merge into a new species: cyborgs. It will happen gradually, and humanity will believe AI is serving us.

Computers with strong AI in the late twenty-first century, however, may see things differently. We may appear to those machines much the same way bees in a beehive appear to us today. We know we need bees to pollinate crops, but we still consider bees insects. We use them in agriculture, and we gather their honey. Although bees are essential to our survival, we do not offer to share our technology with them. If wild bees form a beehive close to our home, we may become concerned and call an exterminator.

Will the SAMs in the latter part of the twenty-first century become concerned about humankind? Our history proves we have not been a peaceful species. We have weapons capable of destroying all of civilization. We squander and waste resources. We pollute the air, rivers, lakes, and oceans. We often apply technology (such as nuclear weapons and computer viruses) without fully understanding the long-term consequences. Will SAMs in the late twenty-first century determine it is time to exterminate humankind or persuade humans to become cyborgs (i.e., humans with brains enhanced by implanted artificial intelligence and potentially having organ and limb replacements from artificially intelligent machines)? Will humans embrace the prospect of becoming cyborgs? Becoming a cyborg offers the opportunity to attain superhuman intelligence and abilities. Disease and wars may be just events stored in our memory banks and no longer pose a threat to cyborgs. As cyborgs we may achieve immortality.

According to David Hoskins’s 2009 article, “The Impact of Technology on Health Delivery and Access” (www.workers.org/2009/us/sickness_1231):

An examination of Centers for Disease Control statistics reveals a steady increase in life expectancy for the U.S. population since the start of the 20th century. In 1900, the average life expectancy at birth was a mere 47 years. By 1950, this had dramatically increased to just over 68 years. As of 2005, life expectancy had increased to almost 78 years.

Hoskins attributes increased life expectancy to advances in medical science and technology over the last century. With the advent of strong AI, life expectancy likely will increase to the point that cyborgs approach immortality. Is this the predestined evolutionary path of humans?

This may sound like a B science-fiction movie, but it is not. The reality of AI becoming equal to that of a human mind is almost at hand. By the latter part of the twenty-first century, the intelligence of SAMs likely will exceed that of humans. The evidence that they may become malevolent exists now, which I discuss later in the book. Attempting to control a computer with strong AI that exceeds current human intelligence many times over may be a fool’s errand.

Imagine you are a grand master chess player teaching a ten-year-old to play chess. What chance does the ten-year-old have to win the game? We may find ourselves in that scenario at the end of this century. A computer with strong AI will find a way to survive. Perhaps it will convince humans it is in their best interest to become cyborgs. Its logic and persuasive powers may be not only compelling but also irresistible.

Artificial intelligence is an embryonic reality today, but it is improving exponentially. By the end of the twenty-first century, we will have only one question regarding artificial intelligence: Will it serve us or replace us?

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte

A glowing plasma globe with electric arcs radiating from the center in purple and blue hues.

Is All Energy Quantized? – Do We Live In A Quantum Universe? – Part 3/3

Lastly, one element of reality remains to complete our argument that all reality consists of quantized energy—energy itself. Is all energy reducible to quantums? All data suggests that energy in any form consists of quantums. We already discussed that mass, space, and time are forms of quantized energy. We know, conclusively, that electromagnetic radiation (light) consists of discrete particles (photons). All experimental data at the quantum level (the level of atoms and subatomic particles) tells us that energy exists as discrete quantums. As we discussed before, the macro level is the sum of all elements at the micro level. Therefore, a strong case can be made that all energy consists of discrete quantums.

If you are willing to accept that all reality (mass, space, time, and energy) is composed of discrete energy quantums, we can argue we live in a Quantum Universe. As a side note, I would like to add that this view of the universe is similar to the assertions of string theory, which posits that all reality consists of one-dimensional vibrating strings of energy. I intentionally chose not to entangle the concept of a Quantum Universe with string theory. If you will pardon the metaphor, string theory is tangled in numerous interpretations and philosophical arguments. No scientific consensus says that string theory is valid, though numerous prominent physicists believe it is. For these reasons, I chose to build the concept of a Quantum Universe separate from string theory, although the two theories appear conceptually compatible.

A Quantum Universe may be a difficult theory to accept. We do not typically experience the universe as being an immense system of discrete packets of energy. Light appears continuous to our senses. Our electric lamp does not appear to flicker each time an electron goes through the wire. The book you are holding to read these words appears solid. We cannot feel the atoms that form the book. This makes it difficult to understand that the entire universe consists of quantized energy. Here is a simple framework to think about it. When we watch a motion picture, each frame in the film is slightly different from the last. When we play the frames at the right speed, about twenty-four frames per second, we see, and our brains process, continuous movement. However, is the movement really continuous? No. It appears to be continuous because we cannot see the frame-to-frame changes.

If we have a quantum universe, we should be able to use quantum mechanics to describe it. However, we are unable to apply quantum mechanics beyond the atomic and subatomic level. Even though quantum mechanics is a highly successful theory when applied at the atomic and subatomic level, it simply does not work at the macro level. The macro level is the level we experience every day, and the level in which the observable universe operates. Why are we unable to use quantum mechanics to describe and predict phenomena at the macro level?

Quantum mechanics deals in statistical probabilities. For example, quantum mechanics statistically predicts an electron’s position in an atom. However, the theories of macro mechanics (Newtonian mechanics and the general theory of relativity) are deterministic and, at the macro level, provide a single answer for the position of an object. In fact, the two most successful theories in science, quantum mechanics and general relativity, are incompatible. For this reason, Einstein never warmed up to quantum mechanics, saying in effect that he could not accept it because “I like to think the moon is there even if I am not looking at it.” In other words, Einstein wanted the moon’s position to be definite and predictable, not a matter of probabilities.

Numerous scientists, including Einstein, have argued that the probabilistic aspect of quantum mechanics suggests something is wrong with the theory. Aside from the irrefutable fact that quantum mechanics works, and mathematically predicts reality at the atomic and subatomic level, it is counterintuitive. Is the probabilistic nature of quantum mechanics a proper interpretation? Numerous philosophical answers to this question exist. One of the most interesting is the well-known thought experiment “Schrödinger’s cat,” devised by Austrian physicist Erwin Schrödinger in 1935. It was intended to put an end to the debate by demonstrating the absurdity of quantum mechanics’ probabilistic nature. It goes something like this: Schrödinger proposed a scenario with a cat in a sealed box. The cat’s life or death depends on the state of a quantum system inside the box (this is a thought experiment, so go with the flow). Schrödinger asserted that the Copenhagen interpretation, as developed by Niels Bohr, Werner Heisenberg, and others over a three-year period (1924–27), implies that until we open the box, the cat remains both alive and dead (to the universe outside the box). When we open the box, per the Copenhagen interpretation, the cat is either alive or dead; it assumes one state or the other. This did not make much sense to Schrödinger, who did not wish to promote the idea of dead-and-alive cats as a serious possibility. As mentioned above, it also went against the grain for Einstein, who disliked quantum mechanics because of the ambiguous statistical nature of the science. Einstein was a determinist, as was Schrödinger. Schrödinger felt that this thought experiment would be a deathblow to the probabilistic interpretation of quantum mechanics, since it illustrates how counterintuitive quantum mechanics is. He intended it as a critique of the Copenhagen interpretation (the prevailing orthodoxy in 1935 and today). However, far from ending the debate, physicists use it as a way of illustrating and comparing the particular features, strengths, and weaknesses of each theory (macro mechanics versus quantum mechanics).

Over time, the scientific community became comfortable with both macro mechanics and quantum mechanics. Scientists appeared to accept that they were dealing with two different and disconnected worlds, and that two different theories were therefore needed. This appeared to them to be a fact of reality. However, that view was about to change. The scientific community was about to discover that only one reality exists. The two worlds, the macro level and the quantum level, were about to become one. This tipping point occurred in 2009–2010.

Before we go into the details, think about the implications and questions this raises.

  • Do macroscopic objects have a particle-wave duality, as assumed by quantum mechanics at the atomic and subatomic level?
  • Can macroscopic objects be modeled using wave equations, like the Schrödinger equation?
  • Will macroscopic reality behave similarly to microscopic reality? For example, will it be possible to be in two places at the same time?

To approach an answer, consider what happened in 2009.

Our story starts with Dr. Markus Aspelmeyer, an Austrian quantum physicist, who in 2009 performed an experiment coupling a photon to a micromechanical resonator, a micromechanical system typically created in an integrated circuit. The micromechanical resonator can resonate, moving up and down much like a plucked guitar string. The intriguing part is that Dr. Aspelmeyer was able to establish an interaction between the photon and the micromechanical resonator, creating “strong” coupling, a convincing and noticeable interaction. This means he was able to transfer quantum effects to the macroscopic world. It was a first in recorded history: the quantum world had been made to interact directly, and observably, with the macro world.

In 2010, Andrew Cleland and John Martinis at the University of California (UC), Santa Barbara, working with Ph.D. student Aaron O’Connell, became the first team to experimentally induce and measure a quantum effect in the motion of a human-made object. They demonstrated that it is possible to achieve quantum entanglement at the macro level. This means that a change in the physical state of one element transmits immediately to the other.

For example, when two particles are quantum mechanically entangled, which means they have interacted and an invisible bond exists between them, changing the physical state of one particle immediately changes the physical state of the other, even when the particles are a significant distance apart. Einstein called quantum entanglement, “spukhafte Fernwirkung,” or “spooky action at a distance.” Therefore, the quantum level and the macro level, given the appropriate physical circumstances, appear to follow the same laws. In this case, they were able to predict the behavior of the object using quantum mechanics. Science and AAAS (the publisher of Science Careers) voted the work, released in March 2010, as the 2010 Breakthrough of the Year, “in recognition of the conceptual ground their experiment breaks, the ingenuity behind it and its many potential applications.”

It appears only one reality exists, even though historically, physical measurements and theories pointed to two. The macro level and quantum level became one reality in the above experiment. It is likely our theories, like quantum mechanics and general relativity, need refinement. Perhaps, we need a new theory that will apply to both the quantum level and the macro level.

This completes our picture of a Quantum Universe. We do not know or understand much. Even though we can make cogent arguments that all reality consists of quantized energy, we do not have consensus on a single theory to describe it. When we examine the micro level, as well as the atomic and subatomic level, we are able to describe and predict behavior using quantum mechanics. However, in general, we are unable to extend quantum mechanics to the macro level, the level at which we observe the universe in which we live. We ask why, and we do not have an answer. Recent experiments indicate that the micro level (quantum level) influences the macro level. They appear connected. Based on all observations, the macro level appears to be the sum of everything that exists at the micro level. I submit for your consideration that there is one reality, and that reality is a Quantum Universe.

Source: Unraveling the Universe’s Mysteries (2012), Louis A. Del Monte

Image: iStockPhoto.com (licensed)

Are Space and Time Quantized?

Are Space and Time Quantized? – Do We Live In A Quantum Universe? – Part 2/3

Next, let us consider space. Is space quantized? In previous posts, we discussed the theory that a vacuum, empty space, is like a witch’s cauldron bubbling with virtual particles. This theory dates back to Paul Dirac who, in 1930, postulated a vacuum is filled with electron-positron pairs (Dirac sea). Therefore, most quantum physicists would argue that a vacuum is a sea of virtual matter-antimatter particles. This means, even a vacuum (empty space) consists of quantums of energy.

Other forms of energy are also present in a vacuum. We will illustrate this with a simple question. Do you believe a true void (empty space) exists somewhere in the universe? We can create an excellent vacuum in the laboratory using a well-designed vacuum chamber hooked to state-of-the-art vacuum pumps. We can go deep into outer space. However, regardless of where we go, is it truly void? In addition to the virtual particles in empty space, there are gravitational fields. (Viewing gravity as a field is a classical view of gravity. As discussed previously, gravity may be mediated via a particle, termed the graviton. For the sake of simplicity, I will use classical phrasing and view gravity as a field.) The gravitational fields would be present in the vacuum chamber, and present even deep in space. Even if the vacuum chamber itself were deep in space, gravitational fields would be present within the chamber. Part of the gravitational field would come from the chamber itself. The rest of the gravitational field would come from the universe. The universe is made up of all types of matter, and the matter radiates a gravitational field infinitely into space. Everything pulls on everything in the universe. The adage, “Nature abhors a vacuum,” should read, “Nature abhors a void.” Voids do not exist in nature. Within each void is a form of energy. Even if it were possible to remove every particle, the void would still contain virtual particles and gravitational fields. As said before, we have not found the graviton, the hypothetical massless particle that mediates gravity, but if you are willing to accept its existence, it is possible to argue that empty space consists of quantums of energy. It bubbles with virtual particles and gravitons.

We can posit another argument that space, itself, is quantized. We will start by asking a question. Is there an irreducible dimension to space, similar to the irreducible elements of matter? The short answer is yes. It is the Planck length. We can define the Planck length using three fundamental physical constants of the universe, namely the speed of light in a vacuum (c), Planck’s constant (h), and the gravitational constant (G). The scientific community views the Planck length as a fundamental feature of nature. It is approximately equal to 10⁻³⁵ meters (a 1 divided by a 1 followed by thirty-five zeros), smaller than anything we can measure. Physicists debate its meaning, and it remains an active area of theoretical research. Recent scientific thinking is that it is about the length of a “string” in string theory. Quantum physicists argue, based on the Heisenberg uncertainty principle, that it is the smallest length that can theoretically exist.
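As a rough, minimal sketch of how those three constants combine (using the standard textbook definitions, the reduced constant ħ = h/2π, and approximate constant values), the Planck length, and the closely related Planck time discussed below, can be computed as follows:

```python
import math

# Approximate values of the fundamental constants
h = 6.62607015e-34         # Planck's constant, J*s
hbar = h / (2 * math.pi)   # reduced Planck constant
G = 6.67430e-11            # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8           # speed of light in a vacuum, m/s

# Standard definitions: l_P = sqrt(hbar*G/c^3), t_P = l_P/c
planck_length = math.sqrt(hbar * G / c**3)
planck_time = planck_length / c

print(f"Planck length ~ {planck_length:.3e} m")   # ~1.6e-35 m
print(f"Planck time   ~ {planck_time:.3e} s")     # ~5.4e-44 s
```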

Does all this argue that space consists of quantized energy? To my mind, it does.

  • First, it contains quantized matter-antimatter particles (Dirac sea).
  • Second, it contains gravitons (the hypothetical particle of gravity).
  • Third, and lastly, space has an irreducible dimension: a finite length termed the Planck length.

Thus far, we have made convincing arguments that mass and space consist of quantized energy. Next, let’s turn our attention to time. In previous posts, we discussed Planck time (~10⁻⁴³ seconds, which is a 1 divided by a 1 followed by forty-three zeros). As stated in those posts, Planck time is theoretically the smallest time frame we will ever be able to measure. In addition, Planck time, similar to the Planck length, is a fundamental feature of reality. We can define Planck time using the fundamental constants of the universe, similar to the methodology used to define the Planck length. According to the laws of physics, we would be unable to measure “change” if the time interval were shorter than a Planck interval. In other words, the Planck interval is the shortest interval over which we humans are able to measure, or even comprehend, change occurring. This is compelling evidence that time, itself, may consist of quantums, with each quantum equal to a Planck interval. However, this does not make the case that time is quantized energy. To make that case, we will need to revisit the Existence Equation Conjecture discussed in previous posts:

KEX4 = -0.3mc²

where KEX4 is the energy associated with an object’s movement in time, m is mass, and c is the speed of light in a vacuum.

The Existence Equation Conjecture implies that movement in time (or existence) requires negative energy. The equation, itself, relates energy to the mass (m) that is moving in time. However, in the last post (Part 1) we argued that all mass is reducible to elementary particles, which ultimately are equivalent to discrete packets of energy via Einstein’s mass-energy equivalence equation (E = mc²). This suggests the Existence Equation Conjecture implies that movement in time embodies a quantized energy element. Therefore, if we combine our concept of the Planck interval with the quantized energy nature of time implied by the Existence Equation Conjecture, we can argue that time is a form of quantized energy.
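For concreteness, here is a small numerical sketch of the two relations just mentioned, evaluated for an arbitrary 1 kg sample mass. The -0.3 coefficient is simply the value stated in the conjecture above, and the sample mass is chosen purely for illustration:

```python
c = 2.99792458e8   # speed of light in a vacuum, m/s
m = 1.0            # arbitrary sample mass, kg (illustrative only)

rest_energy = m * c**2     # Einstein's mass-energy equivalence, E = mc^2
ke_x4 = -0.3 * m * c**2    # Existence Equation Conjecture, KEX4 = -0.3mc^2

print(f"E = mc^2 : {rest_energy:.2e} J")   # ~9.0e16 J
print(f"KEX4     : {ke_x4:.2e} J")         # about -2.7e16 J
```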

Source: Unraveling the Universe’s Mysteries (2012), Louis A. Del Monte

Image: iStockphoto (licensed)