Category Archives: Universe Mysteries

Diagram of a double-slit experiment setup with light source, thin opaque plate, double slits, and screen.

A Classic Time Travel Paradox – Double-Slit Experiment Demonstrates Reverse Causality!

Almost the entire scientific community has held for hundreds of years that for every effect, there must have been a cause. Another way of saying this is that cause precedes effect. For example, if you hit a nail with a hammer (the cause), you drive it deeper into the wood (the effect). However, some recent experiments are challenging that belief. We are discovering that what you do after an experiment can influence what occurred at the beginning of the experiment. This would be the equivalent of the nail going deeper into the wood before being hit by the hammer. This is termed reverse causality. Although there are numerous new experiments that illustrate reverse causality, science has been struggling with a classic experiment, the “double-slit,” which has illustrated reverse causality for well over half a century.

There are numerous versions of the double-slit experiment. In its classic version, a coherent light source, for example a laser, illuminates a thin plate containing two open parallel slits. The light passing through the slits produces a series of light and dark bands on a screen behind the thin plate. The brightest bands are at the center, and the bands become dimmer the farther they are from the center. See the image below to visualize this.

The series of light and dark bands on the screen would not occur if light were only a particle. If light consisted of only particles, we would expect to see only two slits of light on the screen, and the two slits of light would replicate the slits in the thin plate. Instead, we see a series of light and dark patterns, with the brightest band of light in the center, and tapering to the dimmest bands of light at either side of the center. This is an interference pattern and suggests that light exhibits the properties of a wave. We know from other experiments—for example, the photoelectric effect (see glossary), which I discussed in my first book, Unraveling the Universe’s Mysteries—that light also exhibits the properties of a particle. Thus, light exhibits both particle- and wavelike properties. This is termed the dual nature of light. This portion of the double-slit experiment simply exhibits the wave nature of light. Perhaps a number of readers have seen this experiment firsthand in a high school science class.
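The light and dark bands described above follow the textbook double-slit intensity formula: a cos² interference term from the two slits, modulated by a single-slit diffraction envelope. The sketch below computes this relative intensity; the wavelength and slit dimensions are illustrative assumptions, not values from any particular experiment.

```python
import math

def double_slit_intensity(theta, wavelength, slit_separation, slit_width):
    """Relative intensity of a double-slit pattern at angle theta (radians).

    The cos^2 interference term (two slits) is modulated by a
    single-slit diffraction (sinc^2) envelope.
    """
    beta = math.pi * slit_separation * math.sin(theta) / wavelength
    alpha = math.pi * slit_width * math.sin(theta) / wavelength
    envelope = (math.sin(alpha) / alpha) ** 2 if alpha != 0 else 1.0
    return (math.cos(beta) ** 2) * envelope

# Illustrative values: red laser (650 nm), slits 0.25 mm apart, 0.05 mm wide
wl, d, a = 650e-9, 0.25e-3, 0.05e-3
print(double_slit_intensity(0.0, wl, d, a))   # central bright band -> 1.0
# First dark band occurs where d * sin(theta) = wavelength / 2
theta_dark = math.asin(wl / (2 * d))
print(round(double_slit_intensity(theta_dark, wl, d, a), 6))  # -> ~0.0
```

Evaluating this at many angles reproduces the pattern described above: the brightest band at the center, tapering off and alternating with dark bands on either side.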

The above double-slit experiment demonstrates only one element of the paradoxical nature of light, the wave properties. The next part of the double-slit experiment continues to puzzle scientists. There are five aspects to the next part.

  1. Both individual photons of light and individual atoms have been projected at the slits one at a time. This means that one photon or one atom is projected, like a bullet from a gun, toward the slits. Surely, our judgment would suggest that we would only get two slits of light or atoms at the screen behind the slits. However, we still get an interference pattern, a series of light and dark lines, similar to the interference pattern described above. Two inferences are possible:
    1. The individual photon acted as a wave and went through both slits, interfering with itself to cause an interference pattern.
    2. Atoms also exhibit a wave-particle duality, similar to light, and behave like the individual photon described (in part a) above.
  2. Scientists have placed detectors near the slits to observe what is happening, and they find something even stranger occurs. The interference pattern disappears, and only two slits of light or atoms appear on the screen. What causes this? Quantum physicists argue that as soon as we attempt to observe the wavefunction of the photon or atom, it collapses. Please note, in quantum mechanics, the wavefunction describes the propagation of the wave associated with any particle or group of particles. When the wavefunction collapses, the photon acts only as a particle.
  3. If the detector (in number 2 immediately above) stays in place but is turned off (i.e., no observation or recording of data occurs), the interference pattern returns and is observed on the screen. We have no way of explaining how the photons or atoms know the detector is off, but somehow they know. This is part of the puzzling aspect of the double-slit experiment. This also appears to support the arguments of quantum physicists, namely, that observing the wavefunction will cause it to collapse.
  4. The quantum eraser experiment—Quantum physicists argue the double-slit experiment demonstrates another unusual property of quantum mechanics, namely, an effect termed the quantum eraser experiment. Essentially, it has two parts:
    1. Detectors record the path of a photon regarding which slit it goes through. As described above, the act of measuring “which path” destroys the interference pattern.
    2. If the “which path” information is erased, the interference pattern returns. It does not matter in which order the “which path” information is erased. It can be erased before or after the detection of the photons.

This appears to support the wavefunction collapse theory, namely, observing the photon causes its wavefunction to collapse and assume a single value.

If the detector replaces the screen and only views the atoms or photons after they have passed through the slits, once again the interference pattern vanishes, and we get only two slits of light or atoms. How can we explain this? In 1978, American theoretical physicist John Wheeler (1911–2008) proposed that observing the photon or atom after it passes through the slits would ultimately determine whether the photon or atom acts like a wave or a particle. If you attempt to observe the photon or atom, or in any way collect data regarding either one’s behavior, the interference pattern vanishes, and you get only two slits of photons or atoms. In 1984, Carroll Alley, Oleg Jakubowicz, and William Wickes confirmed this experimentally at the University of Maryland. This is the “delayed-choice experiment.” Somehow, measuring the future state of the photons was able to influence their behavior at the slits. In effect, we are twisting the arrow of time, causing the future to influence the past. Numerous additional experiments confirm this result.

Let us pause here and be perfectly clear. Measuring the future state of the photon after it has gone through the slits causes the interference pattern to vanish. Somehow, a measurement in the future is able to reach back into the past and cause the photons to behave differently. In this case, the measurement of the photon causes its wave nature to vanish (i.e., collapse) even after it has gone through the slit. The photon now acts like a particle, not a wave. This paradox is clear evidence that a future action can reach back and change the past.
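The with-measurement versus without-measurement contrast can be sketched numerically. In this toy model (the geometry and wavelength are illustrative assumptions, and this is a pedagogical sketch, not a simulation of the actual experiment), the two slits contribute complex amplitudes at each screen point. Without which-path information, the amplitudes add and then are squared, producing interference; with which-path information, the probabilities add, and the pattern washes out.

```python
import cmath
import math

def intensity_at(x, path_known):
    """Toy model: two slits at +/- d/2, screen point x at distance L.

    Without which-path information, the complex amplitudes from the
    two slits ADD before squaring -> interference fringes. With
    which-path information, the PROBABILITIES add -> no fringes.
    """
    wavelength, d, L = 650e-9, 0.25e-3, 1.0   # illustrative values
    k = 2 * math.pi / wavelength
    r1 = math.hypot(L, x - d / 2)             # path length via slit 1
    r2 = math.hypot(L, x + d / 2)             # path length via slit 2
    a1 = cmath.exp(1j * k * r1)               # amplitude via slit 1
    a2 = cmath.exp(1j * k * r2)               # amplitude via slit 2
    if path_known:                            # "measured": add probabilities
        return abs(a1) ** 2 + abs(a2) ** 2
    return abs(a1 + a2) ** 2                  # "unmeasured": add amplitudes

# With which-path info the screen is essentially flat (~2.0 everywhere);
# without it, the intensity oscillates between 0 and 4 across the screen.
print(intensity_at(0.0, path_known=True))
print(intensity_at(0.0, path_known=False))
```

Scanning `x` across the screen with `path_known=False` reproduces the fringes; with `path_known=True` the curve is flat, mirroring the collapse behavior described above.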

To date, no quantum mechanical or other explanation has gained widespread acceptance in the scientific community. We are dealing with a time travel paradox that illustrates reverse causality (i.e., effect precedes cause), where the effect of measuring a photon affects its past behavior. This simple high-school-level experiment continues to baffle modern science. Although quantum physicists explain it as wavefunction collapse, that explanation tends not to satisfy many in the scientific community. The delayed-choice experiments strongly suggest the arrow of time is reversible and that the future can influence the past.

This post is based on material from my new book, How to Time Travel, available at Amazon in both paperback and Kindle editions.

Image: Figure 3, from How to Time Travel (2013)

dark matter

Dark Matter Explained – Most of the Universe Is Missing!

The most popular theory of dark matter is that it is a slow-moving particle, traveling at up to a tenth of the speed of light. It neither emits nor scatters light. In other words, it is invisible. However, its effects are detectable, as I will explain below. Scientists call the hypothesized particle behind dark matter a “WIMP” (Weakly Interacting Massive Particle).

In 1933, Fritz Zwicky (California Institute of Technology) made a crucial observation. He discovered that the orbital velocities of galaxies in clusters were not following Newton’s law of gravitation (every mass in the universe attracts every other mass with a force inversely proportional to the square of the distance between them). They were orbiting too fast for the visible mass to be held together by gravity. If the galaxies followed Newton’s law of gravity, the outermost stars would be thrown into space. He reasoned there had to be more mass than the eye could see, essentially an unknown and invisible form of mass that was allowing gravity to hold the galaxies together. Zwicky’s calculations revealed that there had to be 400 times more mass in the galaxy clusters than what was visible. This is the mysterious “missing-mass problem.” It is normal to think that this discovery would turn the scientific world on its ear. However, as profound as the discovery turned out to be, progress in understanding the missing mass lagged until the 1970s.

In 1975, Vera Rubin and fellow staff member Kent Ford, astronomers at the Department of Terrestrial Magnetism at the Carnegie Institution of Washington, presented findings that reenergized Zwicky’s earlier claim of missing matter. At a meeting of the American Astronomical Society, they announced the finding that most stars in spiral galaxies orbit at roughly the same speed. They made this discovery using a new, sensitive spectrograph (a device that separates an incoming wave into a frequency spectrum), which accurately measured the velocity curves of spiral galaxies. Like Zwicky, they found the rotational velocity of the galaxies was too fast to hold all the stars in place. According to Newton’s law of gravity, the galaxies should have been flying apart, but they were not. Presented with this new evidence, the scientific community finally took notice. Their first reaction was to call the findings into question, essentially casting doubt on what Rubin and Ford reported. This is a common and appropriate reaction, until the amount of evidence (typically independent verification) becomes convincing.
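The tension Rubin and Ford exposed can be shown in a few lines. If all of a galaxy’s mass were the visible mass concentrated toward the center, Newton’s law predicts orbital speeds falling off as 1/√r at large radii; instead, the measured curves stay roughly flat. The sketch below computes the Newtonian prediction; the visible-mass figure is an illustrative assumption, not a measured value from their paper.

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
KPC = 3.086e19           # one kiloparsec in meters

def keplerian_velocity(enclosed_mass_kg, radius_m):
    """Orbital speed if all mass lies inside the orbit: v = sqrt(G*M/r).

    Newton's law predicts v falls as 1/sqrt(r) outside the visible
    mass -- but Rubin and Ford measured roughly FLAT curves, implying
    extra unseen (dark) mass at large radii.
    """
    return math.sqrt(G * enclosed_mass_kg / radius_m)

# Illustrative: ~1e11 solar masses of visible matter
M_visible = 1e11 * M_SUN
for r_kpc in (10, 20, 40):
    v = keplerian_velocity(M_visible, r_kpc * KPC)
    print(f"r = {r_kpc:2d} kpc -> predicted v = {v / 1000:.0f} km/s")
```

Each doubling of the radius should cut the predicted speed by √2; the observed flat curves are the discrepancy that dark matter is invoked to explain.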

In 1980, Rubin and her colleagues published their findings (V. Rubin, N. Thonnard, and W. K. Ford, Jr. (1980). “Rotational Properties of 21 Sc Galaxies with a Large Range of Luminosities and Radii from NGC 4605 (R=4kpc) to UGC 2885 (R=122kpc).” Astrophysical Journal 238: 471). The results implied that either Newton’s laws do not apply, or that more than 50% of the mass of galaxies is invisible. Although skepticism abounded, eventually other astronomers confirmed their findings. The experimental evidence had become convincing. “Dark matter,” the invisible mass, dominates most galaxies. Even in the face of competing theories that attempt to explain the phenomena observed by Zwicky and Rubin, most scientists believe dark matter is real. None of the competing theories (which typically attempted to modify how gravity behaves on the cosmic scale) was able to explain all the observed evidence, especially gravitational lensing (the way gravity bends light).

Currently, the scientific community believes that dark matter is real and abundant, making up as much as 90% of the mass of the universe. However, dark matter is still a mystery. For years, scientists have been working to find the WIMP particle to confirm dark matter’s existence. All efforts have been either unsuccessful or inconclusive.

This material is from Unraveling the Universe’s Mysteries (2012), Louis A. Del Monte.

Close-up image of translucent blue cells or microscopic organisms against a dark background.

Virtual Particles – Spontaneous Particle Creation

This article is from chapter 1 of my book, Unraveling the Universe’s Mysteries. Enjoy!

Spontaneous particle creation is the phenomenon of particles appearing from apparently nothing (i.e., a vacuum), hence their name, “virtual particles.” However, they appear real, and they cause real changes to their environment. What is a virtual particle? It is a particle that only exists for a limited time. The virtual particle obeys some of the laws of real particles, but it violates other laws. What laws do virtual particles obey? They obey two of the most critical laws of physics: the Heisenberg uncertainty principle (it is not possible to know both the position and velocity of a particle simultaneously) and the conservation of energy (energy cannot be created or destroyed). What laws do they violate? Their kinetic energy, which is the energy related to their motion, may be negative. A real particle’s kinetic energy is always positive. Do virtual particles come from nothing? Apparently, but to a physicist, empty space is not nothing. Said another way, physicists consider empty space to be something.

Before we proceed, it is essential to understand a little more about the physical laws mentioned in the above paragraph.

First, we will discuss the Heisenberg uncertainty principle. Most physics professors teach it in the context of attempting to simultaneously measure a particle’s velocity and position. It goes something like this:

  • When we attempt to measure a particle’s velocity, the measurement interferes with the particle’s position.
  • If we attempt to measure the particle’s position, the measurement interferes with the particle’s velocity.
  • Thus, we can be certain of either the particle’s velocity or the particle’s position, but not both simultaneously.

This makes sense to most people. However, it is an oversimplification. The Heisenberg uncertainty principle has greater implications. It embodies the statistical nature of quantum mechanics. Quantum mechanics is a set of laws and principles that describes the behavior and energy of atoms and subatomic particles. This is often termed the “micro level” or “quantum level.” Therefore, you can conclude that the Heisenberg uncertainty principle embodies the statistical behavior of matter and energy at the quantum level. In our everyday world, which science terms the macro level, it is possible to know both the velocity and position of larger objects. We generally do not talk in terms of probabilities. For example, we can predict the exact location and orbital velocity of a planet. Unfortunately, we are not able to make similar predictions about an electron as it orbits the nucleus of an atom. We can only talk in probabilities regarding the electron’s position and energy. Thus, most scientists will say that macro-level phenomena are deterministic, which means that a unique solution describes their state of being, including position, velocity, size, and other physical attributes. On the other hand, most physicists will argue that micro-level (quantum-level) phenomena are probabilistic, which means that their state of being is described via probabilities, and we cannot simultaneously determine, for example, the position and velocity of a subatomic particle.
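The macro/micro divide above has a simple quantitative basis. The uncertainty principle in its standard form reads Δx · Δp ≥ ħ/2, so the minimum velocity spread is Δv ≥ ħ / (2 m Δx). The sketch below plugs in an electron confined to an atom and, for contrast, a baseball; the confinement sizes are illustrative assumptions.

```python
HBAR = 1.055e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31  # electron mass, kg

def min_velocity_uncertainty(mass_kg, position_uncertainty_m):
    """Minimum velocity spread from Heisenberg: dx * dp >= hbar/2,
    hence dv >= hbar / (2 * m * dx)."""
    return HBAR / (2 * mass_kg * position_uncertainty_m)

# Electron confined to an atom-sized region (~1e-10 m):
dv_electron = min_velocity_uncertainty(M_ELECTRON, 1e-10)
# A 0.145 kg baseball located to within a millimeter:
dv_baseball = min_velocity_uncertainty(0.145, 1e-3)

print(f"electron: dv >= {dv_electron:.2e} m/s")   # enormous
print(f"baseball: dv >= {dv_baseball:.2e} m/s")   # utterly negligible
```

The electron’s minimum velocity spread comes out to hundreds of kilometers per second, while the baseball’s is immeasurably tiny, which is why the macro world appears deterministic and the quantum world does not.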

The second fundamental law to understand is the conservation of energy law, which states we cannot create or destroy energy. However, we can transform energy. For example, when we light a match, the chemical energy stored in the match transforms into heat and light. The total energy of the match still exists, but in a different form.

Lastly, the kinetic energy of an object is a measure of its energy due to its motion. For example, when a baseball traveling at high velocity hits a thin glass window, it is likely to break the glass. This is due to the kinetic energy of the baseball. When the window starts to absorb the ball’s kinetic energy, the glass breaks. Obviously, the thin glass is unable to absorb all of the ball’s kinetic energy, and the ball continues its flight after breaking the glass. However, the ball will be going slower, since it has used some of its kinetic energy to break the glass.
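The baseball example can be made concrete with the kinetic energy formula, KE = ½ m v². The numbers below are illustrative assumptions (ball speed and the energy absorbed by the glass are invented for the example), not measured values.

```python
def kinetic_energy(mass_kg, velocity_ms):
    """Kinetic energy of a moving object: KE = 1/2 * m * v^2 (joules)."""
    return 0.5 * mass_kg * velocity_ms ** 2

# A 0.145 kg baseball at 40 m/s (~90 mph)
ke_before = kinetic_energy(0.145, 40.0)
# Suppose (illustratively) that breaking the glass absorbs 20 J;
# the remaining kinetic energy fixes the ball's exit speed.
ke_after = ke_before - 20.0
v_after = (2 * ke_after / 0.145) ** 0.5
print(f"KE before: {ke_before:.0f} J")              # 116 J
print(f"speed after the glass: {v_after:.1f} m/s")  # slower than 40 m/s
```

As the paragraph above says, the ball continues its flight but more slowly, because part of its kinetic energy went into breaking the glass.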

With the above understandings, we can again address the question: where do these virtual particles come from? The answer we discussed above makes no sense. It is counterintuitive. However, to the best of science’s knowledge, virtual particles come from empty space. How can this be true?

According to Paul Dirac, a British physicist and Nobel Prize Laureate, who first postulated virtual particles, empty space (a vacuum) consists of a sea of virtual electron-positron pairs, known as the Dirac sea. This is not a historical footnote. Modern-day physicists, familiar with the Dirac-sea theory of virtual particles, claim there is no such thing as empty space. They argue it contains virtual particles.

This raises yet another question. What is a positron? A positron is the mirror image of an electron. It has the same mass as an electron, but the opposite charge. The electron is negatively charged, and the positron is positively charged. If we consider the electron matter, the positron is antimatter. For his theoretical work in this area, science credits Paul Dirac with discovering the “antiparticle.” Positrons, like all antiparticles, are considered antimatter.

Virtual particle-antiparticle pairs pop into existence in empty space for brief periods, in agreement with the Heisenberg uncertainty principle, which gives rise to quantum fluctuations. This may appear highly confusing. A few paragraphs back we said that the Heisenberg uncertainty principle embodies the statistical nature of energy at the quantum level, which implies that energy at the quantum level can vary. Another way to say this is to state the Heisenberg uncertainty principle gives rise to quantum fluctuations.

What is a quantum fluctuation? It is a theory in quantum mechanics that argues there are certain conditions where a point in space can experience a temporary change in energy. Again, this is in accordance with the statistical nature of energy implied by the Heisenberg uncertainty principle. This temporary change in energy gives rise to virtual particles. This may appear to violate the conservation of energy law, arguably the most revered law in physics. It appears that we are getting something from nothing. However, if the virtual particles appear as a matter-antimatter pair, the system remains energy neutral. Therefore, the net increase in the energy of the system is zero, which would argue that the conservation of energy law remains in force.
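How brief are these temporary changes in energy? A common heuristic reading of the energy-time uncertainty relation, ΔE · Δt ≈ ħ/2, says a fluctuation that “borrows” energy ΔE can persist for at most roughly ħ/(2ΔE). The sketch below applies this rough estimate (it is an order-of-magnitude heuristic, not a rigorous derivation) to an electron-positron pair, whose minimum energy cost is its rest energy, 2 m_e c².

```python
HBAR = 1.055e-34        # reduced Planck constant, J*s
C = 2.998e8             # speed of light, m/s
M_ELECTRON = 9.109e-31  # electron mass, kg

def max_lifetime(borrowed_energy_joules):
    """Rough lifetime allowed by dE * dt ~ hbar/2 for a quantum
    fluctuation that 'borrows' energy dE from the vacuum."""
    return HBAR / (2 * borrowed_energy_joules)

# A virtual electron-positron pair costs at least 2 * m_e * c^2
pair_energy = 2 * M_ELECTRON * C ** 2
t = max_lifetime(pair_energy)
print(f"pair rest energy: {pair_energy:.2e} J")
print(f"max lifetime:     {t:.1e} s")   # ~3e-22 s: gone almost instantly
```

The pair must vanish within about 10⁻²² seconds, which is why virtual particles are never observed directly, only through their effects.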

No consensus exists that virtual particles always appear as a matter-antimatter pair. However, this view is commonly held in quantum mechanics, and this mode of creation maintains the conservation of energy. Therefore, it is consistent with Occam’s razor, which states that the simplest explanation is the most plausible one, until new data to the contrary becomes available. The lack of consensus about the exact nature of virtual particles arises because we cannot measure them directly. We detect their effects and infer their existence. For example, they produce the Lamb shift, which is a small difference in energy between two energy levels of the hydrogen atom in a vacuum. They also produce the Casimir effect, an attraction between a pair of electrically neutral metal plates in a vacuum. These are two well-known effects caused by virtual particles. A laundry list of effects demonstrates that virtual particles are real.

Abstract fractal pattern resembling a cosmic or underwater scene with glowing blue and white textures.

Is Dark Energy Real or Simply a Scary Ghost Story?

If it is not real, it is an extremely scary ghost story. Unfortunately, the phenomenon we call dark energy is real. If it plays out on its current course, we are going to be alone, all alone. The billions upon billions of other galaxies holding the promise of planets with life like ours will be gone. The universe will be much like what they taught our grandparents at the beginning of the twentieth century. It will consist of the Milky Way galaxy. All the other galaxies will have moved beyond our cosmological horizon, and they will be lost to us forever. There will be no evidence that the Big Bang ever occurred.

Mainstream science widely accepts the Big Bang as giving birth to our universe. Scientists knew from Hubble’s discovery in 1929 that the universe was expanding. However, prior to 1998, scientific wisdom was that the expansion of the universe would gradually slow down, due to the force of gravity, and eventually all mass in the universe would collapse to a single point in a “big crunch.” We were so sure that the “big crunch” model was correct, we decided to confirm our theory by measuring it. Can you imagine our reaction when our first measurement did not confirm our paradigm, namely that the expansion of the universe should be slowing down?

What happened in 1998? The High-z Supernova Search Team (an international cosmology collaboration) published a paper that shocked the scientific community: Adam G. Riess et al. (1998). “Observational evidence from supernovae for an accelerating universe and a cosmological constant.” The Astronomical Journal 116 (3). They reported that the universe was doing the unthinkable. The expansion of the universe was not slowing down—in fact, it was accelerating. Of course, this caused a significant ripple in the scientific community. Scientists went back to Einstein’s general theory of relativity and resurrected the “cosmological constant,” which Einstein had added to his equations to yield a static, non-expanding universe. Einstein came to consider the cosmological constant his “greatest blunder” after Edwin Hubble, in 1929, showed the universe was expanding.

Through high-school-level mathematical manipulation, scientists moved Einstein’s cosmological constant from one side of the equation to the other. With this change, the cosmological constant no longer acts to balance gravity and hold the universe static. In this new formulation, Einstein’s “greatest blunder,” the cosmological constant, mathematically models the accelerating expansion of the universe. Mathematically this may work; however, it does not give us insight into what is causing the expansion.
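The manipulation described above can be written out explicitly. In Einstein’s original form, the cosmological constant Λ sits on the geometry (left) side of the field equations; subtracting the Λ term from both sides moves it to the source (right) side, where it reads as an energy contribution of the vacuum itself, which is how it models dark energy.

```latex
% Einstein's original form: \Lambda on the geometry side,
% added to permit a static universe
G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}

% Subtract \Lambda g_{\mu\nu} from both sides: it now appears as a
% source term alongside the matter-energy content, modeling the
% accelerating expansion
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} - \Lambda g_{\mu\nu}
```

As the paragraph above notes, relabeling which side of the equation Λ sits on changes its interpretation, but it does not explain what Λ physically is.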

The one thing that you need to know is that almost all scientists hold the paradigm of “cause and effect.” If it happens, something is causing it to happen. Things do not simply happen. They have a cause. That means every bubble in the ocean has a cause. It would be a fool’s errand to attempt to find the cause for each bubble. Yet, I believe, as do almost all of my colleagues, each bubble has a cause. Therefore, it is perfectly reasonable to believe something is countering the force of gravity, and causing the expansion to accelerate. What is it? No one knows. Science calls it “dark energy.”

That is the state of science today. The universe’s expansion is accelerating. No one knows why. Scientists reason there must be a cause countering the pull of gravity. They name that cause “dark energy.” Scientists mathematically manipulate Einstein’s self-admitted “greatest blunder,” the “cosmological constant,” to model the accelerated expansion of the universe.

Here is the scary part. In time, we will be entirely alone in the universe. The accelerated expansion of space will cause all other galaxies to move beyond our cosmological horizon. When this happens, our observable universe will consist of the Milky Way. The Milky Way galaxy will continue to exist, but as far out as our best telescopes will be able to observe, no other galaxies will be visible to us. What they taught our grandparents will have come true. The universe will be the Milky Way and nothing else. All evidence of the Big Bang will be gone. All evidence of dark energy will be gone. Space will grow colder, almost devoid of all heat, as the rest of the universe moves beyond our cosmological horizon. The entire Milky Way galaxy will grow cold as its stars eventually run out of fuel and die. All life will end. How is that for a scary story?

This post is based on my book, Unraveling the Universe’s Mysteries (2012).

A highly magnified electron microscope image of a tardigrade, a tiny water-dwelling micro-animal known for its resilience.

What kind of life might we find on other planets? Extremophiles!

In the last five decades, we have come to learn that life can be highly adaptable. Starting with the discovery of extremophiles in the 1960s, our entire understanding of how life may have evolved on Earth has been undergoing a reassessment. The early Earth would have presented a relatively inhospitable environment, suggesting to scientists that the earliest forms of life may have been extremophiles.

What is an extremophile? An extremophile is an organism that can thrive in an extreme condition that would be detrimental to most life on Earth. Let us take an example of the most complex of all known extremophiles, the tardigrade.

Tardigrades (also known as “water bears”) are about 1 millimeter (0.039 in) long when fully grown, with 4 pairs of legs, each bearing 4–8 claws, also known as “disks.” The animals are prevalent in moss and lichen and viewable with a low-power microscope.

What makes them an extremophile? The tardigrade can withstand temperatures as low as minus 273 degrees Celsius (near absolute zero) and as hot as 151 degrees Celsius (well above the boiling point of water, which is 100 degrees Celsius). It is also able to withstand pressures about six times that found in the deepest ocean trenches and ionizing radiation at doses hundreds of times higher than humans can survive. The big surprise is they can also live in the vacuum of space.

The average human can live without water for about three days and without food for about ten days, provided the external environment is hospitable. However, the tardigrade can go without both food and water for more than 10 years. Tardigrades dry out to the point where they are less than 3% water, but they can then rehydrate, forage, and reproduce.

The discovery of extremophiles makes finding life on other planets and moons, even within our own solar system, more likely. For example, researchers recently discovered the bacterium Planococcus halocryophilus OR1 in permafrost (permanently frozen ground) on Ellesmere Island (part of the Qikiqtaaluk Region of the Canadian territory of Nunavut). The organism thrives at 5 degrees Fahrenheit (minus 15 degrees Celsius). This discovery offers clues as to the type of life we may find on Mars or Saturn’s moon Enceladus, both of which contain water ice and surface temperatures well below freezing.

What we humans consider hospitable conditions may actually be lethal to extremophiles. For example, the microorganism Ferroplasma acidiphilum needs large amounts of iron to survive. The iron amounts they thrive in would kill most other life forms. On Earth, many extremophiles live deep underground, which was previously thought to be a dead zone for life, due to the absence of sunlight. However, now we know that the majority of our planet’s bacteria live underground.

The planet Mars has two polar ice caps, which consist primarily of water ice. What might we find as we explore these and the surrounding regions? Saturn’s moon Enceladus appears to have liquid water under its icy surface. Because of Enceladus’s apparent water near the surface, it is a prime candidate for extraterrestrial life in the form of extremophiles.

In my YouTube video, introducing my book, Unraveling the Universe’s Mysteries, I predicted that we would likely find extraterrestrial life in our own solar system within the next twenty years. I stand by that prediction. In fact, I believe I am being conservative.

I suggest we prepare ourselves. We may be on the verge of discovering life in our own solar system.

Sources:

Image: Wikimedia Commons – The tardigrade Hypsibius dujardini