Invisible – or Irrelevant?

A lot of physics is, quite frankly, boring.  That’s not to say it isn’t very important in understanding stresses and strains, deformations and fractures, and in ensuring bridges don’t collapse, jet engines work, and so much else.  But it’s not a topic that stands up well in everyday conversation, unless you are an engineer, of course!  Cosmology is a different matter.  That’s a subject packed with interesting stuff, all the more so because so much of it is not amenable to experiments in the physics lab!  Not only is it interesting, but some of it addresses things that are invisible.  We ‘know’ they are there, but we can’t see them.

One example that kept a lot of people happy for a long time was the aether.  According to ancient science, this was an invisible material that filled the space beyond the Earth, the Earth itself being the home of the other four elements: earth, air, fire, and water.  The idea was largely forgotten for a long time, but that changed in the 19th century when scientists determined that light travelled in waves.  Waves imply motion in a substance, and since there was no other candidate, the aether made a comeback.  Actually, Newton had used the aether in the 1670s, in his first model to explain gravity.  In order to account for interaction between distant bodies, he (re)introduced a mechanism of propagation through the medium of the aether.  The explanation was clumsy.  He described the aether as a medium that ‘flows’ continually downward toward the Earth’s surface and is partially absorbed and partially diffused.  This ‘circulation’ of aether is what he needed to explain the force of gravity.  The theory was a complicated mess, and he had to postulate different aether densities, with aether dense within objects and rarer outside them.  We don’t need to go into the elaborate model he developed, and, anyway, he abandoned it in his next theory of gravity.  It was pretty nutty!

The aether remained invisible, but was deemed necessary for the transmission of light, until one of the most famous experiments in physics: the 1887 Michelson–Morley experiment to detect the aether (at that time often referred to as the ‘luminiferous aether’).  The experiment was simple in concept, attempting to compare the speed of light in two perpendicular directions.  The result was negative: the experimenters found no significant difference between the speed of light in the direction of the Earth’s presumed movement through the aether and the speed at right angles to it.  A negative result, and the first critical piece of evidence in demolishing the aether concept (as well as providing a key component in what would later become Einstein’s theory of special relativity).  That experiment finally dispensed with an invisible substance that had been seen as key to understanding the physical world for more than 2,000 years.

Today physics is grappling with another invisible, undetectable and yet essential substance permeating space, in this case ‘dark matter’.  David Merritt’s recent book, A Philosophical Approach to MOND, published in 2020, explains the issue at stake.  The standard theory of cosmology is called the Lambda cold dark matter (ΛCDM) model.  As that name suggests, the theory postulates the existence of dark matter – a mysterious substance that (according to the theorists) comprises the bulk of the matter in the Universe.  It is widely accepted.  Most cosmologists today use this ‘Standard Model’, and virtually all of them take the existence of dark matter for granted.  As one Nobel Prize winner put it: ‘The evidence for the dark matter of the hot Big Bang cosmology is about as good as it gets in natural science.’

There is one problem, however. For four decades and counting, scientists have failed to detect dark matter particles in terrestrial laboratories. You might think this would have generated some doubts about the standard cosmological model, but all indications are to the contrary. Surely the lack of experimental confirmation should create concern.

In fact, there are competing cosmological theories, and not all of them contain dark matter. The most successful competitor is called Modified Newtonian Dynamics (MOND). Observations that are explained under the Standard Model by invoking dark matter are explained under MOND by postulating a modification to the theory of gravity. If scientists had confirmed the existence of these dark particles, there would be little motivation to explore such theories as MOND. But dark matter is still undetected, so the existence of a viable alternative theory that lacks dark matter invites us to ask the obvious question: does dark matter really exist?

Philosophers of science are fascinated by such situations, and it is easy to see why. The traditional way of assessing the truth or falsity of a theory is by testing its predictions.  If a prediction is confirmed, we tend to believe the theory; if it is refuted, we tend to reject it.  Given this, if two theories are equally capable of explaining relevant observations, there would seem to be no way to decide between them.

What is a poor scientist to do? How to decide? It turns out that the philosophers have some suggestions. They point out that scientific theories can achieve correspondence with the facts in two very different ways.  The ‘bad’ way is by ‘after the event’ accommodation: the theory is adjusted or modified to bring it in line with each new piece of data as it becomes available.  The ‘good’ way is by prediction: the theory correctly predicts facts in advance of their discovery, without any adjustments to the original theory.

It is probably safe to say that many theories need some changes after their first formulation.  But philosophers are nearly unanimous in arguing that successful, prior prediction of a fact gives a much stronger basis for belief in a theory than after-the-event tweaks to deal with an annoying observation.  Some philosophers would go so far as to claim that the only legitimate support for a theory comes from data it predicted in advance of their confirmation.  Since only one of these two cosmological theories can be correct, you would expect that only one of them corresponds with the facts in this way.  That is what has been under examination to date, and according to the philosophers’ criterion it’s not the Standard Model that comes out as the favoured theory.  It’s MOND!  In the discussion that follows, I am borrowing extensively from Wikipedia and other sources.  I should add that while nothing I am reporting is new, it does help us understand an interesting dilemma.

Dark matter was a response to an anomaly that arose, in the late 1970s, from observations of spiral galaxies such as our Milky Way.  The speed at which stars and gas clouds orbit about the centre of a galaxy should be predictable given the observed distribution of matter in the galaxy.  Such predictions are based on the assumption that the gravitational force resulting from observable matter (stars, gas) is responsible for maintaining the stars in their circular orbits, just as the Sun’s gravity maintains the planets in their orbits.  But this prediction was decisively contradicted by the observations.  It was found that, sufficiently far from the centre of every spiral galaxy, stars were orbiting much faster than predicted.  What was going on?
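To make that concrete, here is a minimal back-of-the-envelope sketch of the Newtonian expectation (my own illustration, with a purely made-up visible mass, not a calculation from Merritt’s book): if most of a galaxy’s visible mass lies well inside a given radius, the predicted circular speed beyond that radius should fall off as the inverse square root of the distance.

```python
# A minimal sketch of the Newtonian prediction for orbital speeds far from a
# galaxy's centre.  The mass below is illustrative only (roughly 5e10 Suns).
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1.0e41     # illustrative visible mass, kg

def newtonian_speed(r_metres):
    """Circular orbital speed (m/s), assuming all visible mass lies well inside r."""
    return math.sqrt(G * M_VISIBLE / r_metres)

KPC = 3.086e19  # metres per kiloparsec
for r_kpc in (5, 10, 20, 40):
    v_kms = newtonian_speed(r_kpc * KPC) / 1000.0
    print(f"r = {r_kpc:2d} kpc  ->  predicted v = {v_kms:3.0f} km/s")

# The predicted speed falls as 1/sqrt(r); the observed rotation curves of
# spiral galaxies instead stay roughly flat at large radii.
```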

Cosmologists had a solution.  They proposed that every galaxy is embedded in a ‘dark matter halo’, a roughly spherical cloud composed of a substance that generates just the right amount of extra gravity needed to explain these high orbital speeds.  When I read that, I was amazed.  Up until I delved into this, I had assumed dark matter was just floating about in space, not concentrated in a series of clouds, each centred on a galaxy.  Not only was this dark matter put forward to explain observations, but the theorists had to add a bit more: dark matter had to be invisible, consisting of elementary particles that do not interact with electromagnetic radiation (that includes light, but also radio waves, gamma rays, etc.).  However, no such particle was known at the time, nor have physicists found any evidence of its existence in their laboratory experiments, despite looking very hard since the early 1980s.  A solution to a problem, without data to back it up.  Can you see why it reminds me of the aether?

In 1983, an alternative explanation for this ‘rotation-curve anomaly’ was put forward by a physicist at the Weizmann Institute of Science in Israel, Mordehai Milgrom.  Milgrom noticed that the anomalous data had two striking regularities that were not explained by the dark matter hypothesis.  First: orbital speeds were not simply larger than predicted; in every galaxy that had been observed, the orbital speed rose the further out you looked, before settling at an unchanging high value as far out as observations permitted.  Apparently, astronomers call this property ‘asymptotic flatness of the rotation curve’ (don’t you love the terminology they use!).  Second: these anomalously high orbital speeds invariably appear in regions of space where the acceleration due to gravity falls below a certain characteristic, and very small, value.  Under the dark matter hypothesis, it turns out no one can predict, in any galaxy, exactly where the rotational speed will begin to deviate from Newtonian dynamics; on Milgrom’s reading, it happens wherever the acceleration drops below that critical value.

Based on these observations, Milgrom began to wonder if the theory of gravity might simply be wrong.  He suggested a simple modification to Isaac Newton’s laws that relate gravitational force to acceleration, a modification carefully designed to produce what had been observed.  As you can imagine, many cosmologists said, ‘so what?’, since it is easy to invent something new to fit facts.  However, his theory also predicted how to calculate this effective gravitational force, based on the observed distribution of normal matter alone.  When tested, his predictions were correct, without relying on the presence of dark matter.
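For the curious, here is a small sketch of the kind of prediction involved, based on the widely quoted ‘deep-MOND’ limit of Milgrom’s proposal (my own simplified illustration, with a made-up visible mass, not a calculation from the sources I am drawing on): where the Newtonian acceleration drops below Milgrom’s characteristic scale a0 (about 1.2 × 10⁻¹⁰ m/s²), the effective acceleration becomes the geometric mean of the two, which yields a flat rotation speed that depends only on the visible mass.

```python
# A minimal sketch of the deep-MOND limit: for accelerations well below a0,
# the effective acceleration is sqrt(g_Newton * a0), so the circular speed
# tends to a constant value with v^4 = G * M * a0.  Values are illustrative.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10         # Milgrom's acceleration scale, m/s^2
M_VISIBLE = 1.0e41   # illustrative visible mass, kg (roughly 5e10 Suns)

# Asymptotic (flat) rotation speed predicted from the visible mass alone.
v_flat = (G * M_VISIBLE * A0) ** 0.25
print(f"Predicted flat rotation speed = {v_flat / 1000:.0f} km/s")

# Radius at which the Newtonian acceleration G*M/r^2 falls to a0 -- roughly
# where the rotation curve should start to depart from Newtonian dynamics.
r_transition = math.sqrt(G * M_VISIBLE / A0)
print(f"Transition radius = {r_transition / 3.086e19:.0f} kpc")
```

With this illustrative mass, the flat speed comes out comparable to the rotation speeds measured in real spiral galaxies, which is the sense in which the prediction relies on normal matter alone.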

There is an important issue at stake here.  The standard cosmological model relies on the ‘after the event’ tweaking approach, invoking whatever amount and distribution of dark matter is required to reconcile the observed orbital speeds with Newton’s laws. On the other hand, Milgrom’s hypothesis predicts orbital speeds based on the observed distribution of normal matter alone. To date, Standard Model cosmologists have been unable to come up with an algorithm capable of doing anything as impressive as that.  The usual conventions of the philosophy of science would suggest the predictive success of Milgrom’s theory gives us a firm basis for accepting his MOND theory, as opposed to the Standard Model.

I suppose you won’t be surprised to learn that most cosmologists are sticking with the Standard Model, despite further predictions from MOND being confirmed, predictions that can’t be explained by the original version of the Standard Model.  Not to worry.  Scientists don’t give up easily, and Standard Model theorists have come up with a methodology to carry out large-scale computer simulations of the formation and evolution of galaxies, starting from uniform initial conditions in the early Universe.  These simulated galaxies can then be ‘observed’, and their properties tabulated.  We don’t need to get into the details.  Suffice it to say, there is a proposed mechanism, called ‘feedback’, which helps bring the simulated galaxies into better agreement with what is observed.  These adjustments aren’t quite enough to explain Milgrom’s findings, but progress continues, and the Standard Model adherents are confident they can tweak their way to success.

Will their eventual success lend support to the now modified Standard Theory, just as confirmation of Milgrom’s predictions gave support to MOND?  Now, this is where it gets interesting!  Philosophers of science have an answer: ‘no’! As many have observed, when one theory has accounted for a set of facts by clever adjustments (tweaks), while a rival accounts for the same facts directly and without contrivance, then the rival does, but the first does not, derive support from those facts.  To put that plainly, Milgrom’s hypothesis correctly predicts the observation of higher-than-expected orbital speeds far from the centre of every spiral galaxy, ‘without contrivance’.  On that basis, MOND ‘wins’: it is the sole hypothesis that derives support from those data.

This isn’t just a philosophical preference for scientific theories that predict previously unknown observations or relations in advance.  It is also the approach that scientists themselves have adhered to, repeatedly, going back centuries.  I read that Leibniz wrote in 1678: ‘Those hypotheses deserve the highest praise … by whose aid predictions can be made, even about phenomena or observations which have not been tested before.’  Why then have most cosmologists been so dismissive of MOND, given that MOND exhibits the very quality that scientists prize so highly?

As you might have suspected, there’s more.  One of the most important features of the standard cosmological model is its ability to account for the characteristics of the cosmic microwave background (CMB) spectrum, the statistical properties of temperature fluctuations in the universe-filling radiation that was produced soon after the Big Bang.  Milgrom’s theory didn’t originally do this, or at least not as well as the Standard Model, but last year two theorists in the Czech Republic showed there are versions of Milgrom’s hypothesis that are perfectly capable of reproducing the CMB data without dark matter.  This relativistic version of MOND, called RMOND, reproduces the behaviour of Milgrom’s original theory on the scale of galaxies.

Case over?  Prior to this work, many supporters of the Standard Model had argued that fitting the CMB data was the single most important thing MOND needed to do.  Cosmologist Ruth Durrer told The Atlantic: ‘A theory must do really well to agree with [the CMB] data. This is the bottleneck.’  Since this bottleneck no longer exists, have Standard Model cosmologists accepted RMOND theory as a legitimate competitor to theirs?  Of course not.  A current argument is that RMOND works, but is so much more complicated than the Standard Model.

Does this mean the Standard Model is better?  Perhaps their criticisms are missing a key point: it’s not that RMOND is too complicated; it is that the dark matter concept is too simple!  MOND theorists have understood for a long time that there is just no way that a formless entity such as dark matter can spontaneously keep rearranging itself to produce the striking regularities observed in nearby galaxies.

The interest in this debate is not just over the strange concept of invisible, undetectable dark matter.  Clearly, the argument from predictive success is a good reason to favour MOND over the standard cosmological model. But one can hope for more: for what Karl Popper called a ‘crucial experiment’: an empirical or observational result that decisively favours one theory over the other.  When will we identify this definitive experiment!?

Drawn to cosmology by my father’s interest, I find the field especially exciting right now, with two viable contenders for a unifying theory of cosmology.  But I can’t help reflecting on the fact that this story is rather similar to that of the aether: the dominant scientific community showing it is unwilling to abandon a theory that is increasingly unsupportable.  Just like that ‘luminiferous aether’, dark matter cannot be detected, despite four decades of determined searching.  Science keeps facing these crises, with researchers finding it hard to abandon a previously well-supported theoretical framework.  I wonder if this is another example.  One day, we might come to agree that dark matter isn’t just invisible, it’s irrelevant.
