Cosmological arguments for the existence of God
A philosophical perspective on findings of science

by Albrecht Moritz
(2009, revised 2015)


Summary

The apparent extreme fine-tuning of the laws of nature – necessary to allow for the physical evolution of the universe and the evolution of life, and affirmed with broad consensus by cosmologists regardless of their worldview – is reviewed. The article shows that all naturalistic non-design proposals fail to explain this apparent fine-tuning.

These are:
a) Brute chance or brute fact
b) Necessity or high probability of the laws of nature
c) Life as we do not know it
d) The multiverse

Therefore, the most rational explanation is that the fine-tuning is due to actual design, pointing to the existence of God. As also demonstrated here, scenarios of a naturalistic origin of the universe blatantly contradict, or are not at all supported by, observations from science about actual matter, energy or fields. The physical realm does not behave in a way required for naturalistic origins. This further points to a completely different, immaterial source for the origin of the universe – lending additional support to the concept of a designer God.

Furthermore, it is discussed why the hypothesis of Cosmological Natural Selection is an insufficient alternative to the argument that the laws of nature must be designed.


Background

A few years ago my theism was challenged when I discovered that, apart from the fact that evolution "all worked by itself", the origin of life probably had natural causes as well (I became so enthusiastic about the topic that I wrote an overview of the research on the origin of life which is posted on the evolution website Talkorigins.org). I became interested in atheistic explanations of the world. After analyzing all of these, I decided that, from my perspective, they fell far short of providing the most rational explanation of the world, and, what is more, that the philosophy of theism still did. While I still consider the issues regularly and am open to new input, I currently see no serious challenge that would lead me to change my theist position. Certainly, I am well aware that theism and religious belief are not without difficulties – but in my view the difficulties of the atheistic position are greater by a wide margin. Also, while I respect it when others feel compelled to do so, I see for myself no reason to adopt an agnostic position. In any case, my study of atheistic views has further deepened my interest in rational arguments for the existence of God, a type of argument that had already played a central role in my belief in God for a long time.

Divine revelation is important evidence, but if it were the only evidence available, without convincing evidence for God from the world around us, it would not suffice for me. It is the dual, combined evidence from both rational arguments for the existence of God and divine revelation that forms for me an unbeatable combination and keeps me firmly grounded in theism (in my personal case, Catholicism); it also excludes deism.

Important arguments for the existence of God outside of divine revelation, and apart from the classical philosophical arguments, are for me the kinds of cosmological arguments that I will discuss below, and the Argument from Reason, which holds that the mind must have an immaterial component in order to be rational, even though it uses, and is dependent on, the brain as an instrument (mental states correlate with brain activity). It thus suggests a more direct involvement of God in the creation of the human mind. For web literature on the argument, see for example this article:

Why Naturalists Should Mind about Physicalism, and Vice Versa

Of particular interest are the logical relation of ground-consequent as opposed to the relation cause-effect, and the contradiction between determinism and rationality, as outlined there *).

*) While overall I find it a very good article and worth reading in its entirety, I do not agree with all of it. Qualia, for example, might well be explained by a non-reductive physicalism.

An excellent, more extended presentation of the argument is C.S. Lewis’s article:

The Cardinal Difficulty of Naturalism

(The text may require repeated careful reading to fully grasp the issues; there are attempted rebuttals of this article on the web that do not even understand what they aim to refute. Lewis's ponderings on quantum mechanics, then a new topic in science, at the beginning can be safely ignored; he does not use them in his argument anyway.)

I also recommend Victor Reppert’s book
C.S. Lewis’s Dangerous Idea, as well as Stephen Barr’s outstanding discussion of rationality – which concentrates more on the recognition of abstract and absolute (e.g. mathematical) truths and on the question of whether the mind is just a computer – in the section "What is Man?" of his book Modern Physics and Ancient Faith.

As for the conflict between rationality and determinism, it can also be seen when one examines more closely the claim "Naturalism is true"; see my essay:

"Naturalism is true": A self-contradictory statement

Except when it comes to the human mind, I am a ‘die-hard’ evolutionist throughout, and, following the ever-stronger scientific evidence, I also hold that the origin of life must have had natural causes *).

*) There will be those who claim that I am a proponent of ideas of so-called 'Intelligent Design' (ID) after all; if not when it comes to the origin of life and general biological evolution, then at least when it comes to the human mind. Yet this is not the case. The issue has little to do with the 'irreducible complexity' of ID, or with the complexity of material organization of life in general and the brain in particular. Rather, it has to do with the question of whether mere matter is able to organize itself to (paraphrasing Thomas Nagel) successfully aspire to thoughts or thought processes of universal validity, also driven by laws of logic, even though in some concrete cases they may still be prone to errors – or whether this is possible only if the mind also has an immaterial component. Referring to computers as an example of matter capable of exhibiting ‘objective thought’ is not a valid counterargument. The functioning of computers is dependent on human rationality – even if they are induced to 'learn' and in the process to create output 'on their own' – since they are programmed by humans according to the rules of logic and reason that these humans apply. Instead of being programmed to calculate 9 x 7 = 63, a computer could just as easily be programmed to calculate 9 x 7 = 126 – also obeying the laws of physics. It would not know the difference.

For me, coming from theism, the idea of God – principally as eternally existing spirit, i.e. an immaterial being that is infinite and all-powerful, the ultimate cause of existence, yet personal at the same time – makes good sense from a philosophical perspective (it certainly is not the "white-bearded old man in the sky" of religious folklore). Many of the great philosophers throughout history were theists, as are many scientists today and as were all of the great scientists who started the scientific revolution (yes, including Galileo Galilei). This leads me to a different weighing of observable, rational facts about the world than that performed by an atheist who has decided a priori that the God hypothesis is not a viable alternative (perhaps some may consider this alternative more reasonable after reading the arguments below). An atheist might argue that his/her positions are scientific and objective, since they are an extrapolation from what science tells us about the world. This, however, overlooks the fact that this extrapolation, while it may claim to be based on science, is a philosophical extrapolation, not a scientific one, since it transcends the realm of strictly scientific knowledge. The atheist’s position is no less philosophical than the theist’s position – which, when it comes to a cosmic designer, can also be, as in my case, an extrapolation from what science tells us about the world (no, I am not talking about the Intelligent Design position that denies the science of evolution).


Cosmological arguments for the existence of God

The arguments presented here are based on observations from science, yet especially the arguments in Section 2 may show a certain resemblance to classical philosophical arguments, like some of the five ways of Thomas Aquinas.

Obviously, like any other arguments for the existence of God, the arguments presented here only apply to the God of the philosophers (contained in the attributes of God in the three great monotheistic religions), not to the God of any specific religion. Choices in favor of a particular religion will best be argued on the basis of historical and theological considerations – how divine revelation developed – rather than on philosophical ones.


Contents

1. The laws of nature require a designer
1.1. The apparent fine-tuning of the laws of nature
1.2. Addressing three common objections
1.3. Proposals to explain apparent fine-tuning without design
1.3.1. Brute chance or brute fact
1.3.2. Necessity or high probability of the laws of nature
1.3.3. Life as we do not know it
1.3.4. The multiverse
1.4. The multiverse does not solve the design problem
1.5. Conclusion
1.6. The Rare Earth hypothesis: a design argument?

2. The origin of the universe: eternal God, eternal matter or eternal fields
2.1. Eternal matter
2.2. Eternal field
2.3. A universe from nothing?
2.4. Conclusion

3. Critique of Cosmological Natural Selection
3.1. The hypothesis
3.2. Objections
3.3. Conclusion

4. The uncaused universe


1. The laws of nature require a designer

1.1. The apparent fine-tuning of the laws of nature

In support of their position, atheists like to point out that physical and biological evolution are self-sufficient processes. However, in a comprehensive consideration of the issue, this fact is only part of the picture. Further questions are: is the very existence of evolution self-explanatory? Why can evolution occur in the first place? And here the atheistic position is confronted with grave problems.

Physicists have known for a few decades that the laws of nature have to be exceedingly special to allow for any form of life to develop, and in fact for any kind of chemistry and complexity in the universe. The fortunate apparent coincidences in the laws of nature that allow for life in the universe are called ‘anthropic coincidences’ ('anthropic' means related to mankind; perhaps a better term would have been the more general ‘biophilic', life-friendly). Physicists say that the laws of nature are extremely ‘fine-tuned’.

Actually, the scientifically neutral term would be the sometimes-used ‘apparent fine-tuning’, since ‘fine-tuning’ already implies a fine-tuner, which moves away from strict science into the realm of philosophy. However, the consistent use of ‘apparent fine-tuning’ becomes awkward, at the latest when it has to be extended to the phrase ‘apparently fine-tuned’. Thus, I will use the common expression ‘fine-tuning’ here, with the just-mentioned caveat.

Since, as shown below, chemistry and complexity in general would not be possible without exceedingly special laws of nature, evolution would not be possible without them either. Therefore, even though the power of physical and biological evolution is awesome, the fact that there is evolution at all appears at least as wondrous. This fact is thus not self-explanatory; it is in dire need of an explanation.

Cosmological fine-tuning per se – when it simply comes to pointing out the extraordinarily special character of the laws of nature, which appear to be balanced on a knife’s edge – is not a religious argument, contrary to what many atheists claim. No, the fine-tuning of the laws of nature is pointed out with broad consensus by leading cosmologists, many of them agnostics or atheists. It is remarkable and rather curious how many atheists conveniently ignore or even dismiss mainstream science when it comes to cosmological fine-tuning, thus committing the same mistake they rightfully accuse creationists of when it comes to evolution. Certainly, the further extension into a design argument is theistic, while atheistic scientists often see no other choice than to posit the multiverse (see below) as a non-design explanation.

(Alas, many theists and apologists of the faith use the fine-tuning argument in conjunction with the anti-evolution Intelligent Design argument. This is unfortunate, since a powerful philosophical argument from science is mingled with an argument from scientific ignorance, and is thus more easily dismissed by atheists – not for lack of merit, but by association.)

What follows are some quotes from experts, all of them apparently atheists or agnostics (I chose them deliberately over theistic physicists to make the point).

The different sources that I quote use different expressions to denote "10 to the power of n". I will uniformly rewrite this as "1En", since this is a standard formulation on computers and avoids the confusion of different notations for the same thing from different sources (the best would be scientific notation, yet it uses superscript, which is not reliably rendered on an html page by all browsers and on all computers). For example, "10 to the power of 6", i.e. one million, becomes 1E6, which is a 1 with 6 zeros behind it; "10 to the power of -6" becomes 1E-6, which is one millionth. 1E9 is thus one billion (a 1 with 9 zeros behind it), 1E12 is one trillion, 1E40 is a 1 with 40 zeros behind it, etc.
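
For those who want to check such numbers mechanically, the 1En notation maps directly onto the floating-point syntax of Python (and most other programming languages); a small illustration:

    # The 1En notation of this article is also Python's float syntax:
    assert 1E6 == 10**6         # one million: a 1 with 6 zeros behind it
    assert 1E-6 == 1 / 10**6    # one millionth
    assert 1E9 == 10**9         # one billion
    assert 1E12 == 10**12       # one trillion
    print(int(1E6))             # prints 1000000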

Lee Smolin, a leading theoretical physicist, points out in The Life of the Cosmos that stars are necessary for life. They are the energy sources that prevent everything from falling into a homogeneous thermal equilibrium, in which life as an entity that necessarily operates outside any thermal equilibrium could not exist.

He then says (here is the link to the chapter on stars):


"What is the probability that the world so created [with random values of the parameters] would contain stars? The answer is that the probability is incredibly small. This is such an important conclusion that I will take a few pages to explain why it is true. In fact the existence of stars rests on several delicate balances between the different forces in nature. These require that the parameters that govern how strongly these forces act be tuned just so. In many cases a small turn of the dial in one direction or another results in a world, not only without stars, but with much less structure than our universe."


He then discusses for several pages the parameters that need to be ‘just right’. Here is a brief summary:

1) Protons, neutrons, electrons and neutrinos interact via four basic forces. These are gravity, electromagnetism, and the strong and weak nuclear forces.

2) Newton’s gravitational constant is incredibly small, which is to say that gravity is incredibly weak. This is vital for stars, because the weaker gravity is, the more protons must be piled on top of each other before the pressure is high enough to ignite nuclear reactions. Stars are so huge precisely because the constant is so tiny. If they were not huge, they would not be able to burn for billions of years (they usually burn for 10 billion years). If the constant were stronger by only a factor of 10, stars would burn for only 10 million years (not enough time for life to evolve); if it were stronger by another factor of 10, the lifetime of a star would be 10,000 years (see the sketch after this list).

3) Stars burn through nuclear reactions that fuse protons and neutrons into a succession of more massive nuclei. For this to happen the masses of the elementary particles must be chosen very delicately. Were the electron’s mass not about the same size as the amount that the neutron outweighs the proton (which is about 0.2 %), and were each of these not much smaller than the proton’s mass, stable nuclei could not be formed (according to the standard model of physics, the masses of these three particles are set by completely independent parameters). The strengths of the different forces must also be carefully tuned to obtain stable nuclei. Stars cannot burn if nuclei are not stable.

4) The neutrino mass must be very small for the nuclear reactions that energize the stars to happen.

5) Why is the universe big enough for stars? Why does it live for the billions of years needed for stars to form? This depends on the cosmological constant which can be no larger than about 1E-40. If it were not, the universe would not live long enough to produce stars.
(The value of the cosmological constant given here appears different from that in other sources cited below, but it is simply expressed in different units.)

6) If it were not for the strong nuclear force, nuclei would be blown apart. Remarkably, the attractive nuclear force actually balances the electrical repulsion of the protons. If there were not this fine balance there would be no stability and no nuclei. Our existence depends on it. The strong nuclear force must also be short-ranged, otherwise there would be the danger that all the protons and neutrons in the world would be pulled together into one big nucleus.

7) The weak nuclear interaction must be set up in order to govern the basic nuclear reactions on which the physics of stars is based.
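
The lifetimes quoted under point 2 imply a steep scaling: every tenfold strengthening of gravity shortens stellar lifetimes by roughly a factor of a thousand. Here is a minimal sketch in Python of that implied, roughly cubic dependence – an inference from Smolin's quoted numbers only, not his actual calculation:

    def stellar_lifetime_years(gravity_factor, baseline_years=1e10):
        # Rough stellar lifetime if gravity were stronger by gravity_factor,
        # assuming the cubic scaling implied by Smolin's quoted lifetimes.
        return baseline_years / gravity_factor**3

    for factor in (1, 10, 100):
        print(factor, stellar_lifetime_years(factor))
    # factor 1 -> 1e10 years (10 billion), factor 10 -> 1e7 (10 million),
    # factor 100 -> 1e4 (10,000 years)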

Smolin concludes that the chances that a universe created by randomly choosing the parameters contains stars suitable to sustain life are ridiculously small. He calculates them to be vastly smaller than one chance in 1E80, where 1E80 is the total number of neutrons and protons in all the stars of the observable universe combined. The number he comes up with, one chance in 1E229 *), comes from a straightforward calculation, explained in the notes to the chapter of his book. A common objection against this sort of calculation is that it is fallacious to vary only one parameter while holding all the rest constant, and that probability space would be considerably widened were several parameters allowed to co-vary. Yet as cosmologist Luke Barnes shows in his article from p. 19 onward, with impressive graphs for a number of cases, in general this objection does not hold.

*) that is one chance in 10 trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion trillion. No, this is not a joke. (You can do the math yourself: a trillion is 1E12, a trillion trillion (or a trillion times trillion) is 1E24, and so on.)
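
The footnote's arithmetic can be verified mechanically, using Python's exact integer arithmetic:

    # nineteen 'trillion's (1E12 each), preceded by a 10, give exactly 1E229
    assert 10 * (10**12)**19 == 10**229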

Max Tegmark writes:


"For instance, if the electromagnetic force were weakened by a mere 4%, then the sun would immediately explode (the diproton would have a bound state, which would increase the solar luminosity by a factor 1E18). If it were stronger, there would be fewer stable atoms. Indeed, most if not all the parameters affecting low-energy physics appear fine-tuned at some level, in the sense that changing them by modest amounts results in a qualitatively different universe.

"If the weak interaction were substantially weaker, there would be no hydrogen around, since it would have been converted to helium shortly after the Big Bang. If it were either much stronger or much weaker, the neutrinos from a supernova explosion would fail to blow away the outer parts of the star, and it is doubtful whether life-supporting heavy elements would ever be able to leave the stars where they were produced. If the protons were 0.2% heavier, they would decay into neutrons unable to hold onto electrons, so there would be no stable atoms around. If the proton-to-electron mass ratio were much smaller, there could be no stable stars, and if it were much larger, there could be no ordered structures like crystals and DNA molecules."


Leonard Susskind writes in The Cosmic Landscape:


"To make the first 119 decimal places of the vacuum energy zero is most certainly no accident." (The vacuum energy relates to the cosmological constant.)


Stephen Hawking writes in A Brief History of Time, p. 125:


"The remarkable fact is that the values of these numbers (i.e. the constants of physics) seem to have been very finely adjusted to make possible the development of life" (p. 125).


Cosmologist Andrei Linde says:


"We have a lot of really, really strange coincidences, and all of these coincidences are such that they make life possible. […] And if we double the mass of the electron, life as we know it will disappear. If we change the strength of the interaction between protons and electrons, life will disappear. Why are there three space dimensions and one time dimension? If we had four space dimensions and one time dimension, then planetary systems would be unstable and our version of life would be impossible. If we had two space dimensions and one time dimension, we would not exist."


In discussions atheists often quote an article by Steven Weinberg, who suggests that some of the constants are less fine-tuned than they appear. Yet even he is forced, in the same article, to acknowledge at least the apparent fine-tuning of the cosmological constant (see below). Also, the multiverse, discussed later, is seen by him and many other physicists as the only way out of the design problem, and he was quoted as saying in a discussion with Richard Dawkins:


"If you discovered a really impressive fine-tuning ... I think you'd really be left with only two explanations: a benevolent designer or a multiverse."


Astrophysicist Martin Rees writes in Just Six Numbers about six parameters that need to be right (this is not to say that no more than these six are important; Smolin, for example, uses others as well). His conclusion is that the idea that these six numbers all coincide by chance is unsatisfactory (p. 164 f.):


"I'm impressed by a metaphor given by the Canadian philosopher John Leslie. Suppose you are facing a firing squad. Fifty marksmen take aim but they all miss. If they hadn't all missed you wouldn't have survived to ponder the matter. But you wouldn't just leave it at that – you'd still be baffled, and would seek some further reason for your good fortune."


(As a consequence, just like Smolin, Weinberg, Tegmark and Linde, Rees is an advocate of the multiverse hypothesis, see below.)

The chances for a universe like ours that allows for living observers are thought to be incredibly small by others as well.

Leonard Susskind says in an interview:


"The discovery in string theory of this large landscape of solutions, of different vacuums, which describe very different physical environments, tipped the scales for me. At first, string theorists thought there were about a million solutions. Thinking about Weinberg's argument and about the non-zero cosmological constant, I used to go around asking my mathematician friends: are you sure it's only a million? They all assured me it was the best bet.

"But a million is not enough for anthropic explanations – the chances of one of the universes being suitable for life are still too small. When Joe Polchinski and Raphael Bousso wrote their paper in 2000 that revealed there are more like 1E500 vacuums in string theory, that to me was the tipping point."


(This number of vacuums, 1E500, each corresponding to a potential universe, is a 1 with 500 zeros behind it! Even a gigantic number like the aforementioned 1E229 (Smolin, probability of stars) does not come close to that.)
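
Again this can be checked with exact integer arithmetic:

    # 1E500 exceeds 1E229 by a factor of 1E271 -- itself beyond imagination
    assert 10**500 // 10**229 == 10**271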

There is one dissenter from the broad consensus among cosmologists, the physicist Victor Stenger, who gained considerable popularity as a spokesman for the atheistic cause.
He writes:


"I have made some estimates of the probability that a chance distribution of physical constants can produce a universe with properties sufficient that some form of life would have likely had sufficient time to evolve. In this study, I randomly varied the constants of physics (I assume the same laws of physics as exist in our universe, since I know no other) over a range of ten orders of magnitude around their existing values. For each resulting "toy" universe, I computed various quantities such as the size of atoms and the lifetimes of stars. I found that almost all combinations of physical constants lead to universes, albeit strange ones, that would live long enough for some type of complexity to form. […] Note that in well over half the universes, stars live at least a billion years."


However, the calculations are extremely flawed. As Stenger outlines elsewhere, they come from his program MonkeyGod. Here is an incisive refutation of the calculations by cosmologist Luke Barnes:

No Faith In MonkeyGod: A Fine-Tuned Critique of Victor Stenger (Part 2)

Furthermore, the calculations use only four parameters. Most notably, while many other constants are absent too, the calculations lack the constants that determine the behavior of the universe as a whole, for example the cosmological constant and the ‘primordial ripple’ constant Q, which critically determines the formation of galaxies (it describes the minute density contrasts in the early universe, which have the unnaturally small numerical value of 1E-5, see Susskind’s comment).

Here is what physicist Stephen Barr writes about the cosmological constant:


"In the equations that govern the gravitational force (Einstein's equations), there are two numbers. One of them is called Newton's constant, and is conventionally presented by the symbol GN. It says how strong gravity is. The other number is called the cosmological constant and is conventionally represented by the Greek letter lambda. The cosmological constant tells how much gravitational pull is exerted by "empty space." (This may sound absurd, but in quantum theory empty space is not as empty as it seems – it seethes and bubbles with "quantum fluctuations.") As we saw in the chapters on the Big Bang, the value of the cosmological constant is important to how the universe as a whole behaves. In discussing the size of the cosmological constant we shall use what are "natural units" for gravity, in which Newton's constant is exactly 1. It has long been known that the cosmological constant (when expressed in these natural units) is less than about 1E-120. In decimal form this would be written
0.00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001.
This is an amazingly small number. It is so small that physicists have long assumed that the cosmological constant is really exactly zero. But it is hard to tell whether a physical quantity is exactly zero, or just too small to be measured with available techniques. Recently, there have been astrophysical measurements that seem to imply that the cosmological constant is not exactly zero, but rather is a number about 1E-120.

"In either case, whether the cosmological constant is exactly zero or just fantastically small, physicists are confronted by a very deep puzzle. In physics, if a number is either exactly zero or extremely small there is usually a physical reason for it. For example, the mass of the photon is believed to be exactly zero; that is understood to be the consequence of a fundamental symmetry of the laws of physics called "electromagnetic gauge invariance". So far, no one has been able to find the physical reason why the cosmological constant is small or zero. This failure is the so-called "cosmological constant problem," and is considered by many scientists to be the deepest unsolved problem in physics. […]

"It turns out to be a very fortunate thing for us that the cosmological constant is so small. If it were not, the universe would not have been able to have a nice steady existence for the billions of years required for life to evolve.

"One has to consider two cases, because the cosmological constant is allowed by mathematics to be either a positive or a negative quantity. Suppose, first, that the cosmological constant had been negative and equal to some number that was not particularly small or particularly large, say -1. Then the universe would have gone through its entire life cycle of expansion and collapse in the incredibly short time of 1E-43 seconds. That is, the universe would only have lasted a ten-millionth of a billionth of a billionth of a billionth of a billionth of a second. This very short time is called the "Planck time," and is a fundamental length of time in physics. In essence, the Planck time is the shortest period of time that any physical process can happen in and still be described in terms of our usual notions of space and time.

"If we suppose instead that the cosmological constant had been negative and equal to about minus one-millionth (i.e., -0.000001), then the universe would have lasted for a thousand Planck times, namely about a ten-thousandth of a billionth of a billionth of a billionth of a billionth of a second. Not a great improvement from our point of view. If the cosmological constant is negative, and the universe is to last for the several billion years required for life to appear, then the magnitude of the cosmological constant has to be less than about 1E-120, the terrifically small number mentioned earlier.

"Now suppose that the cosmological constant is a positive number. If it had been positive and not too large or small, say about + 1, then the universe would have undergone an incredible "exponential" expansion, in which it doubled in size every ten-millionth of a billionth of a billionth of a billionth of a billionth of a second – that is, in every Planck time. The universe would have lasted forever, but always expanding ferociously. Even if the cosmological constant had been as small as +1E-48, the universe would have expanded so fast that it would have doubled in size in the time it takes an electron in an atom to orbit the atomic nucleus once. In such a situation, even atoms would be ripped apart by the expansion of the universe. Even if the cosmological constant had the much smaller value +1E-80, the universe would have expanded so fast that it would have doubled in size every thousandth of a second or so, which would be so fast that your body would be ripped apart by the expansion. If the universe was to have a sufficiently gradual expansion over billions of years to allow life to evolve, then the cosmological constant had to be less than or about + 1E-120. In order for life to be possible, then, it appears that the cosmological constant, whether it is positive or negative, must be extremely close to zero – in fact, it must be zero to at least 120 decimal places. This is one of the most precise fine-tunings in all of physics."


From: Modern Physics and Ancient Faith, p. 129 f.

(My addendum: it has now been shown that the cosmological constant, while being so small, is a positive number.)
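
The scaling behind Barr's examples can be sketched with the textbook relation for a vacuum-dominated (de Sitter) universe, whose doubling time in Planck units is ln 2 divided by the square root of lambda/3. This formula is my assumption for illustration – Barr does not spell out his calculation – and the snippet reproduces only the rough orders of magnitude of his examples:

    import math

    def doubling_time_planck_units(lam):
        # Doubling time of a vacuum-dominated universe, in Planck times,
        # for a positive cosmological constant lam in natural units
        # (assumed relation: t_double = ln(2) / sqrt(lam / 3)).
        return math.log(2) / math.sqrt(lam / 3)

    for lam in (1.0, 1e-48, 1e-80, 1e-120):
        print(lam, doubling_time_planck_units(lam))
    # lam = 1 doubles the universe about every Planck time; lam = 1e-80
    # about every thousandth of a second; only near the measured 1e-120
    # does the doubling time stretch to ~1e60 Planck times, i.e. billions
    # of years -- slow enough for stars, planets and life.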

For a change I have now quoted a theistic physicist, but Susskind would agree. He writes in The Cosmic Landscape, p. 88:


"When we combine the theory of elementary particles with the theory of gravity, we discover the horror of a cosmological constant big enough to not only destroy galaxies, stars, and planets but also atoms, and even protons and neutrons – unless. Unless what? Unless the various bosons, fermions, masses, and coupling constants that go into calculating the vacuum energy conspire to cancel the first 119 decimal places [of the cosmological constant]. But what natural mechanism could ever account for such an unlikely state of affairs? Are the Laws of Physics balanced on an incredibly sharp knife-edge, and if so, why? Those are the big questions."


Here Susskind draws attention to the fact that the fundamental problem of the cosmological constant lies not just in its being a small number. As Luke Barnes explains in his article, on p. 34 f.:


"Quantum field theory allows us to calculate a number of contributions to the total dark energy *) from matter fields in the universe. Each of these contributions turns out to be 1E120 times larger than the total. […] The fine-tuning problem is that these different independent contributions, including perhaps some we don't know about, manage to cancel each other to such an alarming, life-permitting degree."


*) the term 'dark energy' is here used for the cosmological constant

To return to Stenger: it is now clear from just this one example of the cosmological constant why Stenger’s calculations – apart from all the problems with them pointed out by Luke Barnes in the link above – are simply not credible: they leave out the cosmological constant and other parameters that determine the behavior of universes as a whole (Smolin does use the cosmological constant in his calculations, see above). Yet Stenger pretends that his calculations are relevant to the behavior of entire universes. He reiterates this fallacy in his book God: The Failed Hypothesis (p. 148), where he claims:


"Only four parameters are needed to specify the broad features of the universe as it exists today: the masses of the electron and the proton and the current strengths of the electromagnetic and strong interactions. (The strength of gravity enters through the proton mass, by convention.) "


(Even in this unduly limited array Stenger conveniently ‘forgets’ a closely related parameter, the mass of the neutron. Its free variation relative to the masses of proton and electron would immensely reduce the probability of stable nuclei; for the reasons, see Smolin above.)

Stenger’s calculations are thus, falsely, extremely selective and limited with respect to physical constants.

In God: The Failed Hypothesis, Stenger does address the cosmological constant – knowing full well that it is a problematic parameter – but he does not include it in any probability calculations. Yet his assertion that the problem would be solved if the cosmological constant were exactly zero is questionable; see above for the problem of mutual cancellation of contributions to it. He further claims that the original calculations showing the constant to be non-zero were incomplete, but he quotes another book by himself as the only source for that assertion – obviously it is he who is heavily biased in his interest to make the problem go away. Clearly, hordes of cosmologists must already have pored over both measurements and calculations and found no mistakes, so the idea that precisely Stenger may be right just because he ‘must be’ is nothing more than wishful thinking. Also, WMAP data measuring the cosmic microwave background strongly indicate that a positive cosmological constant, rather than the so-called quintessence preferred by Stenger, is indeed the best candidate for the explanation of the observed acceleration of the universe’s expansion (Summary, Study). (Contrary to Stenger’s claims, quintessence would have to be fine-tuned as well.)

Furthermore, in the book Stenger misrepresents a research article, in support of his views. He claims that the article Why the Universe Is Just So by Craig Hogan comes to a similar conclusion as an article by Anthony Aguirre that finds other cosmologies with other parameters in which life might arise (we will come to that article later). This is plainly not true. Hogan’s article asserts none of that; it is simply concerned with the question of which parameters might be fixed in a Grand Unified Theory (for this, see also section 1.3.3.) and which ones might remain variable – and he is clearly in favor of ‘anthropic’ explanations, the type of explanation that Stenger so strongly dismisses.

Cosmologist Luke Barnes has also written a rather devastating critique of Stenger’s next book on the subject, The Fallacy of Fine-Tuning; it contains an updated refutation (see above) of Stenger’s "MonkeyGod" calculations as well (the paper is a rather technical read, but entirely worthwhile). The article contains impressive graphs that show just how dramatic the fine-tuning of some parameters is. I especially suggest taking a look at Figure 2 (top row) on page 22 of the article, with parameters and life-permitting criteria explained in detail on page 20. This figure is also part of an effective rebuttal of the common objection – also raised by Stenger – that it is in most cases fallacious, when considering fine-tuning, to vary only one parameter while holding all the rest constant, and that probability space would be considerably widened were several parameters allowed to co-vary.

Just like Michael Behe (Darwin’s Black Box), Victor Stenger contradicts mainstream science in an unjustified manner *), and both have in common that they twist or ignore scientific facts for the purpose of advancing their respective world views. In other words, the opinions of those who base their cosmological views on Stenger can be taken no more seriously than the opinions of those anti-evolutionists who base their views on the writings of Behe.

*) also with other cosmological opinions, such as his contention that the universe was initially in a state of chaotic high entropy, while mainstream science holds that the initial state of the universe at the Big Bang was one of extremely low entropy (extremely high degree of order).

There is another publication that uses only a limited number (three) of parameters to calculate the parameter space allowing for the existence of stable stars, and that arrives at a high probability (25% of parameter space):

Stars In Other Universes: Stellar structure with different fundamental constants

Not surprisingly, this paper has caused some atheists to once more question the fine-tuning argument. However, as Luke Barnes points out, the plot on which the high probability of stable stars with sustained nuclear fusion is based, Figure 5 in the paper, uses a logarithmic scale to depict parameter space. If, instead, we were to plot in normal (not log) space, then the parameter space allowing for the existence of stable stars would drop from 25% to the minuscule value of about 1E-13, i.e. 1 part in 10 trillion (!). Aptly enough, Barnes’ article is called "The Shrinking Quarter"… Furthermore, Barnes points out an issue with the limits of the plot. If, for example, we allow a possible range of the gravitational force up to where it is as strong as the strong force (as Smolin does), then the probability drops further, to 1E-42.
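
The log-versus-linear point is easy to illustrate with made-up numbers (the range below is hypothetical, chosen only to show the principle, and is not taken from either paper):

    import math

    # Hypothetical parameter ranging over [1e-40, 1]; suppose the
    # life-permitting window covers the bottom quarter of the LOG axis.
    lo, hi = 1e-40, 1.0
    log_lo, log_hi = math.log10(lo), math.log10(hi)        # -40 .. 0
    window_hi = 10 ** (log_lo + 0.25 * (log_hi - log_lo))  # = 1e-30

    log_fraction = 0.25                    # 25% of the logarithmic axis
    linear_fraction = (window_hi - lo) / (hi - lo)
    print(log_fraction, linear_fraction)   # 0.25 versus about 1e-30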

The author of the study also plainly concedes that his calculations are about conditions for existence of stable stars once formed, not about conditions under which star formation can occur in the first place:


"This paper has focused on stellar structure properties. An important related question (beyond the scope of this work) is whether or not stellar bodies can be readily made in universes with varying values of the constants. Even if the laws of physics allow for stellar objects to exist and actively burn nuclear fuel, there is no guarantee that such bodies will be produced in significant numbers. […] In future work, another issue to be considered is coupling the effects of alternate values of the fundamental constants to the cosmic expansion, big bang nucleosynthesis, and structure formation. […] In addition to energy sources (provided by stars), there will be additional constraints to provide the right mix of chemical elements [for life] (e.g., carbon in our universe) and a universal solvent (e.g., water). These additional requirements will place additional constraints on the allowed region(s) of parameter space."


(The requirements for chemistry do not enter Stenger’s equations either.)

One of these factors in star formation, cosmic expansion, is determined by the cosmological constant, and its negligible allowable parameter space will further constrain, to an incredible extent, the already minuscule allowable parameter space for star existence of 1E-13 (or 1E-42, as mentioned) – and this without even considering all the additional restrictions. In Smolin’s calculations of star formation, see above, that constant is included as well.

But even with the topic of star existence based on only three parameters, with the parameter space depicted on a logarithmic scale, and not considering chemical elements for life, there are problems. NatureNews reports (Smolin makes a similar point, see above, and accounts for it in his calculations):


"Martin Rees, a cosmologist at the University of Cambridge, and Britain's Astronomer Royal, says that we shouldn't be too surprised by the result, as other astronomers have shown that universes in which gravity is stronger could support stars — although they would have much shorter lives. "This would not be a propitious universe because there wouldn't be enough time for complex evolution," he adds, "and objects as big as us would be crushed by gravity." "


Note that two of Rees’ numbers in Just Six Numbers where he arrives at extremely low probabilities for our universe (causing him to cite the firing squad metaphor, see further above) are also the cosmological constant and the ‘primordial ripple’ constant Q, constants left out of this study on star existence (just as in Stenger). The problem of a third number in Rees’ book that describes the universe as a whole, the critical density, may be solved by inflation, as the author also points out – yet to accomplish this, it appears that inflation itself must be fine-tuned.

How acute the problem would be without inflation is described by Stephen Hawking in A Brief History of Time:


"We know that there has to have been a very close balance between the competing effect of explosive expansion and gravitational contraction which, at the very earliest epoch about which we can even pretend to speak (called the Planck time, 10E-43 sec. after the big bang), would have corresponded to the incredible degree of accuracy represented by a deviation in their ratio from unity by only one part in 1E60."


Notice the difference in approach: of the six parameters that Rees uses in Just Six Numbers, several, as cosmological parameters, refer to the behavior of the universe as a whole – whereas none of the four parameters that Stenger uses does. Obviously, the prize for believability goes to Rees, hands down.

The fine-tuning problem may actually become worse as even more variables are considered. More recently it has been reported that the ratio of dark matter to normal matter in the universe must also be just right:


"The fact that the ratio is so conducive to a life-bearing universe "looks like a tremendous coincidence", says Raphael Bousso at the University of California, Berkeley."


And then we have the initial conditions of the Big Bang. The famous mathematical physicist Roger Penrose *) writes as follows in The Emperor's New Mind about the extraordinarily low entropy (i.e., an extremely high degree of order) at the Big Bang (p. 445):


"This now tells us how precise the Creator's aim must have been: namely to an accuracy of one part in 1E1E123.

"This is an extraordinary figure. One could not possibly even write the number down in full, in the ordinary denary notation: it would be '1' followed by 1E123 successive '0's! Even if we were to write a '0' on each separate proton and on each separate neutron in the universe – and we could throw in all the other particles as well for good measure – we would fall far short of writing down the figure needed."


*) Smolin says about Penrose that no one has contributed more to our understanding and use of the theory of general relativity save Einstein himself, see The Trouble with Physics, p. 319
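
How unmanageable Penrose's double exponential is can be made concrete with exact integer arithmetic (using the quote's own estimate of about 1E80 protons and neutrons in the observable universe):

    # Writing out 10**(10**123) takes 10**123 zeros; even at one digit
    # per particle, the ~10**80 nucleons of the observable universe
    # fall short by a factor of 10**43.
    digits_needed = 10**123
    particles_available = 10**80
    print(digits_needed // particles_available)   # 10**43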

In summary, if a sufficient number of relevant parameters is taken into account, the experts in physics and cosmology agree that our universe appears exquisitely fine-tuned for life and extremely unlikely to have arisen by brute chance. There is no rational arguing around that, and none of the leading cosmologists even tries.

A minimal summary of relevant parameters, which is not necessarily exhaustive, is given here:

1. Extremely low entropy of the Big Bang
2. Inflation with just the right properties (requiring fine-tuning) to solve the critical density problem
3. Ratio of electrical force to gravity
4. Strength of strong interaction
5. Relative masses of electron, proton and neutron
6. Ratio dark matter to normal matter
7. ‘Ripple constant’ Q
8. Cosmological constant
9. Dimensionality of space
10. Existence of quantum mechanics and Pauli exclusion principle (preventing atoms from collapse)
11. Right chemistry (carbon or equivalent in an alien chemistry)

***

How about universes with other parameters that might harbor life? There is indeed a study of a universe without weak nuclear interactions, the so-called ‘weakless universe’, in which life might be possible. The story is quite fascinating and worth reading:

My other universe is a Porsche

In the weakless universe there are different mechanisms of star burning, and there is no radioactivity. However, as the study says:


"Chemistry in the Weakless Universe is virtually indistinguishable from that of our Universe. The only differences are the higher fraction of deuterium as hydrogen and the absence of atomic parity-violating interactions."


This is the important part: chemistry, on which life depends, would be basically the same. Yes, one of the four fundamental forces is removed entirely in the weakless universe, but it is exactly the one with the least consequences for life. Imagine on the other hand what would happen to chemistry if the strong nuclear interaction (which holds nuclei together) or electromagnetism (on which all chemistry depends) were changed – not to mention abolished!

Furthermore, the weakless universe is in general modeled after an already fine-tuned universe – ours, that is. In addition, as the authors explain in the paper, they had to ‘arbitrarily adjust’ parameters of the Standard Model of physics and cosmological parameters (e.g. the ratio of visible baryons to photons, the abundance of dark matter) to provide a close match to our universe, in order to ensure characteristics supportive of life. But this arbitrary adjustment of parameters is nothing other than – exactly, fine-tuning. Thus, not only is the weakless universe not a refutation of fine-tuning, it is in fact just another example thereof – a confirmation of the fine-tuning principle.

Therefore, the assertions of some (see also Stenger in God: The Failed Hypothesis, p. 164) that the weakless universe argues against fine-tuning are false. It is just a very special case of another universe where life might be possible.

The idea that our universe is the only one that could possibly sustain life would be a misunderstanding of the fine-tuning argument. The argument only says that any universe that could support life would be extremely unlikely to have arisen by a chance selection of physical parameters, and indeed the weakless universe – closely modeled after our own, but with judicious adjustments – shows that other life-supporting universes may be possible.

As the above link reads:


"According to the multiverse view, it is unlikely that ours is the only universe complex enough to support sentient beings. Martin Rees of the University of Cambridge goes further and believes there may be an archipelago of "islands" in the multiverse, havens for life dotted in a vast sea of uninhabitable universes."


Thus, even though there may be a good number of different universes that might be able to harbor life, the chance that any of these universes would arise from a random selection of physical parameters would still be negligible compared to the immense number of sterile universes.

Yet does the weakless universe really work? According to a critical analysis, Problems in a weakless universe, the lack of radioactivity would cause a lack of volcanism and of heating of the core of putative earthlike planets in this universe, which may impair the ability of such planets to maintain stable surface temperatures. Perhaps this would not even be the biggest problem; more detrimental might be the lack of oxygen production and dispersal during a type of supernova that is of secondary importance in our own universe but the only type there (oxygen abundance would be only about 1% of that in our universe). The idea that several generations of stars would eventually forge enough oxygen (the counterargument of the authors of the weakless universe) is debatable. Initial star formation is delayed by a factor of 100 compared to our universe, to between 10 and 100 billion years after the Big Bang (a hotter Big Bang is necessary for the required deuterium abundance), see Problems in a weakless universe. This may affect galaxy formation, and thus the interstellar medium, since in the meantime the universe expands at the same rate as ours. Negative effects on the fecundity of the interstellar medium might further arise from the smaller number of supernovae in the weakless universe, complicating star formation after the first generation of stars, so that, while second-generation stars might just be formed, multiple star generations could be out of reach.

However, having life strictly depend on oxygen may not be the most imaginative view. Molecules suitable for reproduction and catalysis of life (genetic material, proteins) could possibly develop from other organic chemistry that does not require oxygen. Another universal solvent as a replacement for water would have to be there as well. Water has a strong polarity, and not just reproductive and catalytic biomolecules but also cellular boundaries (membranes) depend on this polarity. A molecule like ammonia might be thought of as a potential substitute, even though it tends to form salts due to its chemical nature as a base. A habitable planet would then have to be cooler than Earth in order to contain liquid ammonia, or it would have to have great atmospheric pressure (lakes of ammonia are found on Titan, a moon of Saturn). Certainly, for us ammonia is poisonous, but for other living beings that developed on the molecule it might indeed be their ‘lifeblood’.

Other universes with features that supposedly could harbor life have been constructed by Aguirre. He found that in universes arising from a cold Big Bang, rather than the hot Big Bang of our own universe, a relaxation of the stringency of the requirements for certain cosmological parameters should be possible. For example, the curvature scale, which relates to the flatness problem regarding the critical density of the universe, can be relaxed from a value of about 1E29 to about 1E24. However, this is still huge compared to its ‘natural’ value of 1. As for another example, the cosmological constant, the article My other universe is a Porsche explains:


"After our hot big bang, the universe took tens of millions of years to cool to the point where matter could clump into stars. "But in the cold big bang universe, stars can begin to form within 100 years of the big bang," says Aguirre.

"He even modelled an extreme cold big bang universe where the cosmological constant was 1E17 times what it is in our universe. By rights, this strong repulsive force ought to fling matter apart, preventing the formation of galaxies. However, in the cold big bang universe, stars form so quickly that they are in place before this cosmological repulsion takes hold. "The stars then rush away from each other," says Aguirre. "It's a pretty dull universe with each star isolated in a vast ocean of space. Nevertheless, there is nothing to prevent such stars having planets and observers." "


How would it be possible that such a universe could have planets and observers? After all, in our universe with its hot Big Bang heavier elements are not there when the first stars are formed. Heavier elements only come later, born in first generation stars and then dispersed into a galactic interstellar medium from which second generation stars and planets can form. In Aguirre’s cold Big Bang, however, nucleosynthesis in the first few minutes of the universe’s existence can also forge heavier elements, so that the cosmic medium can start out with the same level of enrichment as gas in the hot Big Bang which has been processed by stars.

A factor of 1E17 times our cosmological constant sounds impressive, but the result is still an extremely small constant, about 1E-103 (1E17 times 1E-120), and thus fine-tuned to roughly that accuracy. None of the brutal effects that Stephen Barr describes (see above) for cosmological constants of 1E-80 and 1E-48 (which are very small too) occurs at this order of magnitude. Thus, even though somewhat 'relaxed', the cosmological constant would still be extraordinarily fine-tuned, just like the somewhat ‘relaxed’ curvature scale.

In summary, in Aguirre's cold Big Bang scenario
1) the fine-tuning of cosmological parameters would still be extreme
2) the fine-tuning requirements for parameters of the Standard Model of physics (governing matter and chemistry) would remain unchanged (they are not affected by cold vs. hot Big Bang, and are not varied in the study).

Also, the findings for the cold Big Bang do not invalidate the exceptional fine-tuning required in our hot Big Bang.


1.2. Addressing three common objections

At this point I should address three objections with regard to fine-tuning for life that atheists frequently raise.

Objection 1. How can you say that the universe appears fine-tuned for life? Most of it is completely inhospitable and hostile to life.

It should be obvious by now that cosmological fine-tuning holds in relation to the universe as a whole, and is not meant to address the question of why you cannot live on the sun or breathe on the moon. Of course sources of energy (stars) are needed to drive life and evolution, and of course you cannot live on them. Nor can you live in the, by necessity, frighteningly large stretches of empty space between them and the planets. So what is the point? Nobody would deny that the light bulb is an invention that greatly enhances modern life. But if you tried to close your hand around a light bulb that is turned on, you would badly burn it. Is the light bulb then "hostile to life"? Certainly not. This modest example indicates how utterly irrelevant the objection really is – one of those false arguments that appear to be brought forth and rehashed solely in order to avoid the deeper issues.

Objection 2. The fine-tuning for life is nonsense. We are adapted to the universe by evolution, not the universe is adapted to us.

Proponents of this argument cite life under extreme circumstances, like extremophiles that live in sulfuric environments found in a few places on Earth – whatever the circumstances, evolution will always find a way. The same, they argue, will hold for life when the physical constants are different: we are adapted to the universe, not the universe to us.

In light of the above explanations by physicists, it will be clear that this argument is utterly uninformed. How can you even think about the evolution of life when no elements heavier than hydrogen and helium are formed, when there is no time for evolution because stars burn far too fast, when there is no chemistry, and when not even atoms exist, due to wrong mass relationships between protons, neutrons and electrons or because atoms are torn apart by the too rapid expansion of a universe with a too large cosmological constant? The argument completely misses the point.

Objection 3. How can you say that the universe appears fine-tuned for life? Life is incredibly imperfect and filled with suffering.

It should be obvious by now that cosmological fine-tuning deals with the thoroughly astonishing fact that there is life in the universe at all, not with how (subjectively) perfect that life is. It is irrelevant to the fine-tuning argument whether, like me, you think that life by and large is wonderful despite its flaws and difficulties, or whether you find it terrible.


1.3. Proposals to explain apparent fine-tuning without design

1.3.1. Brute chance or brute fact


We have seen that evolution, the physical evolution of the universe and biological evolution, can only take place under exceedingly special and improbable conditions, which are given by the initial conditions of the universe and the laws of nature. This invalidates the idea that evolution is self-explanatory, and if one wants to avoid the cosmic design explanation for life, it forces a search for other explanations. Or does it?

There are those who argue: "We obviously are here, so there is nothing to be surprised about. Perhaps it is luck, but so what?" However, when it comes to ridiculously low probabilities (see also Susskind’s comment above), mere ‘brute chance’ is a hard thing to take seriously. Indeed, all the leading atheistic or agnostic cosmologists who bring the fine-tuning issue to the public's attention reject the idea of 'brute chance'.

Having by chance so many things aligned just right (see summary above) within relatively narrow limits, while the total bandwidth of possibilities is very wide (e.g. for the cosmological constant, the ‘ripple constant’ Q, the ratio of the masses of neutron, proton and electron, or the ratio of electrical force to gravity), would amount to far luckier chance than winning the lotto jackpot, where you ‘only’ need to get 6 simple numbers right, each drawn from a range spanning a single order of magnitude. Would you believe that it is ‘pure luck’ when one and the same person wins the lotto jackpot not just once, but several times? Of course you would not. You would, logically, conclude that the outcome was rigged. This refutes the ‘brute chance’ argument.
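To put rough numbers on the comparison – a minimal sketch only, using a standard 6-of-49 lottery and the often-cited estimate (an external figure, not from this article) that the cosmological constant is tuned to roughly one part in 10^120:

    from math import comb

    # Minimal sketch. Assumptions: a standard 6-of-49 lottery; the widely
    # cited ~1-in-10^120 tuning estimate for the cosmological constant
    # (an external figure, not taken from this article).
    p_jackpot = 1 / comb(49, 6)                  # one ticket, one draw
    print(f"one jackpot:    {p_jackpot:.1e}")    # ~7.2e-08
    print(f"three jackpots: {p_jackpot**3:.1e}") # ~3.7e-22
    print("cosm. constant: 1e-120")
    # Even three consecutive jackpots (~4e-22) are astronomically more
    # probable than hitting a 1-in-10^120 target by blind chance.

The gulf between these numbers is the whole point of the rigged-lottery comparison.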

It does so unless one takes a particularly hardheaded stance. What if it is claimed that life is not necessarily important in the grand scheme of things? All the arguments from probability evaporate if you dispense with the necessity for life, or even chemistry, or other structures; why should they matter? If at the Big Bang the constants had sorted themselves out randomly, and we were left with a universe with no galaxies, stars or life, we would still have something, even though that something might be viewed as unsatisfactory to our senses – but we would not be here to observe it anyway. In other words: why is life, our universe with life, more significant than whatever other something might be produced from different physical constants?

Indeed, if one takes the position that life is inconsequential, and if one claims that the specific universe that we find ourselves in is simply a brute fact, the whole argument falls apart. We are just a 'cosmic fluke', but let us enjoy it anyway and make the best out of it. Life is something that happened to us by brute chance, and we just seize the day.

However, given the huge improbability of life arising by chance, theists interpret the apparent fine-tuning of the laws of nature as strong support for the long tradition of philosophical argument and belief that life exists because it is here on purpose, being intended by God. From that point of view life has a natural explanation. Refuting such purpose would require showing that life is 'natural' also without the idea of God as its originator – a natural outcome of undesigned laws of nature.

The scenario of life as a chance 'cosmic fluke', as simple brute fact, on the other hand, would make life a highly 'unnatural' outcome of nature – something absurdly atypical among the vast space of possibilities of how things could be different, an anomaly. It is precisely this ‘unnaturalness’ which makes the position unsatisfying and rationally unconvincing, especially compared to the theistic position that life is natural because it is designed on purpose. Some other ‘natural’ explanation for life seems to be required if one does not want to adopt the theistic position. Non-theistic cosmologists who have studied fine-tuning generally agree, and view the multiverse (see below) as an explanatory necessity.


God as a brute fact?

There will be those who counter that while atheists may postulate the universe as brute fact, theists do the same with God. This claim does not hold. There is nothing of necessity about the properties the universe has; it could be different in countless ways, see the discussion of this point below. Yet as the source of all being, God must have certain properties by metaphysical necessity: e.g. the property of being absolutely simple, since not composed of parts (possible only for an immaterial being), the property of being pure act, and the identity of His essence with His existence; see for example Edward Feser's article
"Why is there anything at all? It's simple".

(For an overview of the author’s previous articles related to the issue, see Classical theism roundup; for the writing of Thomas Aquinas on divine simplicity, see the chapter in his Summa Theologica.)

Therefore, according to classical theistic philosophy God cannot simply be a brute fact; rather, He is the source of all facts that exist, the metaphysically necessary explanation of why there is something rather than nothing. Reading Feser's article will make it abundantly clear that in classical theism God is in no way conceived as a ‘superman’, somehow analogous to the symbolic depictions of God as a white-bearded man in the sky, or to a figure like Zeus. If He were, I would already be an atheist. If God were conceived as ‘superman’, Richard Dawkins’ argument in the central fourth chapter of The God Delusion, which implicitly assumes God to have something like a complex giant ‘brain’ (that would have to have evolved), would hold. But it does not. In fact, while there are substantial arguments in favor of atheism – which nonetheless I find to have considerably less weight than arguments in favor of theism – my experience from books and discussions strongly suggests that the philosophical position of many atheists is at least to some extent based on a misunderstanding of the classical concept of God in the three great monotheistic religions (see Feser's article). This misunderstanding leads to questions that seem logical on the surface yet are nonsensical upon deeper examination, like "who created God?". It also leads to the irrelevant statement that "believers are atheists towards all other gods, atheists go just one step further", envisioned as a ‘logical’ invitation to believers to do the same. The misunderstanding also results in the false notion that Jews, Muslims and Christians each believe in a different God, something that is not, and in principle cannot be, the case; the Catholic Church, for example, is quite clear about that.


1.3.2. Necessity or high probability of the laws of nature

What if the physical constants are immutably fixed against one another and simply could not be any other way? Or if they are not fixed, what if the specific physical constants of our universe are highly probable to occur since certain regions of parameter space are much more likely to be occupied than others? Then the emergence of life is either a necessary or a distinctly probable, natural outcome of how nature works, plain and simple. Yet such necessity or probability of the laws of nature cannot be logically sustained.

Yes, the specific values of the physical constants, which would have to include all the initial conditions of the universe, might be a necessary or highly probable outcome due to a unified basic system and/or a particular universe-creating mechanism that is associated with it. However, even if such necessity or probability of specific physical constants were true – and current knowledge of the physical world makes that seem extremely unlikely – the basic system could also be founded on principles other than those of general relativity and quantum mechanics (possibly unifiable into a theory of 'quantum gravity') which hold for our particular universe. The laws of nature could be based on any of an infinitude of other mathematically self-consistent systems imaginable, and specific universe-creating mechanisms that accompany them. Therefore, even if not necessarily within a framework combining general relativity and quantum mechanics, the laws of physics could, in fact, be different in infinitely many ways (see also
Stephen Barr’s remarks in this link). This cannot be logically disputed.

Thus, there cannot be either a necessity or a favorable probability of the laws of nature – unless one suggests that "the fabric of nothing" may only allow for certain frameworks of physical laws or law-generating mechanisms to arise. Yet then 'nothing' would have to have properties, which is philosophically and logically absurd. Nothing has no properties whatsoever – nothing is, in fact, nothing.

(The 'physical nothing' of empty space, the quantum vacuum, is not really nothing at all. For the current common confusion, see the article
"Of Nothing".)

In light of all of this, the question will always remain legitimate: why this particular universe, with its exceedingly special laws that allow for physical and biological evolution, for life – and not any other? Again, the issue is that 'any other' universe will not suffice, but only very select universes, given the extreme fine-tuning of laws of nature that is necessary for the existence of life.

The only way to circumvent the problem on a naturalistic basis, apart from assuming a multiverse (see below), would be by postulating that everything that can exist in fact does exist. Yet this would radically violate Occam’s razor, discussed at the end of this chapter.

Even when it comes to just the framework of general relativity and quantum mechanics, the idea that there might be unique laws of nature (they simply had to be this way) is not one that appears to be upheld by any leading physicists. As Lee Smolin writes about a theory that requires uniqueness of physical constants so that they are immutably fixed against each other (The Life of the Cosmos, p. 37, emphasis mine):


"For better or for worse, no such theory has ever been found. Nor is there any reason, besides faith, to hope that a consistent theory that was able to describe something like our world should be unique."


On p. 76 of the book he says:


"We have so far no evidence to support the conjecture that the requirement that the laws of nature be mathematically consistent, or agree with quantum theory and relativity, constrains significantly the possible masses of the elementary particles or the strengths of the different forces."


All the other cosmologists who now prefer the multiverse hypothesis (see below) appear to agree.


1.3.3. Life as we do not know it

But wait a minute, the atheist will say: perhaps ‘life as we know it’ may be extremely unlikely, but could it not be that many different forms of life were possible that we could not even dream of, and thus that life in any form is not unlikely at all? Terms like ‘carbon chauvinism’ and ‘argument from incredulity’ then usually come up in these argumentative settings.

One who studies the fine-tuning issue sufficiently will know that not just life as we know it, but any chemistry requires highly specialized laws of nature (see the fortunate coincidences discussed by physicists and cosmologists above). Yet any kind of material life probably would require complexity of matter, which in turn requires chemistry.

With any significant detuning of the physical constants, however, chemistry of any kind would be impossible. Just hydrogen, and possibly deuterium and helium (or their equivalents), would exist – and no chemistry. Even mere atoms can only exist if the cosmological constant is not detuned to a value that drives an expansion rate which rips atoms apart, and if nuclei are stable, which itself requires a great deal of fine-tuning.

Therefore, in a universe with randomly different physical constants the chances for any chemistry and any material complexity – and thus any kind of life, not just life as we know it – would be extraordinarily low (the above examples of the weakless universe and cold Big Bang cosmology are obviously not random; they are deviations modeled from our fine-tuned universe and would feature life as we know it, or close relatives of it). Our universe with its specific laws of nature would be a small oasis within a vast desert of a humongous number of sterile, non-complex universes where no chemistry takes place.

If, on the other hand, there were a completely different kind of life in a completely different kind of universe, with laws of nature that do not at all resemble ours (and which also produce entirely different particles), it is still the most rational assumption that this life would likewise be based on some material complexity, which in turn would be based on some sort of alien chemistry. Furthermore, extrapolating from the situation surrounding our own laws of nature, we would rationally expect that here too a slight detuning of the physical constants would make that alien chemistry impossible.

Thus, this would result in another small oasis within a vast desert of a humongous number of sterile, non-complex universes where no chemistry takes place.

Overall, then, even if some entirely different form of life might be possible somewhere else under completely different conditions, we should reasonably expect the probability for any life, known or unknown, to remain extraordinarily low.

The argument might be brought up that "life could theoretically occur in ways we would never imagine, or in ways we would not even think of as life at all". Life on neutron stars based on "nuclear molecules", or life within plasma, among others, have been proposed – yes, weird ideas do exist. But how do you get reproduction based on "genetic" information, mutation, an environment for natural selection, and a communicative network of "cells" (whatever that may mean under those circumstances) – and thus the ability for evolution of intelligent life – out of any of this? With which tools would this life explore the world? The idea of material intelligent life without any chemical complexity leads one into regions of thought that are not seriously debatable anymore, and that have no basis in our knowledge from science. Imaginative thinking is one thing, wild and baseless science fiction another.

Thus, upon closer analysis the argument of ‘life as we do not know it’ fails as well.


Virtual life

There is a life form that can evolve on something other than the substrate of organic chemistry: virtual life developing by means of certain computer programs. Structures can evolve in these programs, they can learn and they also can acquire some sort of 'intelligence'. Yet the processes of evolution which these programs may exhibit and the 'intelligence' which this virtual life may evolve are initiated by humans, even if the results may be surprising and unforeseen. Also, microchips and computers on which this 'life' is dependent are a human invention and their emergence cannot be part of the physical evolution of the universe by any natural laws that we know of. These devices are already of a tremendous, ordered complexity which necessarily stands at the very beginning of virtual life, whereas organic life on Earth began from simple elements, probably a self-replicating molecule. It follows that the existence of human-induced virtual life does not provide valid support for an argument that complex life could gradually emerge from non-life without chemistry.


1.3.4. The multiverse

What if our universe is not the only one? Enter the multiverse. The multiverse would be a conglomerate of not just a few, but of trillions of trillions of trillions of universes other than our own. All these universes would have a common origin in a multiverse generator. For example, in the model of eternal inflation, all the single universes forming the multiverse are just "bubbles" (or domains) from a mother universe. A common assumption in the multiverse model is that those trillions of trillions of trillions of universes each differ from one another by slight variations of physical constants. If all these slight variations are actualized, at some point there would also have to arise, by chance, a universe which features our physical constants. Thus the multiverse would make our particular combination of physical constants, unlikely as it is on its own, nonetheless a statistically inevitable outcome.
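The statistics behind that claim of inevitability can be made explicit with a minimal sketch; the per-universe probability p and the ensemble sizes N below are pure placeholders, not measured or derived values:

    import math

    # Chance that at least one of N independent universes lands in the
    # life-permitting region, each with probability p. Both numbers are
    # placeholders for illustration only.
    def p_at_least_one(p, N):
        return 1 - math.exp(N * math.log1p(-p))  # stable form of 1 - (1-p)**N

    p = 1e-36                        # assumed chance per universe
    for N in (1e30, 1e36, 1e42):     # assumed numbers of universes
        print(f"N = {N:.0e}: {p_at_least_one(p, N):.3f}")
    # Prints 0.000, 0.632, 1.000: only once N*p >> 1 does a universe like
    # ours become 'statistically inevitable'.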

The multiverse idea may seem far-fetched. After all, besides our own universe, there is not even one other universe that we can observe – and certainly not trillions of trillions of trillions of others. Yet the actual existence of such a multiverse would be the only rationally viable explanation for the highly unlikely combination of physical constants found in our universe without requiring specific design of our particular universe. As we have seen, the other explanations:

a) brute chance or brute fact
b) necessity or high probability of the laws of nature
c) life as we do not know it

all fail.

This may be a main reason why the multiverse idea is quite popular among today’s leading physicists and cosmologists. Another reason for the postulation of a multiverse may be a purely scientific one, resulting from the combination of inflation and string theory (explained elsewhere on the web or in books, e.g. The Cosmic Landscape by Susskind). This scientific reason also leads some theistic physicists like Stephen Barr and Don Page to embrace the idea. Yet that scientific reason alone would hardly have gained so much traction did it not seem to solve the design problem of our universe. After all, which scientist would want to postulate a scientific hypothesis that incorporates an almost infinite multitude of intrinsically unobservable domains of reality, except out of desperation to explain away what he or she actually observes?

This leads us to the question:

Is the multiverse hypothesis science?

In general, even when they uphold the probable existence of the multiverse, cosmologists concede that we will never be able to directly observe other universes outside our own. The reason for this is the particle horizon: the maximum distance from which particles (including particles carrying information) could have traveled to the observer within the age of the universe. It marks out the portion of the universe that we could conceivably have observed up to the present day. Any other universe would lie outside this particle horizon. At this point I recommend that the reader study the following presentation by the eminent cosmologist George Ellis,


The multiverse, ultimate causation and God

as to why the multiverse hypothesis should not be considered science. He points out that, due to the particle horizon, the multiverse lies beyond accessibility to observation and test of hypothesis – the very cornerstones of science that have been responsible for the incredible success of this endeavor of acquiring knowledge. Therefore it is more philosophy than science proper.
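For a feel for the scale of the barrier Ellis describes, here is a rough numerical sketch of the comoving particle horizon in a flat Lambda-CDM universe; the cosmological parameters are standard round values, assumed here for illustration rather than taken from this article:

    import numpy as np

    # Rough estimate of the comoving particle horizon. Parameter values
    # are standard round figures, assumed for illustration.
    H0 = 67.7                         # Hubble constant, km/s/Mpc
    Om, Orad, OL = 0.31, 9e-5, 0.69   # matter, radiation, dark energy
    c = 299792.458                    # speed of light, km/s

    def H(z):
        return H0 * np.sqrt(Orad*(1+z)**4 + Om*(1+z)**3 + OL)

    z = np.logspace(-4, 8, 200000)    # integrate back toward the Big Bang
    y = c / H(z)
    D = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z))   # trapezoid rule, Mpc
    print(f"~{D:.0f} Mpc, i.e. ~{D*3.2616e6/1e9:.0f} billion light-years")
    # Roughly 14,000 Mpc (~46 billion light-years): no signal from beyond
    # this distance could ever have reached us.

Everything beyond that boundary – let alone any other universe – is observationally out of reach in principle, not merely in practice.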

Certainly, some hope that we might obtain indirect observational evidence for the multiverse, such as from imprints in the cosmic microwave background (CMB) map, and perhaps such indirect evidence will one day indicate the existence of some multiverse. While such potential evidence, were it to materialize, is sometimes portrayed as sufficient, this is not correct. The demonstration of the mere existence of a multiverse would not suffice. Scientifically adequate evidence would require demonstrating the existence of a multiverse that fulfills the requirements needed to explain the fine-tuning, i.e. sufficient random variation of the physical parameters between the different universes or universe domains. Yet such evidence could only be established by direct observation, which is impossible. To think that some imprint in the microwave background might indirectly tell us not just about an incredible multitude of other universes, but also about the exact physical constants of each single one of them (necessary to make the multiverse work as an explanation for the anthropic coincidences), seems quite preposterous, far beyond realistic scientific expectations. And even if one day some scientist were to claim to be able to extract such sophisticated information from the CMB map (or from other potential future maps, like imprints from neutrino radiation or gravitational waves), this claim would probably have to involve a huge amount of theoretical modeling that, again, would lose any reasonable contact with observation and experiment.

Of course, the history of science has shown us that there are always surprises with respect to what science can demonstrate. Things often thought impossible to detect, such as black holes or neutrinos, have eventually been observed anyway. Yet it appears that with respect to observation beyond the particle horizon, necessary for a sufficiently detailed observational study of a putative multiverse, science reaches not just practical limits (which have a tendency to be overcome eventually) but limits of principle.

As Lee Smolin writes in Life of the Cosmos, p. 78 (emphasis mine):


"A fantastic consequence of general relativity theory [is] that the part of the universe that we will ever be able to see does not include the whole of it. The part of reality we can in principle ever see has boundaries. And there are necessarily regions of space and time beyond those boundaries."


And that holds even just for our universe, not to speak of other universes beyond that.

Here a fitting surprise with respect to what science can show would be if, in the future, we could narrow the particle horizon by causing particles to travel towards us at a speed much greater than the speed of light – a sort of forced acceleration towards us by ‘fishing net’, as it were. But I would firmly put such a scenario in the science-fiction category. Science simply cannot overcome physical limits. Just as it has not been and never will be able to build a perpetuum mobile (impossible due to the second law of thermodynamics), it also cannot transcend physical laws in matters of observation. So yes, science has been shown to progress in ways that are totally unexpected, but in the entire history of science no physical laws have ever been transcended. This has to be a crucial consideration in a realistic and rational assessment of what science can and cannot do.

Yet perhaps eventually we will arrive at a theory that is considered the correct fundamental theory of physics – it will account for all observations in cosmology and particle physics, it will be well-tested by experiment, will lead to many correct predictions, and will have a tight structure. The equations of that future theory may imply that the universe has a multiverse structure with each of the domains having physical parameters with different values. One might suggest that this may constitute a ‘theoretical proof’ that the multiverse is correct. However, in order to qualify as science in the usual sense there would still need to be observational proof that the multiverse, which is expressed in these equations as mere potentiality, is in fact actualized (confusing potentialities with actualities is a grave mistake in science). Such an observational proof is not possible.

All in all, I therefore have to agree with George Ellis that the multiverse hypothesis is not science, but philosophy. It is philosophy dressed up in scientific language. Certainly, it may be called a hypothesis from science, but it hardly qualifies as science proper. The multiverse hypothesis thus shows no inherent advantage over the God hypothesis in being more ‘scientific’ or ‘accessible to scientific investigation’.

At this point the reader may ask: if only observable entities count as science, are not cosmological fine-tuning claims scientifically problematic as well? After all, we do not observe other universes with different parameters. Yet to argue for fine-tuning we do not need to exhibit another universe with other parameters, since from observations of our own universe we know what would go wrong if the parameters were different (see also this link, which explains the issue very well), even if our universe is indeed the only one that exists. Here the calculations, which are entirely uncontroversial in the scientific community, are employed only to show constraints on the physical laws governing an already observed entity, our universe; they are not predicting new entities which would then have to be observationally confirmed as actually existing. This is fundamentally different from the required observation of actually existing entities with other parameters in their laws of nature, if as such they were to serve as scientific support for naturalistic explanations of fine-tuning, such as a multiverse. The same requirement of observational confirmation of new hypothetical physics would extend to actually existing oscillations of the universe (with their re-set of entropy each time), or to universe generation from quantum fields, if these were to serve as explanations for the origin of our universe according to scientific standards (see below).

***

Yet while the multiverse may not be science, if it would solve the design problem it would nonetheless be a serious philosophical alternative to the God hypothesis. Does it succeed?


1.4. The multiverse does not solve the design problem

The multiverse concept solves the fine-tuning problem of our particular universe, but its proponents generally overlook that it does not really solve the overall design problem.

In fact, it creates another fine-tuning problem. In order to solve the fine-tuning problem of our particular kind of universe, the multiverse would have to make the occurrence of that universe by a pure chance process statistically inevitable, against all the overwhelming odds. This would require a truly random distribution of physical constants, in extremely fine grades, among the members (universes or domains) of the multiverse, so as to allow for a sufficient variety of physical constants between them (trillions times trillions times trillions etc. of variations), guaranteeing that ours would arise by chance out of a huge possible parameter space of physical constants (see the sketch below). This could only be achieved by careful design of the underlying many-universe generator. Thus, instead of solving the design problem, the multiverse theory just pushes it back one step.
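How such an ensemble size compounds can be illustrated with hypothetical numbers – six independent constants, each needing to land in a window of one part in a million of its possible range (both figures invented purely for illustration):

    # Illustration only: assumed 6 independent constants, each with a
    # life-permitting window of 1e-6 of its possible range.
    n_constants = 6
    window = 1e-6
    combinations = (1 / window) ** n_constants
    print(f"~{combinations:.0e} universes to sample every combination")
    # ~1e+36 -- 'trillions times trillions times trillions' -- and some of
    # the windows discussed above are far narrower than one in a million.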

As Stephen Barr concludes in his book Modern Physics and Ancient Faith (p.154):


"having laws that lead to the existence of domains of a sufficiently rich variety to make life inevitable would itself qualify as an anthropic coincidence. There seems to be no escape. Every way of explaining anthropic coincidences scientifically involves assuming the universe has some sort of very special characteristics that can be thought of as constituting in themselves another set of anthropic coincidences."


(Here the term ‘universe’ includes also the multiverse as an overarching ensemble.)

Robin Collins makes a similar point and explains the very special requirements for a multiverse that would explain the random appearance of our particular universe:

Universe or Multiverse? A Theistic Perspective
(Heading "Multiverse Generator Needs Design".)

George Ellis agrees. In his lecture linked to above he states:


"All the same anthropic issues arise as for a single universe: Why this multiverse, and not another one?"


The well-known cosmologist Paul Davies, who cannot exactly be called ‘religious’, appears to agree as well, in his essay
Taking Science on Faith:


"The multiverse theory is increasingly popular, but it doesn’t so much explain the laws of physics as dodge the whole issue. There has to be a physical mechanism to make all those universes and bestow bylaws on them. This process will require its own laws, or meta-laws. Where do they come from? The problem has simply been shifted up a level from the laws of the universe to the meta-laws of the multiverse."


I do not know, however, how to make much sense of his own alternative, postulated in order to avoid the involvement of God:


"In other words, the laws should have an explanation from within the universe and not involve appealing to an external agency. The specifics of that explanation are a matter for future research."


This may move too close to the idea of an ‘intrinsic necessity’ of the laws of nature, something that cannot be considered valid, as I pointed out. By the way, in his essay
A Brief History of the Multiverse, Davies raises concerns about the scientific observability of the multiverse similar to those of George Ellis, along with some additional critical thoughts on the multiverse issue.

The multiverse design problem is also not adequately addressed by the hypothesis of Cosmological Natural Selection, see chapter 3 below.


1.5. Conclusion

All the naturalistic arguments to explain the apparent fine-tuning of our universe fail:

a) brute chance or brute fact
b) necessity or high probability of the laws of nature
c) life as we do not know it

The only naturalistic hypothesis that might successfully explain the apparent fine-tuning of our particular universe is the multiverse hypothesis. However, this hypothesis too fails to answer the overall design problem, because it introduces another fine-tuning problem.

Thus, I conclude that by far the most rational explanation for the apparent design of the universe is the simplest one: it is actually designed. The designer would most reasonably be God, since with His omnipotence and His infinite knowledge He could plan and execute the Big Bang with the exactly predicted outcome. Alternatively, God worked with a multiverse as an intermediate, which also would have to be carefully designed in order to allow for the generation of, among others, our particular universe.

Why God? Why not assume a naturalistic design for our universe? Because that would require the designer(s), as physical beings, to have evolved, which in turn would only be possible in a fine-tuned universe of their own. This would throw us back to the inadequate naturalistic non-design explanations for that particular fine-tuned universe. God, on the other hand – in the common philosophical concept – did not evolve. He is the necessary non-physical being that begins everything, including evolution.


Who designed the designer?

This is the obvious question that arises. In another form it is expressed in the atheist’s objection: Positing God is not a solution: who created God? This can be answered as follows:

God is the eternal, ultimate cause of existence. Something must be the first principle. For the believer God is the first principle, just like for the naturalist eternal matter or eternal fields (e.g. a quantum vacuum) must be the first principle from which everything arises. Asking the believer who created God makes just as little sense as asking a naturalist where matter or fields came from. They always were.

More specifically, the argument has been answered above in "God as a brute fact" (section 1.3.1.). Edward Feser’s article linked to there also points out why from a classical philosophical point of view God is the only entity that can serve as the ultimate explanation; eternal matter and eternal fields cannot. The classical argument, as presented in the article, also incisively deals with the common objection that God is too complex an assumption to begin with.

If, on the other hand, a naturalist were to hold that ‘nothing’ could be a first principle, that would make no logical sense. Matter and fields cannot arise from nothing, since nothing has no properties, and thus cannot produce anything (the pseudo-philosophical and pseudo-scientific claims made by Victor Stenger, Lawrence Krauss and others notwithstanding). Nothing is, indeed, nothing. The ‘physical nothing’ of the quantum vacuum is of course not nothing, but a field. Something must always have been there, be it eternal matter, eternal fields or an eternal God.


A God of the Gaps?

Inevitably there will be those who claim that the design argument from fine-tuning is just another ‘God-of-the-Gaps’ argument. This is false, for two reasons:

a) The ‘God of the Gaps’ usually refers to things that may be within the domain of what science can investigate. It refers to things that science does not yet know, not to things that science in principle cannot know. The design of nature itself cannot be investigated by science for logical reasons (see ‘necessity of the laws of nature’, discussed above) and for observational reasons, due to the particle horizon.

b) More importantly, however, the ‘God-of-the-Gaps’ argument usually also refers to design within nature, not to the design of nature itself. In the former, God 'fills in gaps' where nature is thought not to be self-sufficient enough to achieve certain levels of design. The design argument from fine-tuning, on the other hand, does not seek to fill any explanatory gaps with regard to the workings of the laws of nature once these are in place. This is a fundamental difference from the ideas of biological Intelligent Design.

The design argument from fine-tuning places God in an encompassing way above nature, as the designer of nature itself. Of course, there will be those who say that this amounts to a retreat of God even further, to outside of nature. But God is in fact outside nature – He created it.

As Ken Miller, one of the most prominent defenders of evolution today (he was also one of the star witnesses in the Dover trial against Intelligent Design),
writes:


"The categorical mistake of the atheist is to assume that God is natural, and therefore within the realm of science to investigate and test. By making God an ordinary part of the natural world, and failing to find Him there, they conclude that He does not exist. But God is not and cannot be part of nature. God is the reason for nature, the explanation of why things are. He is the answer to existence, not part of existence itself."


God is thus not just another force ‘in competition with nature’. As Stephen Barr beautifully writes in the last paragraph of his essay
The Miracle of Evolution about biological evolution (this can of course be extrapolated to the physical evolution of the universe as well):


"If biology remains only biology, it is not to be feared. Much of the fear that does exist is rooted in the notion that God is in competition with nature, so that the more we attribute to one the less we can attribute to the other. That is false. The greater the powers and potentialities in nature, the more magnificent must be nature’s far-sighted Author, that God whose ‘ways are unsearchable’ and who ‘reaches from end to end ordering all things mightily.’ Richard Dawkins famously called the universe ‘a blind watchmaker.’ If it is, it is miracle enough for anyone; for it is incomparably greater to design a watchmaker than a watch. We need not pit evolution against design, if we recognize that evolution is part of God’s design."


The two scenarios, an atheistic world and a world created by God, are indistinguishable from a strictly phenomenological point of view, putative rare miracles with physical manifestations aside. Thus, science does not automatically favor metaphysical naturalism, i.e. the view that nature is all there is. Certainly, the atheist will say: science shows that nature is self-sufficient; therefore, in extrapolation, if we do not need any outside explanation for what we observe in nature, we also do not need any outside explanation of nature itself – a wider nature generated nature (our universe). However, this does not take into account the possible (and by the theist assumed) scenario that God created our self-sufficient and self-developing nature. Thus, the atheist position just mentioned is not a straightforward logical conclusion, as in "A results in B, and given B, C automatically follows." Rather, it is a jump between categories. Not that any such jump is necessarily always wrong, but it should be recognized as such – and since here this kind of jump needs to be made, the argument offers no advantage, not even one of simplicity, over the theistic argument (for the simplicity of God, see above). On the contrary, the theistic argument is simpler, because it also offers an explanation of why the laws that allow for the existence of a self-developing nature are the way they are.

***

Even if the particle horizon problem did not exist and a multiverse could be demonstrated that is sufficient in variety as to make our particular universe statistically inevitable, the design argument from fine-tuning would still stand, as discussed, since the multiverse would have to be fine-tuned as well. Likewise, even if – though, again, this appears extremely unlikely – it could be shown in the future that a unified system of general relativity and quantum mechanics would yield a single unique solution for all physical constants and thus all of their values would result from necessity, it would not take away the argument that there could have been other unified systems instead (see section 1.3.2.). Also in this case the design argument from fine-tuning would still stand. Thus, as opposed to the typical ‘God-of-the-Gaps’ argument, the design argument from fine-tuning is immune against the charge that it is just another argument from ignorance – possibly doomed to collapse as science progresses.

By the way, if after these explanations someone would still unreasonably insist that God as author of nature itself remains a God-of-the-Gaps, my retort would be simple: under this mode of thinking, a naturalistic origin of the universe, which will always be scientifically unprovable *), would be the greatest Non-God-of-the-Gaps.

*) due to the particle horizon and the fact that we cannot observe beyond the Big Bang (hypothetical calculations without any possibility of accompanying observations of putative explanatory entities are not science, see above). Even if there were no particle horizon and a wider universe could be observed from which our universe was born, also this universe could never be proven to have always existed on its own. Not only that, an assumption of a naturalistic origin of the universe would require faith in characteristics of matter and fields that science has not shown to exist, see chapter 2 below.


1.6. The Rare Earth hypothesis: a design argument?

The argument for the apparent fine-tuning of the laws of nature is often conflated by religious writers with the
Rare Earth hypothesis. The hypothesis may make a rather good case that there are probably not trillions of Earth-like planets in our universe and that such planets are bound to be quite rare; but at least one Earth, or a few, may be quite likely without special 'design'. After all, there are about 300 billion galaxies in the universe, each containing about 300 billion stars, and possibly there are as many or more planets. So the chances that at least one planet, possibly more, is by default 'just right' for the development of complex, intelligent life might be quite good (see the rough arithmetic below). I therefore reject the use of the hypothesis as a design argument.
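The arithmetic behind that judgment can be spelled out in a few lines; the galaxy and star counts are the round figures used above, while the rarity of Earth-like planets is a pure placeholder:

    # The paragraph's arithmetic, spelled out. Galaxy/star counts are the
    # article's round figures; p_earthlike is a hypothetical placeholder.
    galaxies = 3e11
    stars_per_galaxy = 3e11
    candidates = galaxies * stars_per_galaxy   # ~9e22 star systems
    p_earthlike = 1e-22                        # assumed rarity per system
    print(f"expected Earth-like planets: {candidates * p_earthlike:.0f}")
    # Even a one-in-10^22 rarity still yields ~9 Earth-like planets, which
    # is why a rare Earth, unlike fine-tuned constants, needs no design.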


Occam’s razor

Before I close this chapter, I should make a few observations regarding the issue of Occam’s razor,
which states:

Entia non sunt multiplicanda praeter necessitatem, roughly translated as "entities must not be multiplied beyond necessity." An alternative version, Pluralitas non est ponenda sine necessitate, translates as "plurality should not be posited without necessity."

It is often claimed that the multiverse violates Occam’s razor. This would certainly hold for a multiverse of "anything possible does in fact exist", with, in terms of origin, completely disconnected universes (which is not the type of multiverse that scientists usually have in mind). However, it would not necessarily hold for a multiverse that started by eternal inflation (or simply inflation) and in which all the single universes forming the multiverse are just "bubbles" (or domains) from that mother universe. This multiverse could be seen as being really just one single entity, thus not violating Occam’s razor.

On the other hand, atheists often accuse the God hypothesis of violating Occam’s razor: it supposedly introduces immense complexity because a designer God necessarily would have to be even more complex than the universe He created. However, this is false on two fronts:

1) Occam’s razor, as stated in its original formulation (see above), clearly refers to the multitude of entities invoked to explain a certain phenomenon – one should make as few assumptions as possible. It does not refer to the complexity of any of those entities. The objection thus misunderstands Occam’s razor.

2) In mainstream theology, God is not even complex, as discussed above.

***

So what if the atheist would claim, in order to avoid the multiverse fine-tuning problem, that there exist perhaps trillions of multiverses, and one of them happens just by chance to be fine-tuned enough to, again just by chance, produce our particular universe? Now that would definitely violate Occam’s razor big time! Just like the scenario of, in terms of origin, completely disconnected universes (see above).


2. The origin of the universe: eternal God, eternal matter or eternal fields

Experience shows that the vast majority of atheists (though not all) are convinced naturalists, or choose naturalism as their default position – while atheism is often defined as simply a "lack of belief", it mostly results in worldviews that do make positive claims, just as theism does. There are only two possibilities for the naturalist that can be taken seriously: the creation of the universe from eternal matter or from eternal fields, like the quantum vacuum (which some characterize as a physical ‘nothing’; yet it is not nothing, as explained in the article referred to previously,
"Of Nothing"). Some principle must have been there forever (again, it cannot simply have come out of nothing ‘at some point’), and since the naturalist negates an eternal God, these are the logical choices.


2.1 Eternal matter

In the classical oscillatory model of the universe we have the universe as eternal matter – our particular universe is part of an endless cycle of expansion and contraction. This model has become questionable with the discovery that our universe will probably expand forever, which would break a putative cycle (in a somewhat modified version that eschews this problem, the oscillatory model still plays a role in Smolin’s hypothesis of Cosmological Natural Selection, discussed below). Yet eternal matter might form a wider background from which our particular universe was generated. In the
ekpyrotic model, for example (or one may think of equivalent other options should string theory, upon which it is based, be refuted), we have the birth of our universe from a collision of membranes (branes) in multi-dimensional space; a cyclic (oscillatory) model is based on this.

An essential demand on eternal matter would have to be that it need not obey the second law of thermodynamics (this demand would also have to hold for the ‘re-set’ of entropy of a potential oscillatory universe upon each bounce from a ‘big crunch’). Otherwise, of what use would eternal matter be if eventually it all ran down into an undifferentiated mush that no longer had the thermal/motional energy to produce universes? Basically, one would have to assume that eternal matter is a kind of perpetuum mobile (something that science has shown not to exist in our universe).
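Stated schematically – this is just the textbook second law for an isolated system, included as a reminder rather than as anything specific to the models under discussion:

\[
\frac{dS}{dt} \,\ge\, 0 \;\; \text{(isolated system)}, \qquad S \to S_{\max} \;\Rightarrow\; \text{no free-energy gradients remain to drive new structure.}
\]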

Where, for example, does the energy of collision in the ekpyrotic model come from, if the second law of thermodynamics holds in an eternal universe? Such a universe could never self-renew, and if it cannot, it would eventually run down into thermal randomness – and one would be forced to ask: where did it come from in its original ‘fresh’ state?

If the postulated eternal matter once had to be in an original ‘fresh’ state, it cannot be self-sufficient and eternal after all, certainly not in a state that can eternally produce universes. Thus it begs the question of an originator of this matter anyway.

Of course, energy is equivalent to matter, but analysis shows that this does not solve the ‘moving’ problem: the universe becomes less and less capable of converting matter to energy (one can see this by analyzing the issue of star formation and star burning).

Certainly, one may believe in the magic of a wider universe where the second law of thermodynamics does not hold, but I find this unlikely (we know how matter behaves *)), and we probably can never observe such a universe, given the absolute observational limits in cosmology (the particle horizon). Granted, there are theoretical models that suggest such behavior of matter – or rather, there are models built to fit a behavior of matter that is desired on theoretical grounds. However, there is no evidence from what science actually observes. Here, blind faith has to replace observational evidence.

*) Yes, we know that all matter randomly moves at all times on the microscopic particle level, but this is different from eternal movement with ever-fresh kinetic energy on the macroscopic level. And a universe (a large closed system of spacetime and matter) for which the second law of thermodynamics holds will become cooler and cooler over time, restricting microscopic movement more and more as well (heat is related to particle movement). So the idea that modern physics has relegated the ‘moving’ problem to the dustbin, by showing that all particles are in motion at all times, is false.


2.2. Eternal field

An alternative to eternal matter would be an eternal field. Think of the quantum vacuum.

We know that quantum vacua can produce virtual particles and anti-particles, which, however, annihilate each other within the tiniest fractions of a second. Some extrapolate that the universe could have arisen in a similar manner from a quantum vacuum, from almost nothing. However, we do not have even remotely any observational or experimental evidence linking such hugely different events as the humble appearance of a tiny, extremely short-lived virtual particle and – even if it started on the quantum, sub-microscopic level – an event of such unbelievable magnitude of energy as the Big Bang: the universe is calculated to have been about 1E32 Kelvin hot (that is billions of billions of billions and more Kelvin) a minuscule fraction of a second after its beginning, at the Planck time of 1E-43 seconds, before the inflationary epoch. Thus, the birth of our universe from a quantum vacuum is an extraordinary claim born from pure, far-fetched theoretical speculation that has little to do with any actual observation in science.
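The two figures just quoted are simply the Planck temperature and the Planck time, and can be checked from the standard physical constants (standard physics, nothing specific to this article):

    import math

    # Planck temperature T_P = sqrt(hbar*c^5/G)/k_B and Planck time
    # t_P = sqrt(hbar*G/c^5), from CODATA constants. A sanity check of
    # the ~1E32 K and 1E-43 s figures quoted above.
    hbar = 1.054571817e-34   # J*s
    c    = 2.99792458e8      # m/s
    G    = 6.67430e-11       # m^3 kg^-1 s^-2
    kB   = 1.380649e-23      # J/K

    T_P = math.sqrt(hbar * c**5 / G) / kB
    t_P = math.sqrt(hbar * G / c**5)
    print(f"T_P ~ {T_P:.1e} K")   # ~1.4e32 K
    print(f"t_P ~ {t_P:.1e} s")   # ~5.4e-44 s, i.e. of order 1E-43 s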

Indeed, as Alexander Vilenkin, one of the ‘fathers’ of quantum cosmology which deals with this kind of scenario, wrote in his paper
Quantum Cosmology and Eternal Inflation: "sadly, quantum cosmology is not likely to become an observational science." But if it cannot become an observational science, what kind of science is it then? The natural sciences are founded on observation and experiment. Shifting their foundations towards an exploration of the world by pure thought alone would throw us back to – well, the pre-scientific world (even if the concepts and the mathematics are now more sophisticated).

Certainly, there will be those who say that in eternal fields anything can happen at some point, unlikely as it may seem, but this is the ultimate ‘just-so’ story and as such not credible.

Eternal matter that does not obey the second law of thermodynamics, and eternal fields that can produce sudden high-energy events from ‘almost nothing’? All those ‘scientific’ scenarios are not science at all; they are philosophy dressed up in the language of science. Yes, there are detailed theoretical mathematical models that predict such things, but without observational evidence no serious scientific claim can be made. Mathematics can easily be used to ‘create’ such scenarios, but in the natural sciences mathematics describes reality, it does not create it.

In fact, in order to believe in a naturalistic origin of the universe, the atheist must disregard what observational science tells us about actual matter, energy and fields, and instead believe in miraculous properties of these entities that science has not shown to exist.


2.3. A universe from nothing?

This article,

A universe from nothing

makes highly questionable claims. Not only are there no observational or experimental links whatsoever between quantum fluctuations and the birth of our macroscopic universe (even if it started at the sub-microscopic level), but the authors also make an incredible blunder in defining nothing:


"In other words, the total energy of the universe is zero! It is remarkable that the universe consists of essentially nothing, but (fortunately for us) in positive and negative parts."


Yet it is quite a strange ‘nothing’ when both ‘the positive and negative parts’ that constitute it are most definitely something. Let me illustrate the problem with an example from numbers: while the sum of +5 and -5 is 0, nobody in their right mind would ever argue that 0 can produce +5 and -5 (or make that +1 million and -1 million, or any other number, if you will). This shows just how utterly absurd the naturalistic claim of a universe from nothing as being ‘zero energy’ really is (Stenger uses the ‘zero energy’ argument too).

The Big Bang was certainly an extreme-energy event – see above for temperatures involved. Only now, with the
Large Hadron Collider, will we be able to recreate, on a small scale, enormous energies as they existed minuscule fractions of a second after the Big Bang.

And again, the 'physical nothing' of empty space, the quantum vacuum, is not really nothing at all, see:
"Of Nothing". The follow-up article, "More Sweet Nothings…" deals with precisely the issue of zero energy and, apart from further highlighting the philosophical absurdity, argues, with citations from the cosmological literature, that a calculation of the total mass-energy of the universe is not even possible on a technical level.


2.4. Conclusion

Scenarios of eternal matter or spontaneous creation of universes from eternal fields blatantly contradict, or are not at all supported by, any observations from science about actual matter. We know how matter behaves, and it does not behave this way. This further points to a completely different (immaterial) source for the origin of the universe – a source that also solves the design problem of the laws of nature that govern our universe. This source would be God.


3. Critique of Cosmological Natural Selection

3.1. The hypothesis

Lee Smolin has proposed the hypothesis of Cosmological Natural Selection (CNS), inspired by the mechanisms of biological evolution, in several scientific articles and in the book The Life of the Cosmos. The hypothesis starts with the premise that the laws of nature – more specifically, the particular values of the various physical parameters – have evolved by natural selection. Black holes are generated by the collapse of certain stars and internally form a singularity, where the density reaches infinity. According to the hypothesis, universes are generated from the singularity in black holes, just as our universe appears to have been born from a singularity (in general, a point or region in spacetime at which some physical quantity, such as the density of mass or energy, the temperature, or the strength of the gravitational field, becomes infinite; cf. The Life of the Cosmos, p. 79). The singularity in black holes thus ‘bounces’.

Those universes that produce the most black holes also produce the most progeny universes and thus become the most ‘typical’ ones, because they overwhelm other types of universe in sheer number. Yet universes that have the physical parameters to produce the most black holes also happen to have the right parameters for life. Thus, our universe becomes just one of the most ‘typical’ ones, merely through the fact that it is optimized for the production of black holes, not for the production of life. And because our universe is a ‘typical’ one, there is nothing to be surprised about in the fact that we are here.

Until the point where universes are formed that produce black holes, the hypothesis basically proceeds via the model of the classical oscillatory universe. It starts with a random universe that expands and contracts again. Upon each bounce (re-expansion) the physical parameters mutate slightly. Eventually, after an endless series of such expansions and contractions, the parameters are ‘just right’ to produce a universe with black holes, and strong propagation of that kind of universe ensues, since each of the black holes within it gives birth to a new universe (this, of course, releases the new universes from the demand to contract again – a demand that would contradict what we observe in our own universe, which shows accelerated expansion and thus will probably expand forever). The propagation of universes is very strong. Our universe, for example, contains about 1E18 (a million trillion) black holes – one for every 10,000 stars – and if each one of them creates a universe, we get as many new universes from one single universe alone. The creation of all these new universes is invisible to us, because they all expand into spacetimes different from that of our universe.
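How strong this propagation would be is easy to see by taking the figure just cited – ~1E18 black holes, and hence daughter universes, per universe – at face value; illustrative arithmetic only:

    # Illustrative arithmetic, taking the text's figure of ~1E18 black
    # holes (and thus daughter universes) per universe at face value.
    daughters_per_universe = 1e18
    for generation in range(1, 4):
        count = daughters_per_universe ** generation
        print(f"generation {generation}: ~{count:.0e} universes")
    # generation 1: ~1e+18, 2: ~1e+36, 3: ~1e+54 -- 'fertile' universes
    # rapidly swamp the ensemble, which is what makes them 'typical'.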


3.2. Objections

1) There are theoretical models predicting that black holes bounce, but clearly there is no observational evidence that this actually happens. A singularity within a black hole is one thing, a ‘fecund’ singularity such as in the Big Bang another. We also do not know if the entropy of a black hole singularity would be comparable to that of the singularity from which the Big Bang arose (Penrose holds that, while the singularity of the Big Bang had extremely low entropy, the entropy of a black hole singularity is enormous; see The Emperor's New Mind, p. 442 f.). Apart from not having been observed, black hole bounce does not even appear to be a theoretical view shared by all cosmologists – otherwise it would have found its way into the multiverse hypotheses based on inflation or eternal inflation as well.

2) The small increments of mutation are a purely ad hoc invention to make CNS work. The slight mutation rate would ensure that ‘fertile’ universes can continually be made once the physical parameters arrive at ‘just right’ values, since the black holes in any given universe would spawn progeny with physical parameters not so far off the required values as to become immediately ‘infertile’ again. While Smolin does not explicitly mention it as a reason, a slight mutation rate would also be required to ensure that the cosmological constant does not randomly turn positive and cause the initial oscillating universe undergoing the series of mutations – the series that is eventually supposed to lead to the ‘right’ values – to expand forever at that point, before it can produce universe-generating black holes that release it from the demand to contract again. Such a turn of events would break the endless cycle of expansion and subsequent contraction.

Yet while a just slight mutation rate would be required to make CNS work, there is no way to obtain observational evidence for that, just as there is no way to test the existence of a multiverse as required by the hypothesis (see the above discussion of the particle horizon).

Smolin nonetheless claims that CNS is a true scientific hypothesis since it is falsifiable, i.e. it would fail if some of its predictions did not work out – predictions that are not at all directly related to the workings of the hypothesis, such as upper limits on the mass of neutron stars. However, this ascribes an exaggerated role to Popperian ideas about science. Yes, falsification of wrong hypotheses and lack of falsification of potentially right ones is an important part of how science progresses, and Popper was certainly right about that, but the main business of science is positive verification by observation and experiment. The theory of evolution and the theory of the Big Bang (not to speak of atomic theory, quantum theory, general relativity etc.) are such strong scientific theories because they have successfully undergone verification by positive evidence on many levels. How else than through all the positive lines of verification could Martin Rees come to the reasonable conclusion that he is now 99% certain – as practically certain as it gets – that the Big Bang happened (see his book Just Six Numbers)? And would you seriously suggest that a cancer drug works because of a lack of falsification that it does not, rather than positive verification that it does?

Thus, the fact that a hypothesis is falsifiable is not enough to make it scientific. It must also be possible to verify it, to test it positively. I am confident that almost all scientists who, like me, perform positive testing by observation and experiment will agree.

3) It has been pointed out that black hole production in our type of universe may not be optimal or close to optimal at all, making it highly unlikely that our kind of universe has been naturally selected as a ‘typical’ universe due to its supposed ability to make the most black holes.

The astronomer Joseph Silk raises, among other criticisms, the following objection in a review of The Life of the Cosmos (Science 277 (1997): 644):


"Our universe is far – by about four orders of magnitude, or a factor of 10,000 – from being optimally loaded with black holes. Most cosmologists are convinced that enhancing the amplitude of primordial density fluctuations would enhance the black hole fraction. We are very far from saturation. Smolin's response to this criticism is an appeal to self-regulation of the inflaton, the field responsible for inflation, a concept that is beyond any present physics of which I am aware. He argues that small changes in fundamental parameters won't do much. That is simply untrue. Tilt the spectral index of primordial fluctuations blueward by 10 percent and one would fill the early universe with collapsed objects, possibly destined to make black holes. Even worse, imagine adding the tiniest admixture of black holes early on. Perhaps during an early phase transition some rare horizon volume received a fatal compression that pushed it over the precipice of black hole formation. The primordial black holes conserve their mass as the dominant relativistic energy density of the universe redshifts away. The result: one in a billion, even one in a trillion, is all it might take in terms of large primordial horizon-scale overdensities to result in a universe that today is vastly overloaded with black holes compared to what we see around us."


To be fair, Smolin does not exactly argue that our kind of universe must be optimized for black hole production above all other kinds of universe, and he had already considered the direct formation of primordial black holes, without the intermediary of star formation (cf. p. 102 of The Life of the Cosmos). He argues that we should merely be on one of the peaks of the cosmological fitness landscape (an analogy to the biological fitness landscape), not necessarily on the highest peak of all. However, if our peak in the landscape is so much smaller than others where black hole formation is 10,000 times more efficient, how can ours still be a ‘typical’ universe? (Smolin does not give the impression that he had considered such a magnitude of difference between ‘optimal peaks’ within his fitness landscape.)

Susskind agrees with Silk on the minute density contrasts in the early universe and adds further thoughts (from a debate Susskind vs. Smolin):


"If for example, the minute density contrasts in the early universe, which had the unnaturally small numerical value of 1E-5 were not so weak the universe would have been dominated by small black holes. Those black holes might have coalesced into larger black holes, but I said I would be generous and count them all.

"Combine the increase of density contrast with an increase in the strength of gravity and a rapid inflation prehistory and you can make stupendous numbers of black holes. In fact if gravity were made as strong as it could reasonably be, every elementary particle (except photons and gravitons) would be a black hole!

"I have exactly the opposite opinion from Smolin's. If the universe were dominated by black holes all matter would be sucked in, and life would be completely impossible. It seems clear to me that we live in a surprisingly smooth world remarkably free of the ravenous monsters that would devour life. I take the lack of black holes to be a sign of some anthropic selection."


4) Yes, CNS may make the multiverse hypothesis seem more aesthetically appealing, since the final outcome is that we live in one of many ‘typical’ universes, rather than in a very ‘atypical’ universe among many life-prohibiting ones. But CNS does nothing to alleviate the fundamental problem: we would still have to initially postulate an incredible number of other kinds of universes (in this case, all developing in succession before ours) just to randomly arrive at our kind of universe, at which point natural selection can kick in to make it just one of many then ‘typical’ universes.

In other words, even if natural selection eventually makes universes of our kind ‘probable’, we still do not avoid the staggering initial improbability of reaching this final ‘probable’ state: an elaborate process of births of an endless number of universes via expansion/contraction would be needed before arriving at our kind of universe.

Why not then simply revert to the model of the classical oscillatory (cyclic) universe and postulate, without black hole bounce and ‘natural selection’, that our universe is just the probable eventual result of births of an endless number of universes through expansion/contraction, all with slightly different parameters? (Not that this would save the hypothesis from any of the problems discussed below.)
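The initial improbability at issue can be illustrated with another short Python sketch – again my own toy illustration, with an arbitrary, invented probability. If a randomly parameterized universe has a chance p of being ‘fertile’, then on average 1/p universes must be born in succession before the first fertile one appears, and only then can natural selection begin.

    import random

    # Toy illustration (my own, with an arbitrary invented probability):
    # each successive universe independently has a chance p of hitting
    # 'just right' parameters; count how many are born before the first hit.
    p = 1e-6   # hypothetical chance of a randomly parameterized fertile universe

    def universes_until_first_fertile():
        count = 1
        while random.random() >= p:
            count += 1
        return count

    waits = [universes_until_first_fertile() for _ in range(20)]
    print(f"average wait: {sum(waits) / len(waits):,.0f} universes "
          f"(expected about {1 / p:,.0f})")

Whatever happens after that first fertile universe, the expected 1/p prior births are not avoided; selection only redistributes probabilities afterwards.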

5) As outlined, until the moment where universes are formed that produce black holes, the hypothesis basically proceeds via the model of the classical oscillatory universe. It starts with a random universe that expands and again contracts. Upon each bounce (expansion) the physical parameters have to mutate just slightly for CNS to work (see above).

One problem here is again the demand that entropy be ‘re-set’ to a low value upon each bounce in an endless series of expansions and contractions. Apart from giving the next universe a correct initial state from which structures can evolve (which becomes important at the latest at the final stage of black hole formation within the oscillatory universe), this would be necessary even just to prevent the cycles of expansion/contraction from becoming ever longer until they finally stop (a problem that has been repeatedly pointed out for oscillatory universe scenarios). As discussed above, such a reset of entropy would go against the second law of thermodynamics, and thus would demand ‘miraculous’ properties that we do not observe in real matter.

Another problem is that CNS also introduces fine-tuning issues. There must be, analogous to the common multiverse, a mechanism that provides the flexibility for a wide-ranging random distribution of physical constants in very fine gradations over the successive cycles of expansion/contraction, in order to allow for a sufficient variety of physical constants to solve the ‘fine-tuning’ problem of our particular kind of universe.

Moreover, CNS introduces an additional fine-tuning requirement: all successive mutations must occur at just a slight rate, regardless of the kind of physical parameters changed. Sudden leaps are not allowed at any time [for the reasons for this requirement, see above, point 2)].

Thus, Paul Davies’ criticism would also apply to this kind of initially successive, rather than simultaneous, multiverse:


"The multiverse theory is increasingly popular, but it doesn’t so much explain the laws of physics as dodge the whole issue. There has to be a physical mechanism to make all those universes and bestow bylaws on them. This process will require its own laws, or meta-laws. Where do they come from? The problem has simply been shifted up a level from the laws of the universe to the meta-laws of the multiverse."


All in all, my two main objections to naturalistic explanations also hold for CNS: the need for design is not circumvented, and we would have to assume new, magical properties of matter that science does not observe in real matter (here, a scenario in which the second law of thermodynamics does not apply).

Interestingly, CNS is supposed to explain the evolution of the laws of nature without resort to an outside agent in the form of some immutable external law, yet the (fine-tuned) law that a just slight mutation rate be kept at all times, steering the very evolution of law, would appear to be exactly such an immutable external law itself.


3.3. Conclusion

The hypothesis of CNS, elegant and inventive as it may be, does not avoid the difficulties that hold for other naturalistic explanations (the fine-tuning problem, ‘miraculous’ properties of matter). Not only that: as astronomers and cosmologists have pointed out, it may simply fail on technical grounds.

Given all that, CNS does not appear to offer a viable alternative to the assumption that the universe has been created by God.


4. The uncaused universe

An idea that some people entertain is that certain effects have been shown by science to be uncaused. Adding to the confusion are careless pronouncements by some cosmologists, such as "the universe may have spontaneously sprung into existence from nothing".

Are there ‘uncaused’ events in science? Let us look at science in practice. As far as I know, being a scientist myself, in the tens of thousands of scientific laboratories around the world the principle of looking for natural causes of natural effects is still very much alive. In fact, science, as it is currently practiced and will be for the foreseeable future, is firmly based on this central principle. It obviously includes the broader assumption that every effect has a cause.

There appears to be some confusion, however, as to whether the findings of quantum mechanics suggest a loosening of the bond between cause and effect. Such a loosening does not really take place. What does happen in the realm of quantum processes is that a cause no longer has a deterministic effect, but a probabilistic one. That the bond between cause and effect is unbroken is proven by the fact that the statistical distribution of the effects can be represented by exact mathematical formulas.

This can be well illustrated by radioactive decay: the cause of radioactive decay is the instability of certain types of atoms, which triggers them to lose a particle, e.g. a beta particle, and in the process to convert into another element. Yet radioactive decay is also a quantum process.

If you have an agglomeration of 32-Phosphorus (32-P) atoms, or an agglomeration of molecules containing 32-P atoms, it is impossible to tell which one of the 32-P atoms will decay next to give stable 32-Sulfur (32-S). However, it is known that the half-life of 32-P is 14.28 days, i.e. after this time half of the material has decayed to 32-S, regardless of which precise atoms out of the agglomeration do the decaying. This holds for any quantity of 32-P that is more than unimaginably minuscule. Even a chemically barely detectable trace amount of 1 femtomol still contains about 600 million 32-P atoms. This is still such a huge number that, statistically, even this tiny trace amount will always decay with a half-life of precisely 14.28 days. The cause of the decay is the instability of the 32-P nucleus, and the effect is always this precisely determinable half-life. Thus, there is a clear correlation between cause and effect, a probabilistically determined correlation.

Certainly, at the local level of the lowest imaginable quantities, statistics cease to work, but the correlation between cause and effect is still there. Let us assume, hypothetically, that we have an agglomerate of just three 32-P atoms. One may decay in, let’s say, the next two minutes, one in 4 weeks, and another one in 10 months. Obviously, a statistically determined half-life of 14.28 days will only emerge at the global level of many atoms, not at the local level of these three atoms. The effect is random – who can predict when exactly these three atoms will decay? Nobody can. But is the cause of their decay different from that in a larger agglomeration of 32-P atoms, for which a half-life of precisely 14.28 days can be determined? No, of course not. The cause is still the exact same instability of the 32-P nucleus.

Thus, the effect of decay is still tied to that cause, even though the factor of precise statistical determinability falls away. The cause is the same, regardless of whether the effect is that the decay takes place within 2 minutes or only after 10 months.
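These statistics are easy to reproduce. The following Python sketch uses the standard exponential-decay description of radioactive decay; the sample of one million atoms is scaled down from a real femtomol merely for speed. A large sample reproduces the 14.28-day half-life with great precision, while the decay times of three individual atoms are unpredictable – even though the same cause, the instability of the 32-P nucleus, governs both.

    import math
    import random

    T_HALF = 14.28                       # half-life of 32-P in days
    DECAY_CONST = math.log(2) / T_HALF   # lambda in N(t) = N0 * exp(-lambda*t)

    def decay_times(n_atoms):
        """Random decay times (in days) for n_atoms independent 32-P nuclei.
        Quantum mechanics fixes the exponential distribution exactly, but
        not the moment at which any individual nucleus decays."""
        return [random.expovariate(DECAY_CONST) for _ in range(n_atoms)]

    # Sanity check on the figure in the text: atoms in 1 femtomol
    print(f"1 femtomol = {1e-15 * 6.022e23:.1e} atoms")   # ~6e8, i.e. ~600 million

    # Large sample: the median decay time converges on the half-life
    times = sorted(decay_times(1_000_000))
    print(f"observed half-life: {times[len(times) // 2]:.2f} days (expected 14.28)")

    # Three atoms: individually unpredictable, same underlying cause
    print("three atoms decay after (days):", [round(t, 1) for t in decay_times(3)])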

***

It should be clear from this that the concepts of ‘random effect’ and ‘cause-less effect’ are two very different things. ‘Random’ in science means ‘by chance’, ‘unpredictable’, ‘indeterministic’ but not ‘uncaused’.

Therefore, an ‘uncaused’ event, much less an entire ‘uncaused’ universe, ‘spontaneously from nothing’, has no support from any observational findings in science.

Nothing can spontaneously come out of nothing. Philosophical consideration shows that only an eternal first principle can be uncaused. As mentioned before, something must always have been there – an eternal God, eternal matter or eternal fields – that caused our universe.

