Let us start off with Boltzmann Brains. Imagine a universe that is nothing more than a diffuse cloud of stuff, without any structure to speak of. Perhaps it is at the heat-death end of a universe such as ours; perhaps no such structure has emerged yet. That does not matter. It also does not matter what the "stuff" is. It can be lots of elementary particles of the sort that we know of; it can be some variant from another universe; or it can just be a chaotic primordial bit soup, raw random information.1 What this diffuse universe is made of is irrelevant to the argument made here.
UPDATE: this post has received valuable criticism pointing out places in which it is unclear (which I expected, having scribbled it quickly) and specific issues. I will be updating it accordingly, whenever I get time (which may, sadly, take a while).
This diffuse cloud is animated by the very slight amount of heat provided by cosmic dark energy. This means that, over a long enough stretch of time, pretty much any configuration of the items in this cloud will occur. On that basis, it could form some variant of a young sperm whale or a bowl of petunias. More germane to the topic, the idea behind Boltzmann Brains is that eventually there will be an arrangement of this "stuff", occurring at random, that implements the same processes that take place in a brain. It does not have to be made of neurones; it just has to replicate the processing that goes on in, say, your own brain as you read this blog. The important part is that, like atoms inside an actual brain or information inside a computer, this stuff can be arranged in such a manner that it carries out some form of processing analogous to what a brain does while stimulated. Such “brains” would be self-aware and, for as long as they exist, would labour under the impression that they are just like us, since nothing prevents other random configurations from replicating the same stimuli that we receive.
This may seem crazy, but it is a staple of discussions around the standard cosmological model. If you find it somewhat disturbing, you may be reassured to know that I believe such an arrangement would in fact most likely never have time to materialise.
If every possible structural arrangement eventually takes place, it isn’t just strange self-aware structures somewhat analogous to brains that will materialise. Assuming those do have the time to happen, there is another kind of structure that is guaranteed to come into existence. It is far, far simpler than a self-aware entity with the illusion of living a human life — and by the same logic therefore far, far more likely to emerge.
It is a structure with the simple, basic property that it is able to use its environment to make copies of itself. A replicator.
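To put rough numbers on that intuition, here is a toy sketch of my own (not a physical model, just the combinatorics): in a uniformly random bit soup, the expected waiting time before a specific n-bit pattern turns up grows like 2^n, so a short replicator-like pattern should appear unimaginably sooner than anything brain-sized.

```python
# Toy sketch: expected number of random draws before a specific n-bit
# pattern occurs is 2**n, so simple structures appear exponentially
# sooner than complex ones. The 4-bit pattern below is arbitrary.
import random

def draws_until(pattern: str, rng: random.Random, max_draws: int = 10**6) -> int:
    """Count independent random n-bit windows until `pattern` occurs once."""
    n = len(pattern)
    for i in range(1, max_draws + 1):
        window = "".join(rng.choice("01") for _ in range(n))
        if window == pattern:
            return i
    return max_draws

rng = random.Random(0)
mean_short = sum(draws_until("1011", rng) for _ in range(200)) / 200
# A 40-bit pattern would take ~2**40 (about a trillion) draws on average,
# far too long to sample here -- which is exactly the point.
print(f"mean draws for a 4-bit pattern: {mean_short:.0f} (theory: 16)")
```

The gap only widens with scale: whatever bit-length a minimal replicator needs, a structure as intricate as a brain needs vastly more, and the waiting times differ by the exponential of that difference.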
And the fact is: once you have a replicator, you are no longer in a diffuse universe at maximal entropy, at least not locally. There is a principle at work that continuously creates further structure. All the Boltzmann Brains, sperm whales, and petunia bowls currently in formation get eaten up by a pullulation of simple replicating structures.
It is possible that any given replicator will eventually fail through interactions with a harsh environment. In fact, a great number may fail to persist. But eventually, one will.
A replicator, particularly an early, simple one that has not yet evolved a boundary, is also likely to make copying errors. Many of those copies will fail to replicate. But some will replicate differently, and this introduces diversity. In turn, the replicators progressively become one another’s environment: there is competition.
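The errors-beget-diversity step can be made concrete with a minimal simulation (again my own sketch, with arbitrary parameters): replicators copy a bit-string "genome" with a small per-bit error rate, and even a population founded by a single genotype diversifies on its own.

```python
# Minimal sketch: replication with copying errors. Each replicator makes
# one copy per generation; each bit of the copy flips with probability mu.
# Starting from one genotype, mutation alone produces a diverse population.
import random

def step(population, rng, mu=0.01):
    """One generation: every replicator produces one (possibly mutated) copy."""
    children = []
    for genome in population:
        child = "".join(b if rng.random() > mu else "10"[int(b)] for b in genome)
        children.append(child)
    return population + children

rng = random.Random(1)
pop = ["0" * 64]              # a single ancestral replicator
for _ in range(6):            # six doublings -> 64 individuals
    pop = step(pop, rng)
print(f"population: {len(pop)}, distinct genotypes: {len(set(pop))}")
```

Nothing here selects for anything yet; the point is only that imperfect copying is sufficient to generate the variation that competition then acts on.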
The odds are pretty good that I’m missing something, but if I’m not, it strikes me as practically certain that a high-entropy universe must cause stable, evolutionary, complex, and diversity-producing structures to emerge. (Algorithms, games, and evolution strongly suggests that diversity is typically produced under evolutionary conditions2.)
There is something instead of nothing because there has to be something instead of nothing. Well, I guess that’s one major philosophical question that won’t be sorry to finally be put to rest.
The complexity of the initially possible replicators is likely strongly constrained by the amount of energy available from dark energy. (I wonder if this can be represented simply as a given amount of bit shifting?) Within that set, there are likely further constraints favouring those replicators which are more likely to survive than others. There could be an avenue of research in figuring out what the constraints and the possible replicators would be (pointers welcome).
Once that first set has bloomed, it would need to gather more energy for more complex behaviour. It may be that some replicators can use lever effects to focus energy from a larger area and have it flow more usefully. When a more efficient replicator appears, it is likely to start reproducing exponentially fast as it harnesses more energy to replicate more to harness more energy… This process might eventually be stopped or slowed down by some population dynamics (or some form of the square-cube law), but it would certainly be spectacular. It is an interesting question whether you could produce inflation with such machinery.
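The boom-then-saturation dynamic described above is the familiar logistic growth pattern; here is a hedged sketch in discrete time, where the carrying capacity K merely stands in for whatever population dynamics or resource limits would actually apply.

```python
# Logistic growth sketch: near-exponential doubling while resources are
# abundant, flattening once the population becomes its own constraint.
# n0, r, and K are illustrative placeholders, not physical estimates.
def logistic_trajectory(n0=1.0, r=1.0, K=1e6, steps=40):
    """Iterate n_{t+1} = n_t + r * n_t * (1 - n_t / K)."""
    n, out = n0, []
    for _ in range(steps):
        n = n + r * n * (1 - n / K)
        out.append(n)
    return out

traj = logistic_trajectory()
# Early steps roughly double each time; late steps sit at the ceiling K.
print(f"step 5: {traj[4]:.0f}, step 40: {traj[39]:.0f}")
```

The spectacular part is how brief the transition is: on this kind of curve, a replicator population goes from negligible to resource-saturating in a handful of doublings.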
Evolution is anentropic, which would explain a low-entropy early observable universe. It would only be able to maintain its anentropic dominance if it isn’t energy-starved by its own growth, though, and entropy growth would then resume. Assuming time’s arrow is founded on entropy (which I’m still mulling over), this gives us an arrow.
It is commonly noted that the universe is strangely “fine-tuned for life” (and structure in general), but if the fabric of reality is itself evolutionary then of course it will be fine-tuned for life: that’s just another way of saying fine-tuned for evolution. If we assume that more stable forms of structure are fitter in an environment that naturally tends towards maximising entropy, then over aeons of iterations the process would select for the sort of apparently miraculous stability that we notice.
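That selection-for-stability claim is easy to demonstrate in miniature. In this sketch of mine (with made-up parameters), structures differ only in a per-step decay probability, and survivors replicate; after many rounds the population consists almost entirely of the most stable variants, with no tuning anywhere.

```python
# Selection-for-stability sketch: each structure has a fixed per-step decay
# probability d. Survivors of a step replicate once; a crude cap stands in
# for finite resources. Mean decay rate collapses towards zero over time.
import random

def generation(pop, rng, cap=2000):
    """pop is a list of decay probabilities; survivors replicate once."""
    survivors = [d for d in pop if rng.random() > d]
    return (survivors + survivors)[:cap]

rng = random.Random(2)
pop = [rng.random() for _ in range(2000)]   # decay rates uniform on [0, 1)
for _ in range(100):
    pop = generation(pop, rng)
mean_decay = sum(pop) / len(pop)
print(f"mean decay rate after selection: {mean_decay:.3f}")
```

Starting from a mean decay rate of about 0.5, selection alone drives the surviving population towards near-perfect stability; "miraculous" persistence is just what the survivors of a long filter look like.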
Evolutionary processes can get stuck at local optima, they produce diversity, and they can operate independently at different scales. These aspects can have interesting consequences. The odds are good that if the production of structure were driven by a single, globally optimal mechanism, then discovering how that mechanism operates would provide a description of everything. But if we are instead dealing with a diversity of replicator populations (that have presumably struck some equilibrium) at the most fundamental level of our reality, we can’t expect everything to be explicable within one coherent scheme. You would then expect the world’s inherent logical structure (including maths) to contain brute, random facts.
This also hints that it may not be possible to unify everything, at least not in the sense usually understood as a grand Theory of Everything. It may be that you can unify quantum mechanics and general relativity only in the same manner that you could unify cellular behaviour and sociology: the result would be at best extremely complex and contrived, and if you somehow managed to make it predictive it would lack explanatory power. The unification lies elsewhere, not in the traditional sense of a single theory able to establish predictions for a joint set of phenomena. Where unification in prediction may not be possible, unification in explanation remains attainable.
This seems like a good point at which to bring up Leibniz:
And by this means there is obtained as great variety as possible, along with the greatest possible order; that is to say, it is the way to get as much perfection as possible.
I am certainly not a domain expert, and even if I were, much research in several areas would still be required to buttress the speculation herein. But I have to admit that I find it quite compelling, and I very much like the research programme that falls out of this line of thinking.3 Let me know what you think! ∎