In an idle moment I was messing around with some equations.
I took
- The equation relating the entropy of a black hole to the surface area of its event horizon,
- The equation for its Schwarzschild radius, to calculate the area, and
- The equation relating the entropy of a system to the number of its (accessible, equally probable) microstates,
to work out the number of microstates of the smallest
possible black hole, just out of interest.
By setting the Schwarzschild radius of the black hole to a
Planck length, as giving (intuitively) the smallest possible black hole, you
end up with the number of microstates being e^π, and not one, which I had
(intuitively) thought would be the minimum possible number of microstates. I
thought this would be the "unit" black hole as it were, with a single
(integer) microstate, and was not expecting a transcendental number instead of
an integer number of microstates.
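For concreteness, the back-of-the-envelope calculation can be sketched in a few lines of Python (a sketch only: entropy is expressed in units of k_B, using the Bekenstein-Hawking relation S = A/(4 l_P²) and Boltzmann's S = ln Ω):

```python
import math

# Physical constants (SI units, CODATA values)
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m s^-1
hbar = 1.054571817e-34  # reduced Planck constant, J s

l_p = math.sqrt(hbar * G / c**3)  # Planck length, ~1.6e-35 m

# Set the Schwarzschild radius to one Planck length and compute the
# horizon area and the Bekenstein-Hawking entropy in units of k_B:
#   S / k_B = A / (4 l_p^2),  with  A = 4 pi r_s^2
r_s = l_p
A = 4 * math.pi * r_s**2
S = A / (4 * l_p**2)    # the Planck lengths cancel, leaving exactly pi

# Number of microstates from Boltzmann's relation S = k_B ln(Omega):
Omega = math.exp(S)

print(S)      # pi
print(Omega)  # e**pi, Gelfond's constant, ~23.1407 -- not 1
```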
What did this mean? Certainly, e^π is equal to (-1)^(-i), which neatly expresses everything in terms of unity, but leaves
me none the wiser regarding physical interpretation. However, I remember
conversations with people much smarter than me about how we cannot rely on
physical or geometrical interpretations, that these are inevitably naive.
Indeed, we would be paralysed, crippled, in our scientific endeavours, if we
always had to relate our subject to something that can be comprehended by
analogy with our limited everyday experience, even if such analogies are a
source of awe and enthusiasm. The motivations and the manifestations of our
scientific endeavours must always remain distinct.
Nevertheless I asked my friend Aidan Keane what the physical
meaning of this was, because he is actually a proper published cosmologist and
black hole expert whereas I am just some schmuck with a degree in astronomy and
physics. Chin stroking and head scratching ensued on both his part and mine and
then he pointed out that e^π - otherwise known as Gelfond's constant - is
equal to the sum of the volumes of all even-dimensional unit (hyper-)spheres.
Now we are getting somewhere, I thought: a geometrical
interpretation.
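Aidan's observation is easy to check numerically: the unit ball in dimension 2n has volume π^n/n! (the general formula π^(d/2)/Γ(d/2 + 1) reduces to this when d is even), and summing over all n gives e^π. A quick sketch:

```python
import math

# Volume of the unit ball in dimension 2n is pi^n / n!, since for
# even dimension d = 2n the gamma function in the general formula
# pi^(d/2) / Gamma(d/2 + 1) takes an integer argument, n + 1.
total = sum(math.pi**n / math.factorial(n) for n in range(50))

print(total)              # converges rapidly to e**pi ~ 23.1407
print(math.exp(math.pi))  # Gelfond's constant, for comparison
```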
So maybe there is an inconsistency in the approach to area
and volume, maybe the problem is my naive approach to calculating the surface
area of the black hole, and maybe adopting an approach that includes higher
dimensions would make it all cancel out to give unity instead of e^π. After
all, the event horizon is in a sense not part of the universe: due to
relativistic time dilation, we can observe objects approaching it but never
actually reaching it in any reference frame other than that of the event
horizon itself. It is misleading to think of the microstates of the black hole
in the same way one thinks of the microstates of a thermodynamic system
hypothetically contained within the event horizon in the same sense a ball
contains air. The event horizon is the black hole. There is nothing beyond
it. It is the end of time and edge of space. We are discussing the microstates
of the event horizon itself, whose dimensionality transcends the
observable universe.
But what's special about even dimensionality then? Apart
from turning the argument of the gnarly gamma function in the expression for
the volume of an n-dimensional unit sphere into an integer.
"Collapsing tesseracts" in the movie Interstellar?
Anyway, the whole thing got me thinking again about some
ideas I had about finite state machines twenty years ago back when I was
writing a master's dissertation on random number generation. My thinking then
about the states of a finite state machine is, I think, equally applicable to
the microstates of a black hole here.
Let's consider a system which at any given time can be in
one of a finite number of possible states, and makes a transition from one
state to another as a consequence of an interaction with its surroundings
corresponding to an exchange of information between the system and its
surroundings: input and output.
Let's imagine the possible transitions do not include all
possible pairs of states though. Each state has a list of possible successor
states which includes some, but not all, other states. The possible predecessor
states are therefore similarly limited to some, but not all, other states.
All states may be equally accessible and equally probable over
time, given a sufficiently long (random) sequence of transitions,
satisfying the requirements of a macroscopic observable like entropy based on
the total number of accessible, equally probable microstates of the
system. Crucially, however, the probability of an individual
"eventual" state after a number of transitions depends on the initial
state if the (random) sequence of transitions is not sufficiently long. (N.B. I
am using the term "eventual" rather than "final" because I
am reserving the term "final" for states from which the system cannot
make a transition, states which are their own successor). This difference in
probability is because not all states are possible successor (or predecessor)
states of each other. Given an initial state, the list of possible eventual
states only increases to include all states after a sufficient number of
transitions. Similarly, given the eventual state, the list of possible
initial states with which the sequence of transitions commenced only includes
all states for sufficiently long sequences of
transitions.
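The way the list of possible eventual states grows with the length of the sequence of transitions can be illustrated with a toy transition graph (the five states and their successor lists below are invented purely for illustration):

```python
# Toy finite state machine: each state has some, but not all, of the
# other states as possible successors (states and edges chosen
# arbitrarily for illustration).
successors = {
    0: {1, 2},
    1: {2, 3},
    2: {3, 4},
    3: {4, 0},
    4: {0, 1},
}

def reachable(start, steps):
    """States reachable from `start` in exactly `steps` transitions."""
    frontier = {start}
    for _ in range(steps):
        frontier = set().union(*(successors[s] for s in frontier))
    return frontier

# The set of possible eventual states grows with the number of
# transitions until it includes every state (at k = 4 here):
for k in range(5):
    print(k, sorted(reachable(0, k)))
```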
So we have one entropy, a macroscopic entropy, related to
the number of states that are accessible to the system after a
"sufficiently long" sequence of transitions, and we have another
entropy, a sort of "incomplete" or "interim" entropy,
related to the number of states that are accessible only after a sequence of
transitions that is not "sufficiently long", that is, a number of
states that is less than the total number of states reflected in the
macroscopic, observable entropy. This still holds even if the system is in
thermal equilibrium with its surroundings and the macroscopic entropy is
constant.
We can consider a "forward" and a
"backward" incomplete or interim entropy related to the number of
possible eventual and initial states for each intermediate state. That is, we
can calculate this incomplete entropy working backwards from some intermediate
state to a list of possible initial states, or working forwards to a list of
possible eventual states, for a given sequence of transitions. The forward and
backward incomplete or interim entropies of intermediate states may not be
equal because they arise from different lengths of sequences of transitions.
That is, the intermediate state may be "closer" to one or other of the initial
and eventual state in the sequence of transitions.
However, even when the lengths of the sequence through which
the system transits before and after the intermediate state are the same, the
incomplete entropies (backward and forward) might not be equal. We can think in
general without reference to specific initial or eventual states. The
incomplete entropy arising from the number of possible initial (forward) and
eventual (backward) states will not be equal if states tend to have more
successors than predecessors (or vice versa: fan-in versus fan-out). The number
of possible eventual (or initial) states will not be the same except within
sets of states connected by sequences of transitions such that the number of
successor and predecessor states are the same.
One can consider a "residual" entropy for each
intermediate state related to the difference between these two incomplete
entropies, which is zero for sufficiently long sequences as both tend to the
same macroscopic value, but which is not zero for shorter sequences where the
rates at which successive states acquire either predecessors or successors
differ as the sequence of transitions in which they are embedded increases in
length. The residual entropy of a state is zero for sufficiently long
transition sequences, corresponding to observable macroscopic entropy, since
the backward and forward incomplete entropies are "completed" and
equal to the macroscopic entropy and so cancel. So the residual entropy is
globally zero, but may be locally non-zero if individual states have different
numbers of successor and predecessor states (and may also be locally zero).
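A minimal sketch of the forward and backward incomplete entropies and the residual entropy, on a toy graph whose fan-in and fan-out differ (all states and edges invented for illustration; entropies here are natural logarithms of state counts):

```python
import math

# Toy graph where fan-out and fan-in differ per state (edges invented
# purely for illustration): state 0 has two successors but only one
# predecessor.
successors = {0: {1, 2}, 1: {2}, 2: {0}}
predecessors = {s: {p for p in successors if s in successors[p]}
                for s in successors}

def grow(frontier, graph, steps):
    """Set of states connected to `frontier` by `steps` transitions."""
    for _ in range(steps):
        frontier = set().union(*(graph[s] for s in frontier))
    return frontier

def incomplete_entropy(state, steps, graph):
    """Log of the number of possible states after `steps` transitions."""
    return math.log(len(grow({state}, graph, steps)))

for k in (1, 4):
    forward = incomplete_entropy(0, k, successors)     # eventual states
    backward = incomplete_entropy(0, k, predecessors)  # initial states
    residual = forward - backward
    print(k, forward, backward, residual)
# For the short sequence (k = 1) the residual entropy is non-zero
# (log 2 here); for the sufficiently long sequence (k = 4) both
# incomplete entropies reach the macroscopic value log 3 and the
# residual entropy vanishes.
```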
There is another thing to consider. The state of the
system changes as a result of an interaction between the system and its
surroundings in a way that is not unphysical. This is where we stop talking
about mathematics and start talking about physics. This is where our
experimental observations about how one thing depends on another, our data, are
involved. The state changes as a result of an interaction with the surroundings
in a way that involves an exchange of information to which we have access.
This consideration is important because this is the reason
why we would consider finite transition sequences that are not sufficiently
long to result in a macroscopic observable entropy in the first place (without
having to impose some non-physical, artificial constraint on the system in
order to do so).
Each transition is associated with an exchange of
information between the system and its surroundings. The specific nature of
that information selects pairs of states between which a corresponding
transition can be made, like the input of a finite state machine. That is, a
given input corresponds to a finite number of transitions (fewer than the
total number of possible transitions), each with its associated pair of states.
For a given information exchange, there is a number of candidate pairs of
states between which the system can have made a transition. This list of pairs
does not include all pairs between which transitions can be made. There is
therefore a list of possible initial states which does not include all states.
A subsequent information exchange resulting in another
transition reduces this list of possible initial states of the sequence
further, since the possible initial states entail a list of possible successors,
some of which are eliminated by the observation of the subsequent input and
related restriction of the list of possible transitions in the same way as
before. This proceeds until the sequence uniquely identifies the initial state.
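The narrowing of the list of possible initial states by successive inputs can be sketched with a toy machine in which each input symbol is only defined for some states (the states and input symbols below are invented for illustration):

```python
# Toy machine: transitions[state][input] -> next state. Each input
# symbol is only defined for some states, so observing the input
# sequence alone progressively narrows the set of candidate initial
# states. (States and input symbols invented for illustration.)
transitions = {
    0: {'a': 1},
    1: {'a': 2, 'b': 0},
    2: {'b': 1},
}

def candidates(inputs):
    """Initial states consistent with an observed input sequence."""
    live = {init: init for init in transitions}  # initial -> current
    for symbol in inputs:
        live = {init: transitions[cur][symbol]
                for init, cur in live.items()
                if symbol in transitions[cur]}
    return set(live)

print(candidates(""))    # all three states remain possible
print(candidates("a"))   # 'a' is undefined at state 2, so it drops out
print(candidates("aa"))  # only state 0 survives: uniquely identified
```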
The importance of this is that each state is uniquely
identifiable entirely on the basis of the properties of the system itself. The
states do not require arbitrary labels. The input (and output) sequences that
make these properties manifest do not need to have any special properties. They
can be finite random sequences in the way discussed by Kolmogorov and Chaitin.
The precise "values" of the input (e.g. binary digits) do not matter.
The only thing that matters about the label is its uniqueness, not the characters
with which it is written. The process can be repeated with different random
sequences and the uniqueness will persist, since it arises from properties of
the system, not the input submitted to it. We do not require any "divine
intervention" to differentiate between the states: we do not require an
arbitrary injection of information to make the system understandable by the
application of arbitrary state labels. The system is completely
self-contained.
These finite sequences that uniquely identify the states of
the system, the state labels that are intrinsic to the system itself, are
subject to the arguments above regarding residual entropy, by definition, since
they are based on limitation of states addressed by a sequence of transitions
to a number less than the total number of possible states. This is the reason
why these considerations regarding residual entropy were entertained in the
first place: as a way to describe a kind of "dynamics" which the
states will exhibit in so far as they can be uniquely identified.
Which brings us back to black holes. Earlier I reserved the
term "final state" for a special purpose. We can consider a black
hole to represent a final state, one from which the system will never exit, one
whose only successors are itself. This could, in this model, correspond to a
single unitary final state, or a collection of states all of which are
connected to each other, so that they are all possible successors of each
other. As transitions occur between them, the list of possible initial states
does not decrease as the list of successor states is unrestricted, and the
process by which an initial state can be uniquely identified by an intrinsic
state label is truncated before it is complete and reaches a conclusion. We
cannot uniquely identify any states in this structure. All we can know is the
residual entropy of eventual states up to a certain horizon of this structure.
The dimensionality of the structure is infinite since the states are connected
to themselves.
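The truncation of the identification process by such a fully connected "final" structure can be sketched in the same style (again with invented states and edges): once the candidates enter the absorbing set, no further observation distinguishes them.

```python
# Toy model of a "final state" as a fully connected absorbing set
# (states 2 and 3; all states and edges invented for illustration):
# every member of the set is a successor of every member, so once
# inside, the set of possible states never shrinks and the initial
# state can no longer be pinned down.
successors = {0: {2}, 1: {3}, 2: {2, 3}, 3: {2, 3}}

def possible_states(initials, steps):
    """States consistent with starting in `initials` after `steps`."""
    frontier = set(initials)
    for _ in range(steps):
        frontier = set().union(*(successors[s] for s in frontier))
    return frontier

# Starting from either 0 or 1, the candidates are still distinct after
# one transition, but after two both have landed in the absorbing set
# and remain indistinguishable thereafter:
print(possible_states({0}, 1), possible_states({1}, 1))  # {2} {3}
print(possible_states({0}, 2), possible_states({1}, 2))  # {2, 3} {2, 3}
```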
The exchange of information which prompts the transitions is
lossless, by which I mean physically they happen at the speed of light (with no
information lost in directions transverse to the direction in which information
is propagated to describe inequalities between the energy density of magnetic
and electric fields, for example). The trajectories along which transitions
progress from one state of the system to the next are null geodesics and
their statistical description assumes a geometrical interpretation. The
thermodynamic characterisation of a black hole in terms of a surface area
indicates the possibility of a statistical or combinatorial formulation of
General Relativity that may allow us to build a bridge with quantum theory. We
should relate the geometry of space-time to its information content rather than
its matter-energy content.
However, if space-time geometry is to be considered in terms
of the residual entropy of the states in which we observe the cosmos, we are
faced with a paradox. The residual entropy is not observable to the system's
surroundings, where only macroscopic entropies are manifested. The universe can
only be observed from within, where residual entropies are manifested. To
describe the universe requires us to be part of the universe, in accordance
with the (final) Anthropic Principle. Our direct experience of the universe,
and our ability to describe it at the most fundamental level on the basis of
that experience, without resort to divine intervention or any other arbitrary sources
of information beyond the content of the universe itself, vouchsafe its, and
our, reality.
"The universe can only be observed from within..."
I can observe everything happening in the universe of a videogame I play. The programmer of the source-code is basically "god". The player, in a sense, or depending on how much freedom/abilities the source-coder grants him/her, can sort-of act like a god (eg. in a "Sandbox" type of game). I don't believe in any particular theory, but it is a possibility that we could be in a simulation. All laws of physics in a game have to be programmed. We have math that can explain all our "real life's" physics as well. It's very plausible that those physics could have been programmed as well. Anyways, I'm starting to blabber but I think you get my point :)
Thanks Vince, actually, you raise the point that is the real motivation behind this, which is to demonstrate that the (complete) laws of physics can only exist in a universe that is not a simulation, that is in fact "real". We can simulate a universe but only with incomplete physics - we will always have to leave something out (probably nothing terribly important for the purpose of the simulation).