Nature shows us only the tail of the lion. But there is no doubt in my mind that the lion belongs with it even if he cannot reveal himself to the eye all at once because of his huge dimension.
— Albert Einstein
What theoretical physics needs, I've long believed, is to violate some assumption shared by both classical physics and quantum mechanics. Everyone nowadays seems to understand that something different is needed, but I suspect the "radical" new theories I've heard of aren't radical enough.
So in this post I'm going to suggest a class of physical theory that seems to me radical enough (or, if you prefer, weird enough) to shake things up.
Indirect evidence of a wrong assumption

Suppose you're trying to find the best way to structure your description of something. (Examples: choosing the structure for a computer program to perform some task; or choosing the structure for a theory of physics.)
What you hope to find is the natural structure of what you're describing — a structure that affords a really beautiful, simple description. When you strike the natural structure, a sort of resonance occurs, in which various subsidiary problems you may have had with your description just melt away, and the description practically writes itself.
But here's a problem that often occurs. You've got a structure that affords a pleasing approximate description; but as you try to tweak the description for greater precision, instead of getting simpler, as it should if you were really fine-tuning near the natural structure, the description gets more and more complicated. What has happened, I suggest, is that you've found a local optimum in solution space, instead of the global optimum of the natural structure: small changes to the structure won't work as well as the earlier approximation, and may not work at all, so fundamental improvement would require a large change to the structure.
I suggest that physics over the past century-and-change has experienced just such a phenomenon. Classical physics was pleasing, but an approximation. Our attempts to come closer to reality gave us quantum mechanics, which has hideously ugly mathematics. And our attempts to improve QM... sigh.
You'll find advocates of these modern theories, and before them advocates of QM, gushing about how beautiful the math is, but frankly I find this wishful thinking. They focus on part of the math that's beautiful, and try to pretend the ugliness out of existence. Ignoring the elephant in the room. In the case of QM, the elephant is commonly called "observation", and in more formal social situations, "wave function collapse".
But it would be a mistake to focus too much on the messiness of QM math. If physics is stuck in a local optimum, we need to look foremost at big things that classical and quantum have in common, rather than getting too focused on details by which they contrast.
The saga of determinism and locality

Two really big ideas that have been much considered, in the contrast between classical and quantum physics, are determinism and locality.
In the classical view of reality, there are three dimensions of space and one dimension of time. In space, there are point particles, and space-filling fields. The particles move continuously through space over time, each tracing out a one-dimensional curve in four-space. The fields propagate in waves over time. The model is deterministic because, in principle, the state of everything at one moment in time completely determines the state of everything at all later moments in time. The model is local because neither particles nor waves travel faster than a certain maximum speed (the speed of light).
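To make "deterministic" and "local" concrete, here is a minimal sketch, mine rather than anything canonical, of a classical field evolving by a discretized wave equation: the state at one time step fully determines the next, and each cell updates only from its immediate neighbors, so no influence travels faster than one cell per step.

```python
import numpy as np

# Minimal sketch (illustration only): a 1D wave equation, discretized.
# Deterministic: the state at one time step fully determines the next.
# Local: each cell updates only from its immediate neighbors, so no
# influence travels faster than dx/dt per step (a "speed of light").

N, c, dx, dt = 200, 1.0, 1.0, 0.5                    # grid size, wave speed, steps
u_prev = np.zeros(N)                                 # field at time t - dt
u = np.exp(-((np.arange(N) - N // 2) ** 2) / 10.0)   # initial pulse at time t

for _ in range(100):                                 # march forward in time
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)     # discrete Laplacian
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next                            # the future is fully determined
```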
QM depicts nature as being fundamentally nondeterministic. Einstein didn't approve of that (it apparently offended his sense of the rhyming scheme of nature, as I've remarked before). God does not play dice, he said.
It's important to realize that Einstein was personally responsible, through his theory of special relativity, for making classical physics more local. Prior to relativity, classical physics did not prohibit things from moving arbitrarily fast; consequently, in considering what would happen to a given volume of space in a given interval of time, there was always the chance that some really fast particle, which at the beginning of the interval had been on the other side of the universe, might come zooming through the volume by the end of it.
This relation between Einstein and locality helps us appreciate why Einstein, in attempting to demonstrate that quantum mechanics is flawed, constructed with two of his colleagues the EPR paradox, which shows that QM requires information to propagate across space faster than light. That is, in an attempt to discredit nondeterminism, he reasoned that quantum nondeterminism implies non-locality, and since non-locality is obviously absurd, quantum nondeterminism must be wrong.
Perhaps you can see where this is going. Instead of discrediting nondeterminism, he ultimately contributed to discrediting locality.
Okay, let's back up a few years, to 1932. As an alternative to quantum nondeterminism, Einstein was interested in hidden variable theories. A hidden variable theory says that the state of reality is described by some sort of variables that evolve deterministically over time, but these underlying variables are fundamentally unobservable, so that the nondeterministic quantum world is merely our statistical knowledge of the hidden deterministic reality. In 1932, John von Neumann proved, formally, that no hidden variable theory can produce exactly the same predictions as QM. (That is, all hidden variable theories are experimentally distinguishable from QM.)
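To make the idea concrete, here is a minimal sketch of a deterministic hidden variable model; this is my illustration, not anything from Einstein or von Neumann. A single spin measurement's outcome is a deterministic function of a hidden variable lam, and the quantum statistics emerge purely from our ignorance of lam. (It handles only one particle; the interesting trouble starts with two.)

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(n, a, lam):
    """Deterministic outcome of measuring spin along a, given hidden lam.

    Toy model (illustration only): the outcome is +1 exactly when the
    hidden variable lam falls below the quantum probability
    P(+1) = (1 + n.a)/2 = cos^2(theta/2). Deterministic in (n, a, lam);
    the randomness is entirely our ignorance of lam.
    """
    p_up = (1.0 + np.dot(n, a)) / 2.0
    return 1 if lam < p_up else -1

n = np.array([0.0, 0.0, 1.0])                   # preparation direction
a = np.array([np.sin(0.7), 0.0, np.cos(0.7)])   # detector direction

outcomes = [measure(n, a, rng.random()) for _ in range(100_000)]
print(np.mean(outcomes), np.dot(n, a))          # both approximately cos(0.7)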
Von Neumann's result is an example of a no-go theorem, a formal proof that a certain kind of theory cannot work. Often, the most interesting thing about a (correct) no-go theorem is its premise — precisely what it shows to be impossible. Because twenty years later, in 1952, David Bohm published a hidden variable theory experimentally indistinguishable from QM. Bohm's hidden variable theory worked. The no-go theorem was correct. We can therefore deduce that what Bohm did must be different from what von Neumann proved impossible.
And so it was. On careful examination, von Neumann's proof assumes the hidden variable theory is local. Bohm's hidden variable theory has a quantum potential field that can propagate arbitrarily fast, so Bohm's theory is non-local. Einstein remarked, "This is not at all what I had in mind."
Now we come to Bell's Theorem. Published in 1964 (nine years after Einstein's death), this was another no-go theorem, based on a refinement of the EPR experiment. Bell showed that the particular probability distribution predicted by QM, in that experiment, could not possibly be produced by a deterministic local hidden variable theory.
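For reference, here is the standard CHSH form of the bound (my notation; Bell's original 1964 inequality differs in detail): any local deterministic hidden variable theory must satisfy the inequality below, while QM for the spin singlet predicts the correlation on the second line, which for suitable detector settings reaches 2√2 > 2.

```latex
% Any local deterministic hidden variable theory, with outcomes
% A(a,\lambda), B(b,\lambda) = \pm 1 and hidden-variable density
% \rho(\lambda), satisfies the CHSH bound:
\bigl| E(a,b) - E(a,b') + E(a',b) + E(a',b') \bigr| \le 2,
\qquad
E(a,b) = \int A(a,\lambda)\, B(b,\lambda)\, \rho(\lambda)\, d\lambda .
% QM for the spin singlet instead predicts
E(a,b) = -\,\hat{a} \cdot \hat{b},
% which for suitable settings reaches 2\sqrt{2} > 2.
```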
Okay. Here again, maybe you can see already where I'm going with this. I'm preparing to propose a particular way of violating an assumption in the premise of Bell's Theorem. This particular violation may allow construction of a theory that isn't, to my knowledge, what any of the above cast of characters had in mind, but that might nevertheless be plausibly called a deterministic local hidden variable theory.
Changing history

The probability distribution Bell was considering, the one that couldn't be produced by a deterministic local hidden variable theory, has to do with the correlation between observations at two distant detectors, where both observations are based on a generating event that occurred earlier in time, at a point in space in between the detectors.
And one day some years ago, reading about all this, it occurred to me that if you think of these three events — two observations and one generation — as just three points between which signals can be sent back and forth, it's really easy to set up a simple mathematical model in which, if you start with a uniform probability distribution, set the system going, and let the signals bounce back and forth until they reach a steady state, the probability distribution of the final state of the system will be exactly what QM predicts. This idea is somewhat reminiscent of a modern development in QM called the transactional interpretation (different, but reminiscent).
The math of this is really easy; it doesn't involve anything more complicated than a dot product of vectors.
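Here is a minimal numerical sketch of that claim. To be clear about what is mine: the bouncing-signal dynamics aren't spelled out here, so this only verifies the target (the singlet-state statistics such a steady state would have to reproduce), and that the target really involves nothing fancier than a dot product.

```python
import numpy as np

# The steady-state statistics such a toy model would have to reproduce
# (the target, not the bouncing-signal dynamics themselves): the
# singlet joint distribution
#   P(s1, s2 | a, b) = (1 - s1*s2 * a.b) / 4,   s1, s2 = +/-1.
# Note it involves nothing more complicated than a dot product.

def E(a, b):
    """Correlation of outcomes for detector settings a, b (unit vectors)."""
    return sum(s1 * s2 * (1 - s1 * s2 * np.dot(a, b)) / 4
               for s1 in (+1, -1) for s2 in (+1, -1))   # equals -a.b

def unit(theta):
    return np.array([np.cos(theta), np.sin(theta)])

a, a2 = unit(0.0), unit(np.pi / 2)              # first detector's two settings
b, b2 = unit(np.pi / 4), unit(3 * np.pi / 4)    # second detector's two settings

chsh = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(chsh)   # ~2.828 = 2*sqrt(2): beyond the local-deterministic bound of 2
```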
Wait, signals propagating back and forth in time? What does that even mean?

There are a lot of really badly thought-out depictions of time travel in modern science fiction. For which I'm sort-of grateful, because over the years I've been annoyed by them, have therefore thought about what was wrong with them, and have thereby honed my thinking about time travel.
It seems to me the big problem with the idea of changing history is, what does "change history" mean? In order for something to change, it has to change relative to some other dimension. If a board is badly milled, its thickness may vary (change) along the length of the board, meaning its thickness depends on how far along its length you measure. The apparent magnitude of a star may vary with distance from the star. The position of a moving train varies over time. But if history changes, relative to what other dimension does it change? It isn't changing relative to any of the four dimensions of spacetime.
Let's suppose there is a fifth dimension, relative to which the entire four-dimensional spacetime continuum can change. As a simple name, let's call it "meta-time". This would, of course, raise lots of metaphysical questions; a favorite of mine is, if there's a fifth dimension of meta-time, why not a sixth of meta-meta-time, seventh of meta-meta-meta-time, and so proceed ad infinitum? Though fun to muse on, those sorts of questions aren't needed right now; just suppose for a moment there's meta-time, and let's see where it leads.
While we're at it, let's suppose this five-dimensional model is deterministic in the sense that, in principle, the state of spacetime at one moment in meta-time completely determines the state of spacetime at all later moments in meta-time. And let's also suppose the five-dimensional model is local in the sense that changes to spacetime (whatever they are) propagate, over meta-time, at some maximum rate. (So if you hop in your TARDIS and make a change to history on a particular day in 1963 London, that change to history propagates outward in space at, say, no more than 300,000 km per meta-second, and propagates forward and backward in time at no more than one second per meta-second.)
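Here is a minimal sketch of what locality relative to meta-time could mean; this is purely my own construction, since no formalism is given here. Spacetime is a (t, x) grid of values rewritten over meta-time, and a change made at one event spreads at most one cell per meta-step in space and in time.

```python
import numpy as np

# Minimal sketch (my construction, not a worked-out formalism) of
# locality relative to meta-time: "spacetime" is a (t, x) grid, and a
# change made at one event spreads, per meta-time step, at most one
# cell in x AND at most one cell in t: a light cone, in spacetime, for
# the propagation of changes-to-history.

T, X = 50, 50
history = np.zeros((T, X))              # spacetime, before the meddling
changed = np.zeros((T, X), dtype=bool)
changed[20, 25] = True                  # the TARDIS edit: one event

for tau in range(15):                   # evolve over meta-time
    grown = changed.copy()              # "changed" status spreads only
    grown[1:, :]  |= changed[:-1, :]    #   one step forward in time,
    grown[:-1, :] |= changed[1:, :]     #   one step backward in time,
    grown[:, 1:]  |= changed[:, :-1]    #   one step each way in space
    grown[:, :-1] |= changed[:, 1:]
    changed = grown
    history[changed] = 1.0              # rewrite the affected region

# After 15 meta-seconds, only events within 15 cells (in t or x) of the
# edit have been rewritten; the rest of history is as yet untouched.
```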
That bit of math I mentioned earlier, in which the QM probability distribution of Bell's Theorem is reproduced? It can be cast in just this kind of model — a five-dimensional system, with three dimensions of space, one of time, and one of meta-time, with determinism and locality relative to meta-time. Granted, it's only a toy: nothing like a serious attempt to model reality with any generality, just a one-off model describing the particular experimental set-up of Bell's Theorem.
I've got one more suggestion to make, though. And I still won't have a full-blown theory, such as Bohm had (there's stuff Bohm's theory didn't include, but it did have some generality to it), but imho this last point is worth the price of admission. I wouldn't call it "compelling", because atm this is all too outré to be compelling, but I for one found it... remarkable. When I first saw it, it actually made me laugh out loud.
Metaclassical physics

Wondering what a full-blown theory of physics of this sort might look like, I tried to envision what sorts of things would inhabit this five-dimensional model.
In classical physics, as remarked, space contains point particles interacting with fields. And when you add in time, those things that used to look like point particles appear instead as one-dimensional curves, tracing the motion of the particles through spacetime. I was momentarily perplexed when I tried to add in meta-time. Would the three events in Bell's experiment, two observations and one generation, interact through vibrations in these one-dimensional curves traced through spacetime? Modern string theory does make a big thing out of stuff vibrating. But a one-dimensional curve vibrating, or otherwise evolving, over meta-time traces out a two-dimensional surface in the five-dimensional space-time-metatime continuum; that's more structure, not less. We set out on this journey hoping to simplify things, hoping ideally to strike on the natural structure of physical reality and achieve resonance (the ring of truth?).
But wait. Why have point particles in space? Point particles in classical physics are nice because of the shape of the theory they produce — but points in space don't produce that shape of theory when they're moving through both time and meta-time. And those one-dimensional curves in spacetime don't play nearly so central a role in QM, notwithstanding that they make a sort of cameo appearance in Feynman's path integral formulation.
What is really fundamental to QM is the elephant in the room, the thing that makes such a hideous mess out of QM mathematics: observation, known at up-scale parties as wave function collapse. QM views spacetime as a continuum punctuated by zero-dimensional spacetime events — essentially, observations.
And as spacetime evolves over meta-time, a zero-dimensional spacetime event traces out a one-dimensional curve.
So now, apparently, we have a theory in which a continuum is populated by zero-dimensional points and fields, evolving deterministically over a separate dimension with a maximum rate of propagation. Which is so much like classical physics that (as mentioned) when I saw it I laughed out loud.