Tempora mutantur, nos et mutamur in illis.

(Times change, and we change with them.) — Latin adage, 16th-century Germany.

I want to understand how a certain kind of mathematical system can act as a foundation for a certain kind of physical cosmos. The ultimate goal of course would be to find a physical cosmos that matches the one we're in; but as a first step I'd like to show it's possible to produce certain kinds of basic features that seem prerequisite to any cosmos similar to the one we're in. A demonstration of that much ought, hopefully, to provide a starting point to explore how features of the mathematical system shape features of the emergent cosmos.

The particular kind of system I've been incrementally designing, over a by-now-lengthy series of posts (most recently yonder), is a rewriting system —think λ-calculus— where a "term" (really more of a graph) is a state of the whole spacetime continuum, a vast structure which is rewritten according to some local rewrite rules until it reaches some sort of "stable" state. The primitive elements of this state have two kinds of connections between them, *geometry* and *network*; and by some tricky geometry/network interplay I've been struggling with, gravity and the other fundamental forces are supposed to arise, while the laws of quantum physics emerge as an approximation good for subsystems sufficiently tiny compared to the cosmos as a whole. That's what's supposed to happen for physics of the real world, anyway.

To demonstrate the basic viability of the approach, I really need to make two things emerge from the system. The obvious puzzle in all this has been, from the start, how to coax quantum mechanics out of a classically deterministic rewriting system; inability to extract quantum mechanics from classical determinism has been the great stumbling block in devising alternatives to quantum mechanics for about as long as quantum mechanics has been around (harking back to von Neumann's 1932 no-go theorem). I established in a relatively recent post (thar) that the quintessential mathematical feature of quantum mechanics, to be derived, is some sort of wave equation involving signed magnitudes that add (providing a framework in which waves can cancel, so producing interference and other quantum weirdness). The geometry/network decomposition is key for my efforts to do that; *not* something one would be trying to achieve, evidently, if not for the particular sort of rewriting-based alternative mathematical model I'm trying to apply to the problem; but, contemplating this alternative cosmic structure in the abstract, starting from a welter of interconnected elements, one first has to ask where the geometry — and the network — and the distinction between the two — *come* from.

Time after time in these posts I set forth, for a given topic, all the background that seems relevant at the moment, sift through it, glean some new ideas, and then set it all aside and move on to another topic, till the earlier topic, developing quietly while the spotlight is elsewhere, becomes fresh again and offers enough to warrant revisiting. It's not a strategy for the impatient, but there *is* progress, as I notice looking back at some of my posts from a few years ago. The feasibility of the approach hinges on recognizing that its value is not contingent on coming up with some earth-shattering new development (like, say, a fully operational Theory of Everything). One is, of course, always *looking* for some earth-shattering new development; looking for it is what gives the whole enterprise shape, and one also doesn't want to become one of those historical footnotes who after years of searching brushed past some precious insight and failed to recognize it, so that it had to wait for some other researcher to discover it later. But, as I noted early in this series, the simple act of pointing out *alternatives* to a prevailing paradigm in (say) physics is beneficial to the whole subject, like tilling soil to aerate it. Science works best with alternatives to choose between; and scientists work best when their thoughts and minds are well-limbered by stretching exercises. For these purposes, in fact, the more alternatives the merrier, so that the less successful a given post is in reaching a focused conclusion, the more likely it is to compensate in variety of alternatives.

In this series of physics posts, I keep hoping to get down to mathematical brass tacks; but very few posts in the series actually do so (with a recent exception in June of last year). Alas, though the current post does turn its attention more toward mathematical structure, it doesn't actually achieve concrete specifics. Getting to the brass tacks requires first working out where they ought to be put.

Contents

Dramatis personae

Connections

Termination

Duality

Scale

Dramatis personae

A rewriting calculus is defined by its *syntax* and *rewriting rules*; for a given computation, one also needs to know the *start term*. In this case, we'll put off for the moment worrying about the starting configuration for our system.

I'm guessing that, to really make this seeming-randomness trick work, the cosmos ought to be made up of some truly vast number of events; say, 10^{60}, or 10^{80}, or on up from there. If the network connections are *really* more-or-less-uniformly distributed over the whole cosmos, irrespective of the geometry, then there's no obvious reason not to count events that occur, say, within the event horizon of a black hole, and from anywhere/anywhen in spacetime, which could add up to much more than the currently estimated number of particles in the universe. Speculatively (which is the mode all of this is in, of course), if the galaxy-sized phenomena that motivate the dark-matter hypothesis are too big, relative to the cosmos as a whole, for the quantum approximation to work properly —so one would expect these phenomena to sit oddly with our lesser-scale physics— that would seem to suggest that the total size of the cosmos is *finite* (since in an infinite cosmos, the ratio of the size of a galaxy to the size of the universe would be exactly zero, no different than the ratio for an electron). Although, as an alternative, one might suppose such an effect could derive, in an infinite cosmos, from network connections that aren't distributed altogether uniformly across the cosmos (so that connections with the infinite bulk of things get damped out).

With the sort of size presumed necessary to the properties of interest, I won't be able to get away with the sort of size-based simplifying trick I've gotten away with before, as with a toy cosmos that has only four possible states. We can't expect to run a simulation with program states comparable in size to the cosmos; Moore's law won't stretch that far. For this sort of research I'd expect to have to learn, if not invent, some tools well outside my familiar haunts.

The form of cosmic rewrite rules seems very much up for grabs, and I've been modeling guesses on λ-like calculi while trying to stay open to pretty much any outré possibility that might suggest itself. In λ-like rewriting, each rewriting rule has a *redex pattern*, a local geometric shape that must be matched; it occurs, generally, only in the geometry, with no constraints on the network. The redex pattern may call for the existence of a tangential network connection —the β-rule of λ-calculus does this, calling for a variable binding as part of the pattern— and the tangential connection may be rearranged when applying the rule, just as the local geometry specified by the redex pattern may be rearranged. Classical λ-calculus, however, obeys *hygiene* and *co-hygiene* conditions: hygiene prohibits the rewrite rule from corrupting any part of the network that isn't tangent to the redex pattern, while co-hygiene prohibits the rewrite rule from corrupting any part of the geometry that isn't within the redex pattern. Impure variants of λ-calculus violate co-hygiene, but still obey hygiene. The guess I've been exploring is that the rewriting rules of physics are hygienic (and Church-Rosser), and gravity is co-hygienic while the other fundamental forces are non-co-hygienic.

I've lately had in mind that, to produce the right sort of probability distributions, the fluctuations of cosmic rewriting ought to, in essence, *compare* the different possible behaviors of the subsystem-under-consideration, akin to numerical solution of a problem in the calculus of variations.
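A minimal sketch of what such a comparison might look like numerically (a toy discrete action of my own devising, not anything derived from the cosmic rewriting itself): among candidate paths with fixed endpoints, the one of least action wins.

```python
# Toy calculus-of-variations comparison: candidate discrete paths with
# fixed endpoints are *compared* by their action.  For this stand-in
# action (sum of squared increments, a free particle), the straight
# line is the minimizer.

def action(path):
    """Discrete stand-in for an action functional: sum of squared increments."""
    return sum((b - a) ** 2 for a, b in zip(path, path[1:]))

candidates = [
    [0.0, 0.25, 0.5, 0.75, 1.0],    # straight line from 0 to 1
    [0.0, 0.8, 0.2, 0.9, 1.0],      # wiggly path, same endpoints
    [0.0, 0.5, 0.5, 0.5, 1.0],      # another alternative
]

# "Comparing the possible behaviors": pick the least-action candidate.
best = min(candidates, key=action)
```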

Since the shape of spacetime is going to have to emerge from all this, the question arises —again— of why some connections to an event should be "geometry" while others are "network". The geometry is relatively regular and, one supposes, stable, while the network should be irregular and highly volatile; in fact the seeming-randomness *depends* on its being irregular and volatile. Conceivably, the redex patterns are geometric (or mostly geometric) because the engagement of those connections within the redex patterns *causes* those connections to be geometric in character (regular, stable), relative to the evolution of the cosmic state.

The overall character of the network is another emergent feature likely worth attention. Network connections in λ-calculus are grouped into variables, sub-nets defined by a binding and its bound instances, in terms of which hygiene is understood. Variables, as an example of network structure, seem built-in rather than emergent; the β-rule of λ-calculus is apparently too wholesale a rewriting to readily foster ubiquitous emergent network structure. Physics, though, seems likely to engage less wholesale rewriting, from which there should also be emergent structure, some sort of *lumpiness* —macrostructures— such that (at a guess) incremental scrambling of network connections would tend to circulate those connections only within a particular lump/macrostructure. The apparent alternative to such lumpiness would be a degree of uniform distribution that feels, to my intuition anyway, unnatural. One supposes the lumpiness would come into play in the nature of stable states that the system eventually settles into, and perhaps the size and character of the macrostructures would determine at what scale the quantum approximation ceases to hold.

Connections

Clearly, how the connections between nodes —the edges in the graph— are set up is the first thing we need to know, without which we can't imagine anything else concrete about the calculus. Peripheral to that is whether the nodes (or, for that matter, the edges) are *decorated*, that is, labeled with additional information.

In λ-calculus, the geometric connections are of just three forms, corresponding to the three syntactic forms in the calculus: a variable instance has one parent and no children; a combination node has one parent and two children, operator and operand; and a λ-expression has one parent and one child, the body of the function. For network connections, ordinary λ-calculus has one-to-many connections from each binding to its bound instances. These λ network structures —variables— are correlated with the geometry; the instances of a variable can be arbitrarily scattered through the term, but the binding of the variable, of which there is exactly one, is the sole asymmetry of the variable and gives it an effective singular location in the syntax tree, required to be an ancestor in the tree of all the locations of the instances. Interestingly, in the vau-calculus generalization of λ-calculus, the side-effectful bindings are somewhat less uniquely tied to a fixed location in the syntax tree, but are still one-per-variable and required to be located above all instances.
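As a concrete (and purely illustrative) rendering of this two-sorted structure, here is a toy Python sketch separating the geometric parent/child edges from the network binding-to-instance edges, and checking the binder-above-instances invariant just described; all the names are hypothetical.

```python
# Toy two-sorted term graph for lambda-calculus.  Geometry = parent/child
# syntax edges; network = one-to-many edges from each binding to its bound
# instances.  The invariant: a binding must be a tree ancestor of every
# one of its instances.

# Geometric arity (number of children) of each node kind:
GEOMETRIC_CHILDREN = {
    "var": 0,    # a variable instance: one parent, no children
    "app": 2,    # a combination: operator and operand
    "lam": 1,    # a lambda-expression: the body of the function
}

def ancestors(parent, node):
    """Walk geometric parent edges from a node up to the root."""
    seen = []
    while node is not None:
        seen.append(node)
        node = parent.get(node)
    return seen

def network_well_formed(parent, bindings):
    """Every binding node must be a geometric ancestor of all its instances."""
    return all(b in ancestors(parent, inst)
               for b, insts in bindings.items()
               for inst in insts)

# The term \x. x x: 'parent' is the geometry, 'bindings' is the network.
parent = {"app": "lam", "x1": "app", "x2": "app", "lam": None}
bindings = {"lam": ["x1", "x2"]}
```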

Physics doesn't obviously lend itself to a tree structure; there's no apparent way for a binding to be "above" its instances, nor apparent support for an asymmetric network structure. Symmetric structures would seem indicated. A conceivable alternative strategy might use time as the "vertical" dimension of a tree-like geometry, though this would seem rather contrary to the loss of absolute time in relativity.

A major spectrum of design choice is the arity of network structures, starting with whether network structures should have fixed arity, or unfixed as in λ-like calculi. Unfixed arity would raise the question of what size the structures would tend to have in a stable state. Macrostructures, "lumps" of structures, are a consideration even with fixed arity.

Termination
In exploring these realms of possible theory, I often look for ways to defer aspects of the theory till later, as a sort of Gordian-knot-cutting (reducing how many intractable questions I have to tackle all at once). I've routinely left unspecified, in such deferral, just what it should mean for the cosmic rewriting system to "settle into a stable state". However, at this point we really have no choice but to confront the question, because our explicit main concern is with mathematical properties of the probability distribution of stable states of the system, and so we can do nothing concrete without pinning down what we mean by *stable state*.

In physics, one tends to think of stability in terms of asymptotic behavior in a metric space; afaics, exponential stability for linear systems, Lyapunov stability for nonlinear. In rewriting calculi, on the other hand, one generally looks for an *irreducible form*, a final state from which no further rewriting is possible. One could also imagine some sort of *cycle* of states that repeat forever, though making that work would require answers to some logistical questions. Stability (cyclic or otherwise) might have to do with constancy of which macrostructure each of an element's network connections associates to.
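The rewriting-calculus notion of stability is easy to make concrete in miniature. The sketch below (an arbitrary toy rule of my own, in no way the cosmic rules under discussion) just iterates a local rewrite until no redex remains:

```python
# Toy illustration of rewriting-calculus stability: apply a local rewrite
# rule until an *irreducible form* is reached, i.e. no rule matches.

RULES = [("ba", "ab")]   # one local rule: swap an adjacent 'b','a' pair

def step(state):
    """Apply the first matching rule at the leftmost position, if any."""
    for lhs, rhs in RULES:
        i = state.find(lhs)
        if i >= 0:
            return state[:i] + rhs + state[i + len(lhs):], True
    return state, False

def normalize(state, max_steps=10_000):
    """Rewrite until irreducible (with a budget -- termination isn't free)."""
    for _ in range(max_steps):
        state, changed = step(state)
        if not changed:
            return state          # irreducible: no redex remains
    raise RuntimeError("no irreducible form found within step budget")
```

So `normalize("bbaa")` bubbles the a's leftward until it reaches the stable state `"aabb"`, from which no further rewriting is possible.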

If rewriting effectively explores the curvature of the action function (per the calculus of variations as mentioned earlier), it isn't immediately obvious how that would then lead to asymptotic stability. At any rate, different notions of stability lead to wildly different mathematical developments of the probability distribution, hence this is a major point to resolve. The choice of stability criterion may depend on recognizing what criterion can be used in some technique that arrives at the right sort of probability distribution.

There's an offbeat idea lately proposed by Tim Palmer called the invariant set postulate. Palmer, so I gather, is a mathematical physicist deeply involved in weather prediction, from which he's drawn some ideas to apply back to fundamental physics. A familiar pattern in nonlinear systems, apparently, is a fractal subset of state space which, under the dynamics of the system, the system tends to converge upon and, if the system state actually comes within the set, remains within thereafter. The invariant set should itself be a metric space of lower dimension than the state space as a whole and (if I'm tracking him) uncomputable. Palmer proposes to *postulate* the existence of some such invariant subset of the quantum state space of the universe, to which the actual state of the universe is required to belong; and requiring the state of the universe to belong to this invariant set amounts to requiring non-independence between elements of the universe, providing an "out" to cope with no-go theorems such as Bell's theorem or the Kochen–Specker theorem. Palmer notes that while, in the sort of nonlinear systems this idea comes from, the invariant set arises as a consequence of the underlying dynamics of the system, for quantum mechanics he's postulating the invariant set with no underlying dynamics generating it. This seems to be where my approach differs fundamentally from his: I suppose an underlying dynamics, produced by my cosmic rewriting operation, from which one would expect to generate such an invariant set.
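As a toy stand-in for the kind of behavior Palmer draws on (vastly simpler than the fractal sets he has in mind), the logistic map at a period-doubled parameter converges onto an invariant set generated by its own dynamics, here a two-point cycle:

```python
# Toy illustration of an invariant set arising from underlying dynamics:
# the logistic map x -> r*x*(1-x) at r = 3.2 converges onto a period-2
# cycle -- a minimal stand-in for the fractal invariant sets Palmer invokes.

def logistic(x, r=3.2):
    return r * x * (1.0 - x)

def settle(x0, transient=1000):
    """Run past the transient so the state lands on the invariant set."""
    x = x0
    for _ in range(transient):
        x = logistic(x)
    return x

x = settle(0.123)
a, b = x, logistic(x)   # the two points of the cycle
# Once on the invariant set, the dynamics stays within it: f(f(a)) == a.
```

The point of contrast with Palmer: here the invariant set is *generated* by the dynamics, which is the relationship my rewriting approach supposes, rather than postulated outright.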

Re Bell and, especially, Kochen–Specker: those no-go theorems rule out certain kinds of mutual independence between separate observations under quantum mechanics; but the theorems can be satisfied —"coped with"— by imposing some quite subtle constraints, such as Palmer's invariant set postulate. It seems possible that Church-Rosser-ness, which tampers with independence constraints between alternative rewrite sequences, may also suffice to cope with the theorems.

Duality

What if we treated the lumpy macrostructures of the universe as if they were primitive elements; would it be possible to then describe the primitive elements of the universe as macrostructures? Some caution is due here for whether this micro/macro duality would belong to the fundamental structure of the cosmos or to an approximation. (Of course, this whole speculative side trip could be a wild goose chase; but, as usual, on one hand it might not be a wild goose chase, and on the other hand wild-goose-chasing can be good exercise.)

Perhaps one could have two coupled sets of elements, each serving as the macrostructures for the other. The coupling between them would be network (i.e., non-geometric), through which presumably each of the two systems would provide the other with quantum-like character. In general the two would have different sorts of primitive elements and different interacting forces (that is, different syntax and rewrite-rules). Though it seems likely the duals would be quite different in general, one might wonder whether in a special case they could sometimes have the same character, in which case one might even ask whether the two could settle into identity, a single system acting as its own macro-dual.

For such dualities to make sense at all, one would first have to work out how the geometry of each of the two systems affects the dynamics of the other system — presumably, manifesting through the network as some sort of probabilistic property. Constructing any simple system of this sort, showing that it can exhibit the sort of quantum-like properties we're looking for, could be a worthwhile proof-of-concept, providing a buoy marker for subsequent explorations.

On the face of it, a basic structural difficulty with this idea is that primitive elements of a cosmic system, if they resemble individual syntax nodes of a λ-calculus term, have a relatively small fixed upper bound on how many macrostructures they can be attached to, whereas a macrostructure may be attached to a vast number of such primitive elements. However, there *may* be a way around this.

I've discussed before the phenomenon of quasiparticles, group behaviors in a quantum-mechanical system that appear (up to a point) as if they were elementary units; such eldritch creatures as phonons and holes. Quantum mechanics is fairly tolerant of inventing such beasts; they are overtly approximations of vastly complicated underlying systems. Conventionally "elementary" particles can't readily be analyzed in the same way —as approximations of vastly complicated systems at an even smaller scale— because quantum mechanics is inclined to stop at Planck scale; but I suggested one might achieve a similar effect by importing the complexity through network connections from the very-large-scale cosmos, as if the scale of the universe were wrapping around from the very small to the very large.

We're now suggesting that network connections provide the quantum-like probability distributions, at whatever scale affords these distributions. Moreover, we have this puzzle of imbalance between, ostensibly, small bounded network arity of primitive elements (analogous to nodes in a syntax tree) and large, possibly unbounded, network arity of macrostructures. The prospect arises that perhaps the conventionally "elementary" particles —quarks and their ilk— could be *already* very large structures, assemblages of very many primitive elements. In the analogy to λ-calculus, a quark would correspond to a subterm, with a great deal of internal structure, rather than to a parse-tree-node with strictly bounded structure. The quark could then have a very large network arity, after all. Quantum behavior would presumably arise from a favorable interaction between the influence of network connections to macrostructures at a very large cosmic scale, and the influence of geometric connections to microstructures at a very small scale. The structural interactions involved ought to be fascinating. It seems likely, on the face of it, that the macrostructures, exhibiting altogether different patterns of network connections than the corresponding microstructures, would also have different sorts of probability distributions, not so much quantum as *co-quantum* — whatever, exactly, that would turn out to mean.

If quantum mechanics is, then, an approximation arising from an interaction of influences from geometric connections to the very small and network connections to the very large, we would expect the approximation to hold, not at the small end of the range of scales, but only at a subrange of intermediate scales — not too large and at the same time not too small. In studying the dynamics of model rewriting systems, our attention should then be directed to the way these two sorts of connections can interact to reach a balance from which the quantum approximation can emerge.

At a wild, rhyming guess, I'll suggest that the larger a quantum "particle" (i.e., the larger the number of primitive elements within it), the smaller each corresponding macrostructure. Thus, as the quanta get larger, the macrostructures get smaller, heading toward a meeting somewhere in the mid scale — notionally, around the square root of the number of primitive elements in the cosmos — with the quantum approximation breaking down somewhere along the way. Presumably, the approximation also requires that the macrostructures not be *too large*, hence that the quanta not be too small. Spinning out the speculation, on a logarithmic scale, one might imagine the quantum approximation working tolerably well for, say, about the middle third of the lower half of the scale, with the corresponding macrostructures occupying the middle third of the upper half of the scale. This would put the quantum realm at a scale from the number of cosmic elements raised to the 1/3 power, down to the number of cosmic elements raised to the 1/6 power. For example, if the number of cosmic elements were 10^{120}, quantum scale would be from 10^{40} down to 10^{20} elements. The takeaway lesson here is that, even if those guesses are off by quite a lot, the number of primitive elements in a minimal quantum could still be rather humongous.
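The arithmetic of that guess is easy to check. A small sketch, taking the text's normalized log scale at face value:

```python
# Back-of-envelope check of the scale guess in the text.  On a log scale
# from 0 to N_log (log10 of the guessed number of cosmic elements), the
# quantum realm is guessed to occupy the middle third of the lower half.

N_log = 120                 # guess from the text: 10**120 primitive elements

half = N_log / 2            # lower half of the log scale: [0, 60]
lower = half / 3            # bottom of the middle third of that half
upper = 2 * half / 3        # top of the middle third of that half

# So the quantum realm runs from N**(1/6) = 10**20 elements up to
# N**(1/3) = 10**40 elements -- matching the exponents in the text.
```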

Study of the emergence of quasiparticles seems indicated.

Digital physics, with a non-cubic 3D Game of Life model as a starting point, might be used to build this complete model.

I have seen videos of 2D models similar to the Game of Life simulating particles with very simple rules.

A timely reminder to further investigate the digital-physics approach. I've various reasons to be skeptical about that approach, which I think I may have mentioned in past blog posts; but skepticism (which should be ubiquitous anyway) is certainly no reason to neglect a clearly relevant branching alternative. Thanks.
