There may be said to be two classes of people in the world; those who constantly divide the people of the world into two classes, and those who do not.
— Robert Benchley, Vanity Fair, February 1920.
In this post I suggest, and explore, an alternative way of thinking about the relationship between relativity and quantum mechanics. This is part of my broad program of shaking up our thinking with alternative approaches to basic physics, but not part of my ongoing series, within that broad program, exploring term-rewriting physics; the idea here and the term-rewriting theme might be combinable, but, going into this, I see neither an obvious need for that to be possible nor an obvious way for it to work.
Two of the deepest, most elemental mysteries of modern physics are (1) why gravity differs from the other fundamental forces, and (2) why the two great theories of post-classical basic physics, relativity and quantum mechanics, are so profoundly incompatible with each other. These two mysteries are evidently related to each other, focusing on different parts of the same elephant. The point in both cases is to reconcile the incompatibilities: to unify gravity with the other forces, and to unify relativity with quantum mechanics. My long-running exploration of term-rewriting calculi for basic physics focuses on the first question, of gravity versus the other forces, starting from an observed structural similarity between term-rewriting calculi and the basic forces. This post focuses on the second question, of the rift between the two great theories, starting from a thought experiment I conjured for a post years ago when explaining the deep connection between special relativity and locality (here).
In past posts I've mentioned several reasons for accumulating a repertory of alternative approaches to a subject, even if the alternatives are not mutually compatible. One may draw on such a repertory for spare parts to use when assembling a new theory, and even spare parts not directly used may provide inspiration (here). One may use alternative ideas as anchors when wading into an established paradigm that is easy to get trapped in, the better to tow oneself out again after acquiring whatever one waded in to learn: that is, the more alternative approaches one knows about going in, the less likely that during close contact with the paradigm one will be seduced into imagining the paradigm is the only way things can be done (there). One may use alternative ideas to ensure that when one does choose to accept some facet of an established paradigm, it is an informed decision rather than made through not knowing of any other choice (over thar). And, aside from all those things, I'm a great believer in unorthodoxy as a mental limbering-up exercise (numerous mentions scattered throughout this blog).
This kind of repertory-accumulation calls for a sort of skeptical open-mindedness that doesn't arise in the normal practice of paradigm science, and engages hazards normal science doesn't have to cope with (hazards on both sides; pro and con, open-mindedness and skepticism). The essential power of paradigm science, per Kuhn's Structure of Scientific Revolutions, is that researchers can throw themselves wholeheartedly into exploring the potential of the paradigm because they are not distracted by any alternative to the paradigm: all rivals have been ruthlessly suppressed. The ruthless suppression means, though, that the mainstream view of alternative approaches is distorted not only by the perspective mismatch between different paradigms, but by the mainstream deliberately heaping ridicule on its rivals. (I've been especially aware of this in my dissertation area.) Alternative-science researchers often add further murk, not only by being sometimes under-cautious themselves, but also through their style of presentation. I recall once studying (briefly) a fairly impressive alternative-science book whose main quirk, that I noticed at the time, was a casually asserted conviction by the author that their mentor's research, which they were apparently quite competently expanding upon, had been arbitrarily suppressed by mainstream scientists who saw it as a threat to their employment. Yikes. This part of the author's presentation really did sound very crackpot-fringe, except that it was quite bizarrely off-kilter from the classic crackpot rant — because it wasn't a rant, but calmly presented as pedestrian fact. I eventually concluded this was likely explained by another part of Kuhn's account of a paradigm: the paradigm provides a model of what valid research in the field should look like. This researcher's mentor likely had ranted that way, and they simply took the claims on-board. Which I find a bit mind-blowing; but my immediate point is that one has to take alternative-science works one-at-a-time, expecting to encounter varying degrees and flavors of psychoceramics that may call for customized handling: what to filter out, what to take seriously, and what, occasionally, to flat-out discard.
At any rate, the current post operates, as one might hope to find in explorations of this sort, on two levels: it explores a specific approach, and in the process it gathers insights into the larger space of possible approaches that this specific approach belongs to. Honestly, going into this with the specific approach in mind, I didn't foresee the extent of the larger insights coming out of it. A single alternative hypothesis, such as term-rewriting physics, breaks out of the conventional mold but, with only two data points, doesn't offer much of a view of the broader range of possibilities, really only occasional glimpses; but two alternative hypotheses —or even better, more than two; the more the merrier, presuming they're very different from each other— allow some significant triangulation, for a stereoscopic view of the terrain. I do mean to give the specific hypothesis here its full measure of attention while I'm about it; but the general insights offer quite an impressive vista, and I'll be studying the viewshed extensively as I go. Or, in a more navigational metaphor, this is going to be something of a wild ride through the space of possible theories, with tangent paths densely packed along the way, shooting off in all directions; the contrast with my past, relatively much tamer explorations came as, candidly, something of a shock to me.
Rather a heavy jolt, in fact. With all those arguments in favor of exploring many different approaches to a problem, this time the range of possibilities has come roaring in on me and I'm experiencing a potential downside of the technique. Alternative scientists have this in common with mainstream scientists: when it comes time to pursue their chosen paradigm in-depth, they need some way to put on blinders and ignore the clamor of alternatives. There's one set of ways by which the mainstream blithely dismisses fringe theories, another set by which scientists working on the fringes blithely dismiss mainstream theories, because both groups have to do this in order to get any in-depth work done. The parallels go deeper, I'm observing; each group, in order to eliminate distractions without exhausting themselves, resorts to shortcut techniques that the other group criticizes with some justice — perhaps worth a blog post in itself, if I can assemble one. But, on the other hand, if one ever does begin (as I have been trying to do) to see the full range of possible theories, it can be overwhelming. I find myself inadequately prepared, and in need of some novel strategies to cope.
A side issue, btw, on which I admit to some ambivalence has been the role of detailed math in my exploratory posts on alternative physics. On one hand it seems highly advisable not to get lost in details when looking at multiple paradigms that differ in much higher-level, structural ways. On the other hand, I'm anxious to get down to brass tacks. Sometimes inspiration starts close to the detailed math; as in my post on metaclassical physics, which started with an algorithm for generating a probability distribution. My long-running series on co-hygiene started with a high-level structural similarity, lacking low-level details, between the mathematics of term-rewriting calculi and the basic forces in physics. For a structural inspiration to advance much beyond the inspirational stage, someone must furnish it with some solid mathematical details; thus string theory emerged in the early 1970s, when an approach to quantum mechanics that had been kicking about since Werner Heisenberg proposed it some three decades earlier (roughly, that in describing elementary particles one should abandon space and time entirely, treating a particle as a black box with inputs and outputs) merged with some mathematics for the description of vibrating strings. An extended theory-development process, spanning years or decades, is likely to be punctuated by a series of upward steps in mathematical specifics, perhaps with long intervals between. The nascent idea of the current post, like my term-rewriting approach to physics, is in the market for a further infusion of low-level math to give it clearer form.
This post does not have a punchline. Conclusions reached are mostly about possible paths that might be worth further exploration; there are lots of those, and identifying them feels worthwhile in itself; but no particular path is chased down to its end here, not even within the specific central hypothesis of the post. Expect to find questions here, rather than answers.
Contents
What things look like
Why two theories
Slow things
Target
Third law
Second law
Length contraction
Shockwaves
Geometry
Fields
Where to go from here
What things look like
To begin slowly, here's a simple and important point: the apparent "shape" of any principle in physics depends not only on the underlying mathematical model of what is happening, but on how one interprets it. No, my demonstration of this point is not about the interpretation of quantum mechanics; it's about the interpretation of relativity.
I submit that relativity does not have to be understood as preventing us from traveling faster than light; within a fairly reasonable set of assumptions, it can be understood as allowing travel at arbitrarily high speeds while shaping what such travel looks like and what consequences it has. To show this, I propose the following thought experiment.
Let us suppose we have what amounts to a giant ruler stretching, say, a thousand light-years; of negligible mass, so we won't have to worry about that detail; marked off in light-seconds, light-minutes, light-hours, and so on. We start at one end of the ruler, and we wish to travel very fast along the length of the ruler. What happens as we try to do that?
According to the usual interpretation of relativity, the faster we go, the more our clock slows down, preventing us from ever actually reaching the speed of light. A stationary observer —stationary with respect to the ruler, that is— sees our clock slowing down so that although we may continue to accelerate, the effort we put in is spread out over a longer and longer time and we never actually reach light-speed. However, suppose we choose to define our velocity in terms of the ticks of our clock and the markings on the ruler. Keep in mind, this is purely a matter of interpretation: a change in which ratio of numbers we choose to call "velocity". As we continue to increase our speed along the length of the ruler, the ruler appears —to us— to contract (although, to the stationary observer, we contract along our direction of motion). If we keep speeding up, after a while we'll be passing a light-second-marker on the ruler once each second-on-our-clock; what the stationary observer sees is that we travel one light-second along the ruler in somewhat more than one second, but our clock is now running somewhat more slowly so that it only advances by one second during that time. Continuing to accelerate, we can reach a point where we're passing ten such marks on the ruler for each tick of our clock; to us it appears the ruler has greatly contracted, while to a stationary observer it appears our clock has greatly slowed so that we pass ten light-second-marks of the ruler while our clock only advances by one second. According to our chosen interpretation, this means we are traveling at ten times the speed of light. Supposing we have the means to accelerate enough (a technological detail we're stipulating to), we can reach a "velocity" of a hundred times the speed of light, or a thousand, or whatever other (finite) goal we choose. Granted, in conventional terms it still appears to the stationary observer that our clock is slowing down and we stay below the speed of light; while to us they appear to contract in our direction of travel, and their clocks slow down behind us or speed up ahead of us (afaics; some of these details are usually omitted from simplified descriptions of the thought experiment). When we reach the other end of the ruler and slow to a stop, more than a thousand years will have elapsed for the stationary observer; while to us the elapsed time will have been much shorter, and the extensive aging of stationary observers will have been, under our interpretation, a strange consequence of our rapid travel.
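As a sanity check on the arithmetic of this reinterpretation: the quantity we've chosen to call "velocity" (ruler-marks passed per tick of our own clock) is what relativity texts call proper velocity, or celerity, w = γv, which grows without bound as the coordinate velocity v approaches c. A minimal numerical sketch, in units where c = 1 and with arbitrary sample velocities:

```python
import math

def gamma(v):
    """Lorentz factor for coordinate velocity v, in units where c = 1."""
    return 1.0 / math.sqrt(1.0 - v * v)

def ruler_marks_per_own_second(v):
    """The reinterpreted 'velocity': ruler-distance covered per tick of the
    traveler's own clock, w = gamma(v) * v.  Unbounded as v approaches 1."""
    return gamma(v) * v

for v in (0.5, 0.9, 0.99, 0.995, 0.99995):
    print(f"coordinate velocity {v}c -> reinterpreted velocity {ruler_marks_per_own_second(v):.2f}c")
```

At a coordinate velocity of 0.995c the reinterpreted velocity is already about 10c (ten ruler-marks per own-second), and at 0.99995c it is about 100c.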
I am not, atm, in any way advocating this unconventional interpretation of relativity. I only draw attention to it as a demonstration that what the predictions of a mathematical model "look like" depends on how you choose to interpret them.
Why two theories
Waves and particles have been alternative views of elementary physical phenomena at least since Descartes; sometimes one is more advantageous, sometimes the other. Waves, of course, are fields in motion, an extraordinarily complex —one might argue, unboundedly complex— phenomenon whose complexity may, in itself, be a reason to sometimes simplify one's view of things by considering particles. There is also a natural drive to use particles when the physical effect under consideration is inherently discrete. In principle, it seems, one can get along almost exclusively with either one, but then one has to introduce just a bit of the other. J.S. Bell noted, especially, that quantum mechanics describes everything exclusively with waves up until it becomes necessary in practice to account for discrete events, at which point one arbitrarily interjects wave-function collapse. All of this is a form of the discrete/continuous balance, which I discussed in a previous post (yonder).
No, I don't propose here to view wave/particle duality (nor any of the other complementary pairs I've mentioned so far) as the origin of the split between quantum mechanics and relativity. But with all this setting the stage, I'm now ready to introduce my main theme for the post, which starts out, at least, as another simple thought experiment, not based on any sort of modern physics.
Consider a "classical" universe, with three Euclidean dimensions of space and one of time, and classical particles moving about in this space at arbitrary velocities. That is, let us suppose a universe with no curvature to its spacetime, and no fields.
This scenario is deliberately just a sketch, of course, and an unavoidably incomplete one; its point is just the sort of simplification noted above as an advantage of particles. Without fields we have, following the usual sort of classical treatment, no action-at-a-distance at all, and our point-like particles flying about with no forces between them will continually miss each other and thus not affect each other. So, yes, our sketch is going to miss some things because it's just a sketch; but let's see whether there's something interesting we can get out of it. (If we didn't play around with various parts of a problem, rather than trying to take on everything at once, it's doubtful we'd make any progress at all.)
If it helps to have a more specific mathematical framework, here's a way to think of it; keeping in mind that, because this is just a stepping stone on our way to some unimagined theory we'd like to find, whatever degrees of freedom we build into our mathematical framework may be much less structurally radical than what we eventually end up changing, later on. That said: Our central starting premise is that the entire configuration of the universe is a set of particles with position, velocity, and presumably some other properties which we leave unspecified for now. How each particle behaves over time must then be some sort of function of the whole system, also not precisely determined for now but, broadly speaking, we expect each particle to travel most of the time in a more-or-less-straight line, and expect each particle to be significantly affected by other particles only when it comes close to them even though interactions won't actually require point-collisions (since, on the face of it, point-collisions should be infinitely unlikely). The two unknown elements of the scenario, which we suppose are small enough for us to learn something useful from the scenario even while initially blurring them out, are the other properties that particles have (conventionally mass, charge, etc.), and the way particles passing close enough to each other deflect each other's trajectories.
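Since the sketch is deliberately a framework with holes in it, it can be rendered almost directly as a data structure with the two unknown elements left explicitly unspecified. Everything below is hypothetical scaffolding, names and all, just to fix ideas; Python serves here merely as annotated pseudocode:

```python
from dataclasses import dataclass, field
from typing import Callable

Vec3 = tuple[float, float, float]

def add(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

def scale(a: Vec3, s: float) -> Vec3:
    return (a[0] * s, a[1] * s, a[2] * s)

@dataclass
class Particle:
    position: Vec3
    velocity: Vec3                             # unbounded: no speed limit here
    other: dict = field(default_factory=dict)  # unknown element 1: further properties

# Unknown element 2: how a close encounter deflects a particle's trajectory.
Deflection = Callable[[Particle, Particle], Vec3]

def near(p: Particle, q: Particle, radius: float = 1.0) -> bool:
    """Interactions happen at close approach, not at (infinitely unlikely)
    point-collisions."""
    return sum((a - b) ** 2 for a, b in zip(p.position, q.position)) <= radius ** 2

def step(universe: list[Particle], deflect: Deflection, dt: float) -> None:
    """Advance the whole configuration by dt: mostly straight-line travel,
    with close encounters delegated to the unspecified deflection rule."""
    for p in universe:
        for q in universe:
            if q is not p and near(p, q):
                p.velocity = add(p.velocity, scale(deflect(p, q), dt))
    for p in universe:
        p.position = add(p.position, scale(p.velocity, dt))
```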
Suppose we are interested in what happens in some particular volume of space over some particular interval of time; as a for-instance, it could be a spherical volume of space with a radius of "one foot" —by which we'll mean a light-nanosecond, for which a foot is a standard (and fairly good) approximation— and an interval of one nanosecond.
If we had relativity, guaranteeing that particles travel no faster than the speed of light, we could safely assume that, for a particle to impinge on our volume-of-interest before the end of the time interval, the particle would have to have been within one foot of the volume at the start of the interval, since at the speed of light the particle could only travel one foot, one light-nanosecond, by the end of the interval. Even considering secondary effects with a series of particles affecting each other, no such chain of effects could impact the volume by the end of the interval if the chain started more than a foot away from the volume. This is what makes relativity a local theory: there is no need to consider anything outside an initial sphere with a two-foot radius. In practice, the advantage isn't that we're sure we haven't omitted something within that two-foot radius from our description of the system, but rather that we can get away with blithely claiming there's nothing else there; if we've taken pains to "control" any actual experiment, our claim will usually have been true, and on the rare occasions when there really was something else there, we can usually throw out that data on grounds that the experiment was insufficiently controlled.
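The bookkeeping of that locality bound is trivial, but worth pinning down (a minimal sketch, with distances measured in feet, i.e. light-nanoseconds):

```python
def could_influence(start_distance_ft: float, interval_ns: float) -> bool:
    """Under a light-speed limit (one foot per nanosecond, approximately),
    can anything starting this far outside the volume of interest reach it
    within the interval?  Chains of intermediaries obey the same bound,
    since every link is itself limited to light-speed."""
    return start_distance_ft <= 1.0 * interval_ns

print(could_influence(0.5, 1.0))  # True: inside the causal horizon
print(could_influence(1.0, 1.0))  # True: borderline, light-speed the whole way
print(could_influence(1.5, 1.0))  # False: causally irrelevant to this nanosecond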
However, in this thought experiment we're supposing a "classical" (i.e., Euclidean) universe, with no speed limit, so that in fact a particle, if it's moving fast enough, could be anywhere in the universe at the start of the interval and still pass through the volume-of-interest during the one nanosecond we're studying. (Yes, even if the universe were of infinite size, since the velocity is unbounded too.) It's still true, more or less, that not every particle in the universe can impinge on the volume-of-interest during the interval-of-interest. We can mostly disregard any particle moving too slowly to overcome its distance from the volume at the start of the interval — unless, of course, some distant slow particle gets deflected, and thus sped up, by some other particle, but we're largely disregarding such secondary effects for the nonce; and there's also the complication of whether a distant, fast particle is aimed in the wrong direction, which must also take into account its possible deflection by distant slow particles (or even by other fast particles), again bringing our unknown elements into play. But even after dismissing whichever of these cases we can actually dismiss, we still don't have any obvious, practical upper bound on how many particles might be involved, and we really don't have any specific foreknowledge of most of those particles exactly because they're arbitrarily far away to start with.
Now, here's a deceptively-innocent-sounding question. What does a particle moving faster than light look like? Or more to the point, what does a whole universe's worth of particles moving faster than light and potentially impinging on the volume-of-interest look like? Recall, from our earlier thought-experiment, that even under relativity, interpreting "what it looks like" can make a profound difference in how the theory is understood. Could it be, that the universe of fast particles looks like... a quantum wave function? After all, we surely can't account individually for each and every particle in the universe, nor even for just all the fast ones; so any description we produce of all those fast particles will be something like a probability distribution.
In fact, it seems we're really going to want to separate our picture of this situation into two parts: for the slow things, we'll want to assume we know specifically what those things are, whereas with the universe of fast things we can't get away with that assumption so we'll have to handle it probabilistically. In the traditional nineteenth-century approach to this sort of situation, we would still assume that we know about a few particular particles (perhaps even just one), and then we would summarize all our assumptions about the rest of the universe by positing some sort of space-filling field(s) — but we're then, typically, constrained to assume that these space-filling fields are propagating at no more than the speed of light, which may not be a workable assumption at all if the rest of the universe includes lots of stuff faster than that. Limiting field propagation to the speed of light is likely to be especially problematic when the known particles we're considering are themselves moving faster than the field propagates. Limited field propagation speed is, on the face of it, naturally suited to slow, local situations. Keeping in mind those still-pending unknown elements, it seems plausible we would develop two separate sets of theoretical machinery to handle these two cases, fast and slow: a local deterministic theory for when we're focusing exclusively on the slow stuff, and a non-local probabilistic theory for when we care more about the influence of the fast rest of the universe. And that's just what we do have: relativity and quantum mechanics. It's unsurprising, in this view, that the two theories wouldn't mix.
Our scenario is still just a sketch, needing to be fleshed out mathematically with those unknown elements; and we hope to do it in a way that fits reality and provides some sort of clue to unifying relativity and quantum mechanics. If all this is a right track rather than, say, a flight of fancy, it ought in principle to be part of the rhyming scheme of physics and should therefore, once we're onto it, flow more-or-less smoothly from the natural structure of things. These sorts of speculations tend (such is the impression I've picked up over the decades) to concern themselves with explaining why quantum mechanics emerges, but my instinct here tells me to start by looking instead to explain why relativity emerges. Where does all that bending geometry to stay below the speed of light come from? And for that matter, how does that particular speed come to play a distinguished role in things?
Slow things
If, in some to-be-determined refinement of our sketch, quantum mechanics focuses mainly on allowing for the collective weight of all the fast things, but disregards some unknown element(s) to do with the slow stuff, and relativity attends to the local unknown element(s) but relentlessly disregards the collective weight of the fast stuff, then both theories are approximations. I've been speculating on this blog for some time that quantum mechanics may be an approximation, which in itself has felt a bit odd since physicists brag, justifiably, about how very accurate QED is. Relativity is also considered accurate. Is it really credible to speculate that both are approximations?
Maybe. As an aid to thought we should perhaps ask, at this point, what we mean by "approximation"; considering what we were able to do, earlier, by fiddling with our definition of "velocity", surely some careful thought on "approximation" wouldn't be out of order. For example, on the slow-and-local side of things, the very fact that we are considering a specifically chosen set of slow-and-local things is itself an approximation of reality, even if the results are then correct-to-many-decimal-places within the assumptions: as noted, if we perceive that something has interfered from outside, we say the experiment has been corrupted and throw out that case.
If traditional fields are a summary assumption about the rest of the universe (as I've suggested in several previous posts, though the suggestion is going to zag violently sideways later in this post), we might plausibly expect this interpretation-as-a-summary-assumption to apply as well to the gravitational field, and thus to the relativistic curvature of spacetime. We're exploring a supposition in this post, though, that the overall scenario can be understood as Euclidean, and the relativistic curvature comes from our insistence on local focus. My first intuition here is that the relativistic curling inward to stay below the speed of light comes from a sort of inward pressure generated by the fast-universe leaning against the barrier of our insistent local assumption. Metaphorical pressure, of course.
As additional evidence that the conventional approach to fields is broken, I note stochastic electrodynamics, which —as best I figure— says that a classical system can reproduce some (maybe all) quantum effects if it's given random initial conditions on the order of Planck's constant.
As an alternative to the "inward pressure" metaphor: perhaps, by admitting only the local part of what's going on, we lop off the fast part of things. This suggests that somehow the form of our refined mathematical description would lend itself to lopping-off of this sort.
The use of tensors in relativity seems to be a way of internalizing the curvature of spacetime into the dynamical equations; building the locality assumption into the mathematical language so that, if the fast/slow dichotomy is really there, it becomes literally impossible to speak of the fast-universe in that language. Suppose we want to express the dynamics of the system —whatever those are— in coordinates of our absolute Euclidean spacetime. What would that look like? Trained in the usual relativistic approach to these things, one might be tempted to simply take the absolute Euclidean space as a "stationary" reference frame and use Lorentz transformations to describe things that aren't stationary; but we know that's not altogether what we're looking for, because it excludes the fast-universe.
At this juncture in the exploration, it seems to me that I've spent the past three-plus decades underestimating the obscuring power of relativity, and afaics I've been following the herd in this regard. Quantum mechanics is so ostentatiously peculiar, we've been spending all our metaphysical angst on it and accepted relativity with, in the larger scheme of things, scarcely a qualm. My own blog posts on physics, certainly, have heavily focused on where the weirdness of quantum mechanics comes from, touching lightly or not at all on relativity. Yet, consider what I just wrote about relativity: "it becomes literally impossible to speak of the fast-universe in that language." In my post Thinking outside the quantum box, I noted particularly that, in terms of memetic evolution, a scientific paradigm can improve its survival fitness either by matching reality especially well, or by inducing scientists to commit to a conceptual framework in which they cannot ask any questions that would expose weaknesses of the theory. Quantum mechanics explicitly controls how questions can be asked, and in my discussion I, as usual, barely mentioned relativity. But relativity controls the mathematical framework softly, so that we don't even notice what is missing. Which is why, though it's a favorable development that this angle of attack offers a symmetric explanation for the relativity/quantum split, what I find most promising about it is its ability to view relativity as preventing a question from being asked.
This view of relativity appears to be a direct consequence of having omitted fields in the first place. Einstein's theory of relativity traces back to problems created by the interaction between the velocity of an observer and the velocity of propagation of the electromagnetic field (through the thought experiment of chasing a light beam). By setting aside fields, we've deferred facing the observer-velocity/propagation-velocity problem; I noticed this deferral repeatedly as I wended my way to this point, but had no reason to dwell on it yet. If we're on a right track, though, then in coming upon a natural structure we expect a resonance effect in which the pieces of the puzzle should all fall into place, including that one.
What sort of dynamics could account for that peculiar bending inward that causes the language of relativity to limit discussion to the slow-universe? Meditating on it, here's one thought. If the effect can be viewed as a lopping off of the fast part of reality, this suggests that at the upper end of the relativistic velocity range —where the weird stuff, the "bending", occurs— we're only seeing part of the picture: something is being lost, i.e., something is not being conserved. Which smacks of the sort of reinterpretation-by-playing-definition-games that we used, up toward the top of this post, to tamper with relativity. I'm also recalling, from freshman physics, that there was this folderol about potential energy versus kinetic energy that always sounded pretty dubious to me (and, as I recall, to some of my instructors too): it hung together, but still somehow felt like a bit of a shell game, with energy shifting into reality from... where?
Which brings us back to the fast half of the scenario.
Target
If we expect to get useful insight out of solving for the unknowns in this sketch, we need a solution that accounts for both halves: fast, as well as slow. Relativity is quintessentially local, of course, but, once we step beyond that framework, non-locality isn't technically difficult (whatever philosophical qualms one might have): mathematical models addressing quantum phenomena —of which, off hand, I can think of about four that have been mentioned on this blog at one time or another— have no apparent difficulty achieving non-local (even instantaneous) transmission of internal information needed to drive the model; the technical challenge is targeting the transmission to where the internal information is needed.
First there is, of course, the conventional approach to quantum mechanics. Non-local internal information propagation is handled so painlessly it's easy to overlook that it's happening, thanks to Fourier analysis — a ubiquitous tool of modern quantum mechanics, whose neat trick is to represent arbitrary behavior of a wave function as a sum of (in general) infinitely many sine waves. You can describe, in this manner, a wave that propagates across spacetime at finite velocity; but you can describe anything else this way, too. For, even if the described wave were local in its behavior, the components of the sum are sine waves, and any sine wave is inherently non-local. The internal information represented by a sine wave, whatever that information is, occurs throughout all of spacetime in undiminished form; periodically, yes, but there's always another peak, another trough. So non-locality isn't a technical difficulty. Targeting the information is another matter, though the conventional solution to this too is easily overlooked. The internal information needed for a given particle is delivered specifically to the one particle that needs to know by the expedient of giving each particle its own wave function. Although we make much of how there are supposedly only four fundamental forces, accounted for (well, three of the four) by the Standard Model, in the detailed math we custom-fashion a separate force for each particle. Or, occasionally, for each entangled set of particles, which is nearly the same thing since the math of large-scale entanglement tends to get hideously messy.
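To make the point vivid, here's a small numerical sketch; nothing in it is specific to quantum mechanics, it's just Fourier synthesis. A localized Gaussian pulse is assembled as a sum of cosine components, each of which extends undiminished across the entire axis; only the sum is local.

```python
import numpy as np

x = np.linspace(-50.0, 50.0, 2001)
pulse = np.exp(-x**2)                         # the local wave packet

k = np.linspace(0.0, 8.0, 400)                # wavenumbers of the components
dk = k[1] - k[0]
amp = np.exp(-k**2 / 4.0) / np.sqrt(np.pi)    # Fourier amplitudes of exp(-x^2)

weights = np.full_like(k, dk)                 # trapezoid-rule quadrature weights
weights[0] = weights[-1] = dk / 2.0

# Each row is one globally extended cosine component; summing rows
# (numerically integrating over k) reconstructs the localized pulse.
components = amp[:, None] * np.cos(k[:, None] * x[None, :])
reconstruction = (weights[:, None] * components).sum(axis=0)

print(np.max(np.abs(reconstruction - pulse)))  # tiny: the sum is local, the parts are not
```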
I tentatively don't count basic de Broglie–Bohm pilot wave theory as a separate model here, on grounds that —in minimal form— it seems to me indistinguishable for this purpose from the conventional approach. Basic pilot-wave theory handles internal information distribution in essentially the same way as the conventional approach and merely, afaict, interprets what is happening differently; hence, presumably, Einstein's remark about it, "This is not at all what I had in mind." (According to Bohm, Einstein had suggested that quantum mechanics is an incomplete picture of reality in the same sense that a mechanical clock is incomplete if some of the gears are missing; but Bohm's pilot-wave theory, one might say, relabels the existing gears without really adding any.)
My reservation on this part of the count is that latent in pilot-wave theory is a potential feature that may fairly qualify as new on structural grounds: a pilot wave might not produce the sort of probability distribution conventional quantum mechanics calls for. The term for this sort of thing is quantum non-equilibrium: the idea that we see the usual quantum weirdness, rather than some other kind of weirdness, merely because we're living in a world whose hidden variables have settled into an equilibrium state. If Louis de Broglie explored this angle, I haven't heard of it; but scuttlebutt says David Bohm did consider it. I'm inclined to believe this, since Bohm, working at least a couple of decades later, evidently explored the idea much more thoroughly than de Broglie had had a chance to before it was rejected at the 1927 Solvay Conference.
Stochastic electrodynamics (SED) is, to my understanding, quite a different kettle of fish from pilot-wave theory. Notwithstanding that, as I write this, Wikipedia's opening sentence on SED claims it's "an extension of the de Broglie–Bohm interpretation". Wikipedia does note there's a tremendous range of work under the SED umbrella; but, afaics, what makes it all properly SED is that its basic strategy is to consider a classical system augmented by random background electromagnetic radiation on the order of Planck's constant. This "zero-point field" is a sort of pilot-wave, but doesn't in itself address the information-targeting problem at all; there are no customized fields (wave functions), and nothing else overtly replacing them. There's some work (which I fairly readily turned up) claiming that pilot-waves should arise as emergent phenomena in SED, apparently related to a relativistic-wave-equation effect called Zitterbewegung (literally, jittery motion; first proposed, according to Wikipedia, by Erwin Schrödinger). One gathers this emergence claim is not uncontroversial.
Still, despite the controversy, there's a certain fascination to the prospect of a theory in which all quantum weirdness is emergent from a classical system, and one might wonder why researchers haven't been flocking to SED in much greater numbers. I suspect this is, at base, because SED is a blatantly incomplete theory: it makes no pretense at all of explaining where the zero-point field comes from (though one technical paper I found suggested many practitioners have a vague intuition that it comes from the electromagnetic forces of the rest of the universe). One is struck that, as it happens, accounting for the summed influences of the rest of the universe is just what the current blog post proposes to do.
Then there's the transactional interpretation. From my forays into the transactional interpretation, its wave functions are conventional but, at least in the form primarily advocated by its chief architect John G. Cramer, it introduces one distinctive structural feature: an additional time-like dimension along which a quantum system develops under direction of its wave function. Cramer calls the additional dimension pseudo-time. The system is thereby able to reach equilibrium through a process orthogonal to spacetime, so that non-equilibrium is purely internal and cannot, even in principle, be observed.
My own term-rewriting approach, as it's lately been developing, also uses an additional time-like dimension for development to (some sort of) equilibrium; I used to call this extra dimension meta-time, and having outgrown that name I've yet to settle on an alternative name (so, even if I've maybe heard Cramer isn't happy with the name pseudo-time that he's now stuck with, I've thus far had no great luck either). Spacetime in my approach is a network of discrete elements (a "term") rather than a continuum, which precludes conventional wave functions. The most distinctive structural feature is that targeted internal information exchange is achieved through explicit point-to-point (or at least, contemplating one-to-many linkage between a variable binding and its bound instances, party-line) network connections.
The current post, though, seems to call for some different approach from any of these. The point here is to explore possible advantages of analyzing the universe in terms of fast and slow particles. To target internal information to specifically where it needs to go, something must be added to the sketch: either some whole new primary structural element, such as customized fields or customized network connections, which however seems likely to import the flavor of some other model rather than bringing out the flavor of the current one; or some peripheral device to, say, guide particles with the precision needed for entanglement. And whatever new thing we introduce, we want it to mesh well with the machinery we use to derive relativity.
Third law
To every action there is an equal and opposite reaction.
— Common paraphrase of Newton's third law of motion.
In looking for a way through all this underbrush, there's an interesting distinction in these various arrangements, between structures that obey Newton's third law, and structures that don't. Wave functions in conventional quantum mechanics are spooky partly because they react but do not act; the wave function describes things that the particle might do, and is therefore affected by other things in the physical world, but most of the wave function —distributed throughout the entire universe, after all— does not affect any of the physical universe, except for the direct effect of the one particular event to which the wave function collapses, and the indirect effect of statistical distributions of quantized events. Yes, this sort of reaction-without-action is atypical of classical physics; but before one gets too carried away with how unclassical it is, note that constant fields that act but do not react are used —in practice— routinely in classical physics — and, for that matter, in quantum physics. When Einstein had to describe gravitational fields that simultaneously act on matter and react to it, he struggled to cope with the circularity.
Particles, on the other hand, are pretty much always supposed to react to everything under consideration, and act on each other, though typically they don't act on fields; granting, they may generate fields that other particles react to.
Of further interest, reaction-without-action seems closely associated with the task of targeting internal information; that is, in constructing a field to describe reaction, one specifies just what is going to do the reacting.
It seems that, as often used practically in elementary physics, particles are things but fields are not things; rather, fields represent aspects of interactions between things, either action or reaction but not so often both at once. We're almost —though not quite— safe to say that these are the essence of what a theory of physical reality represents: things, and the interactions between them; thus particles and, in some form or other, fields. Not quite safe for a couple of reasons: conceptually, quantum entanglement leaves room to wonder whether particles are independent things after all; and technically, orthodox fields in full generality aren't quite so straightforward (a point I'll expand upon further below, when it comes up in the natural flow of the discussion). But the sketch in our current thought experiment calls only for independent particles, so let us proceed from there.
Second law
The law that entropy always increases — the second law of thermodynamics — holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell's equations — then so much the worse for Maxwell's equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.
— Sir Arthur Stanley Eddington, The Nature of the Physical World (first edition was 1928), chapter 4.
Since our foremost interest here is in consequences of the sketch, we don't really want to introduce some vast new structure, unrelated to the sketch, in the unknowns added to complete the sketch. We'd also prefer to have an exact theory, even if it's intractable so that we're always going to have to introduce some sort of approximation when tackling any real-world problem (as long as we're getting some sort of useful insight from it).
Moreover, recall J.S. Bell's observation that the trouble with quantum mechanics is lack of clarity on where to draw the line between quantum and classical realms, i.e., when to collapse the wave function (discussed in a previous post). Now consider what this implies, for our current thought experiment, about interactions of sets of things. We can imagine trying to assemble our model from pairwise interactions between particles, though we don't at this point know how we would then account for entanglement. We can imagine treating the whole system as one big entangled unit, but really there doesn't appear to be any practical use at all in this "wave function of the universe" approach. And in between these two, we have a different form of the same problem that Bell was objecting to, lack of clarity on where to draw the line. So my own sense of the situation is that the only choice likely to provide an in-any-way-useful exact solution is pairwise interaction; at least, pairwise interaction "under the hood".
Where, then, does entanglement come in? Two possibilities come to mind. Either we derive entanglement as an emergent phenomenon from our pairwise interactions, or we derive entanglement as some sort of apparent distortion, akin to the apparent distortion we're already trying to conjure to account for curved relativistic spacetime. I don't have, going into this, any prior intuition for the emergent-phenomenon approach; besides which, there are minds working on some variant of it already (so I needn't count, amongst my possible motives for pursuing it, a lack of exploration of that region by others). The distortion approach seems worth some attention; it certainly ought to be good for mind-bending. It is, to put this delicately, not immediately obvious how the distortion would work; for relativity, we've some hope that lopping off the "fast" part of the phenomenon would somehow leave behind curved spacetime, but is it remotely imaginable that dropping some terms from one's equations would "leave behind" entanglement? Plus the conundrum of just what one expects to "lop off" when looking at the quantum-like influence of the fast-universe.
To bring out this point in more detail, the distinction between lopping-off in relativity versus quantum mechanics is that in relativity it seems we may be able to lop off some of the energy, presumably siphoning off some of the energy to the fast-universe (which we'd likely manage with some sleight-of-hand, per above); but in quantum mechanics, what is there to siphon off? Either things are correlated with each other, or they aren't. How can a correlation between things be the result of siphoning something off? This does bring up, rather in passing, a crucial point: the things in quantum mechanics are... slow. That is, quantum mechanics, like relativity, is a theory about the slow-universe, and if there is any siphoning-off to be done, it's going to be siphoning off into the fast-universe, just as it was with relativity. As for the main question, what can be siphoned off to leave behind correlation... how about siphoning off disorder? Or at least, redistributing it. Admittedly, I've long been quietly disturbed by the whole issue of whether total entropy in the cosmos is increasing or decreasing; increasing entropy is disturbing in one way, and decreasing entropy is disturbing in a whole different way.
This might be a good moment to consider the emergence element after all, inasmuch as that seems to have been dependent on some sort of equilibrium assumption, which may tie in with siphoning off, or otherwise shifting about, of disorder.
With or without emergence, though, entropy is slippery; as it's been taught to me, entropy isn't something you specify as a behavior of a system, but something you derive from that behavior. So any solution of this sort is going to be intuited from the principles and shaped to produce the right sort of entropic effect. An encouraging sign is that, again, this is changing the form of the question: rather than asking directly what would generate a quantum wave function, we're asking what sort of complementary entropic system would, when subtracted from reality, leave behind a quantum wave function. We're conjecturing that our theories, both relativity and quantum mechanics, are only about a part, perhaps a smallish part, of the universe, while a lot of other coexisting stuff isn't covered and follows different rules; the whole fast/slow thing is a guess that got us started, and for this post I'll stick with it and see where it goes, but the idea of a coexisting part of reality following different rules is more general and may outlast the fast/slow guess. In my most recent post in my term-rewriting physics series I suggested something similar involving "macrostructures", large-scale cosmological patterns (yonder). The vibe of this coexisting-part-of-reality hypothesis is also reminiscent of dark matter.
Contemplating that comparison with dark matter, it occurs to me that, indeed, dark matter is essentially stuff that has to fall outside the purview of our familiar physical laws; which in turn is essentially the function assigned for this post to the fast-universe, and is also, substantially, a way of saying that our existing physical theories are special cases of something more general.
The imprecision of entropy —that it doesn't pin down the behavior of a system but merely measures a feature of it— should be at least partly counterbalanced by the fact that quantum mechanics doesn't pin down the behavior of a system either. One of the oddest things about quantum mechanics is, already, that the Schrödinger equation doesn't specify any precise behavior at all, but merely takes some arbitrary classical behavior embodied by the Hamiltonian parameter Ĥ and produces a distorted probabilistic spectrum of possibilities. But for the current purpose, this ought to be an advantage, because probabilities are the stuff that entropy is made of.
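For reference, the equation in question; the point being that Ĥ arrives as an imported ingredient, while the equation itself merely evolves a wave of possibilities:

```latex
i\hbar \, \frac{\partial}{\partial t} \Psi \;=\; \hat{H} \, \Psi
```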
In general research for this post, semi-idly perusing explanations of quantum mechanics on the internet, I turned up incidentally a somewhat off-hand remark that the three great branches of nineteenth-century physics were Mechanics, Thermodynamics, and Electrodynamics. (One can, btw, save a great deal of time in this sort of perusal, by discarding any supposed-explanation as soon as it claims 'quantum mechanics says a particle can be in two places at once' or equivalent, or if it refers to many-worlds as a "hypothesis" or "theory" or talks about "proving" it. If the presenter doesn't understand the subject well enough to distinguish between an interpretation and a theory, they literally don't know what they're talking about. I do not use the term literally figuratively.) This trichotomy raises a curious point that, while it seems inapplicable to the current post, may be worth pursuing in some future exploration: we are thoroughly accustomed to treat thermodynamics as an emergent property of large systems, of the statistical behavior of collections of things. Why? Why should mechanics and electrodynamics be considered primitive while thermodynamics is not? I recall noting, as I skimmed a modern quaternionic treatment of Maxwell's equations, that it claimed to unify electromagnetism with heat, and thinking, what the heck? Heat is at a totally different logical level, how can it even make sense to unify it with one of the fundamental forces? The question again is, why have we gotten into this habit of treating heat as a high-level emergent phenomenon? I suspect this is a conceptual legacy from the nineteenth century, when it seemed possible to derive thermodynamics stochastically from fine-grained deterministic particles. If we don't have deterministic particles anyway, though, it's not apparent we have anything to lose by making thermodynamics lower-level than these other "fundamental" forces. (Even if one prefers a deterministic foundation, that problem might be just as tractable working downward from thermodynamics.) With a befuddling implication that the primitive particles we've been studying could be emergent phenomena.
Length contraction
Now who would think
and who forecast
that bodies shrink
when they go fast?
It makes old Isaac's theory
look weary.
— Relativity (to the tune of Personality), in The Physical Revue, Tom Lehrer, 1951.
It's all very well to suggest, in a hand-wavy sort of way, that as you try to accelerate toward the speed of light you don't actually get there because, approaching light-speed, more and more of that acceleration leaks out into the fast-universe; but sooner or later you're going to have to explain how that actually works. Just what sort of shifting to/from the fast-universe would actually serve to generate special relativity? In fishing for a thought-experiment to aid intuition on this, it's problematic to start with some primitive event that induces acceleration, because one gets distracted by trying to explain the nature of the primitive event. A particle decaying into two (or more) particles? An elastic collision between particles? I suggest a different thought-experiment, more abstract and therefore simpler (and, oddly enough, also reminiscent of classic Mythbusters, which derived an important element of fun from professionally conducted demolition): suppose we have a spherical object, traveling near to the speed of light, and we blow it up. (Maybe it's a firecracker, or a Death Star.)
Suppose this spherical object is moving (relative to us, presumed stationary observers) at 99% of the speed of light. At that relative velocity, there's going to be a pronounced special-relativistic effect. That is, when we say it's "spherical", we mean that someone at rest relative to the object would see it as spherical. Call its direction of travel the x-axis. Because (for us) it's moving at 0.99c, we see it as an ellipsoid, circular in its profile in the yz-plane, but greatly flattened (by a Lorentz factor of about 7) in its x dimension.
When it explodes, it throws off a shell of some sort —gas or bric-a-brac or whatnot— which moves outward from the sphere at constant velocity in all directions; this is, again, what an observer stationary relative to the object sees. Suppose, for convenient reference, that the sphere itself, or at least a cinder of somewhat-similar size, is left behind at the center, continuing along with its original course and speed. To see the relativistic effects without getting confused by velocities that coincidentally cancel because they're too similar to each other, suppose the shell is moving outward at 9.9% of the speed of light. What shape do we see, in our very different reference frame? Evidently the yz profile of the shell is still circular. But the fore and aft sides of the shell are asymmetric. In accordance with special relativity, velocities compose non-additively: the fore side of the shell travels at (0.99 + 0.099)c / (1 + 0.99 × 0.099) ≈ 0.992c, still less than the speed of light although faster than the remaining object; so this fore side of the shell appears flattened even harder up against the light-speed barrier. The aft side, though, travels at (0.99 − 0.099)c / (1 − 0.99 × 0.099) ≈ 0.988c (not the 0.891c of naive subtraction), which is still a decidedly "relativistic" velocity but exhibits less length-contraction than the remaining object. So the aft side of the shell is going to bulge outward more than the fore side.
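Checking those numbers with the standard special-relativistic composition of collinear velocities (a minimal sketch; the velocities are just the ones stipulated in the thought experiment):

```python
import math

def compose(v: float, u: float) -> float:
    """Special-relativistic composition of collinear velocities, in units of c."""
    return (v + u) / (1.0 + v * u)

def gamma(v: float) -> float:
    """Lorentz (length-contraction) factor."""
    return 1.0 / math.sqrt(1.0 - v * v)

v_object, u_shell = 0.99, 0.099
fore = compose(v_object, u_shell)     # ~0.9918c
aft = compose(v_object, -u_shell)     # ~0.9878c

for label, v in (("object", v_object), ("fore", fore), ("aft", aft)):
    print(f"{label:6s} v = {v:.4f}c  gamma = {gamma(v):.2f}")
# gamma: fore ~7.8 > object ~7.1 > aft ~6.4, hence the asymmetric flattening.
```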
What are we to make of this peculiarly asymmetric shell, flattened along the x-axis but much more so on its positive-x than its negative-x side? The shape evidently expresses, in some sense or other, the curvature of spacetime around the object. I'm reminded of the impression I've recently gathered, in my ongoing efforts to grok tensors, that the heart of the subject lies in the interplay of various sorts of derivatives — which may also be true of whatever sort of quaternionic treatment is dual to tensors, judging by my explorations on that front (yonder). That angle on things, though, seems contingent on some high-caliber insights I've not yet acquired on tensors and quaternions; so for the current post I'll let it lie.
If the bending-inward, as an object approaches the speed of light, is because some things get shunted out into the fast universe, for our purpose we need to say more than just that they're shunted out. We have to be concerned with just what happens to those shunted things, what form they take in the fast-universe — because otherwise what's the point of having this particular sort of new theory that says the stuff outside the old theories differs mainly by being faster-than-light. Will the new fast stuff somehow be targeted to return specifically to where it came from as the object decelerates away from light-speed, in which case it's not clear in what sense the targeted stuff is faster-than-light; or will it come back to haunt us through quantum-style effects, since we're supposing quantum mechanics is the theory that deals with the fast-universe affecting slow things?
Shockwaves
In discussing, above, why the fast-universe hypothesis would naturally lead to two separate theories, I noted the awkwardness of fast particles interacting with a field whose propagation speed is slower than the particles. As a discouragement to considering the scenario, I'd stand by this. Contemplating Einstein's reasoning, it seems that the notion of time was tied to the perception of light in a way that would foster philosophical absurdities if particles moved faster than the propagation speed of the electromagnetic medium. However, a century's worth of living with quantum mechanics ought to have made us a lot more tolerant of philosophical absurdities than we were when the framework of relativity was set up; and traveling faster than the medium of propagation is not strictly a technical obstacle. We have, in fact, a quite familiar real-world example of what can happen when an object emanating waves travels faster than the waves propagate: a sonic boom.
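For reference, the geometry of such a shock is elementary: a source moving at speed v through a medium whose waves propagate at speed w (with v > w) trails a cone of constructive interference, the Mach cone, whose half-angle θ satisfies

```latex
\sin\theta \;=\; \frac{w}{v} , \qquad v > w .
```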
Intriguingly, a sonic boom is about as close to discontinuous as you can expect to get in sound propagating through a fluid medium. I've blogged before on the discrete/continuous balance (yonder; which I also alluded to in the discussion of why-two-theories); various modern theories of physics tend to suffer from an overdose of continuity, and compensate by introducing additional discreteness by various means; notably, wave-function collapse in the Copenhagen interpretation, or standing waves in string theory (and elsewhere, such as the aforementioned SED). Loop quantum gravity adds discreteness directly by quantizing space (although, admittedly, my intermittent forays have not yet given me a proper feel for that technique). So... if particles of the fast-universe produce something akin to a sonic boom, might this be an alternative source of discreteness? One might reasonably expect it to manifest in the quantum theory which considers effects from the fast-universe, rather than relativity which we're supposing systematically omits those effects.
There is an important distinction to be tracked here, between waves that vibrate in the direction the wave moves and waves that vibrate perpendicular to that direction. Technically, the first are called longitudinal waves; the second, transverse waves. A sonic boom, as such, is a sound wave in a fluid medium; it's a wave of pressure, which is longitudinal: the medium moves in the direction of the wave's propagation to produce variations of pressure across surfaces orthogonal to the direction of propagation. Longitudinal waves occur in a scalar field; compression causes the scalar field value to increase (high pressure), decompression to decrease (low pressure). However, in the usual treatment under Maxwell's equations, the electric and magnetic fields are vector fields, with no scalar component, and electromagnetic waves are thus purely transverse.
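To make the distinction concrete, here's a minimal pair of plane waves, both traveling in the x-direction with amplitude A, wavenumber k, and angular frequency ω (notation chosen just for this illustration):

    longitudinal: u(x,t) = A x̂ cos(kx − ωt)    (displacement along the direction of travel)
    transverse:   u(x,t) = A ŷ cos(kx − ωt)    (displacement perpendicular to it)

A sound wave in a fluid is of the first kind; light, in the usual Maxwell treatment, is of the second.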
Here we get into a bit of history. Around the turn of the century (no, not that century, the other one), Oliver Heaviside considered the possibility of longitudinal electromagnetic waves (Electromagnetic Theory Volume II, 1899, Appendix D) and roundly rejected them. He first observed simply that the observed phenomena of light aren't consistent with longitudinal waves; they don't arise in the theory of elastic solids, nor in the reflection and refraction of light. Then he dove anyway into an extensive mathematical exploration of how to consistently extend the existing equations with compressional waves, only to reach, on the far side of it all, the same conclusion.
Heaviside, if I may say so, knew what he was about. I expect his conclusions were thoroughly solid within the scope of his chosen assumptions, his chosen conceptual framework. To put this in proper perspective, though, what were his chosen assumptions?
Oliver Heaviside and J. Willard Gibbs were the primary contenders on the vector side of the great vectors-quaternions debate of the 1890s. As I've discussed on this blog before (most recently last year), quaternions were discovered in 1843 by William Rowan Hamilton, with the specific purpose of filling a technical need to represent directed magnitudes in three-dimensional space as mathematical values. Today, if we want to study some more-or-less-hairy structure, we may, somewhat casually (by historical standards), set up an algebra for it, specifying how to describe the things and what operations can be performed on them, and off we go. Not so in 1843. There was no idea of mathematically studying structures that don't have all the well-behavedness properties of ordinary numbers (not unless you count things like alchemical tracts that associate certain integers with certain metals, certain metallurgical operations with certain arithmetical operations, and then propose to transmute lead to gold arithmetically, which sort of thing has been implicated in associating the name of eighth-century Persian alchemist Jabir ibn Hayyan with the etymology of our word gibberish). Through the work of Leonhard Euler et al., mathematicians had only just recently been forced to swallow complex numbers (which really are fabulously well-behaved by algebraic standards), and still had indigestion therefrom. Quaternions are, from a well-behavedness standpoint, the most conservative generalization that can be taken beyond the complex numbers, just barely less well-behaved by giving up a single axiom —commutativity of multiplication— while retaining, most especially, unique division. (Since quaternion multiplication isn't commutative, i.e. in general ab ≠ ba, quaternion left-division and right-division are separate operations. Each non-zero quaternion a has a unique multiplicative inverse a⁻¹ with a a⁻¹ = a⁻¹ a = 1, and one then has right-division a / b = a b⁻¹, left-division b \ a = b⁻¹ a.)
The trick to constructing quaternions is that, instead of introducing a single square root of minus-one, i, as for complex numbers, you introduce three separate square roots of minus-one, traditionally called i, j, k, corresponding to the three orthogonal axes of three-dimensional space. The three imaginary units anti-commute with each other, thus ij = −ji etc., and are cyclically related with ijk = −1. The general form of a quaternion is q = w + xi + yj + zk. Part of Hamilton's genius was realizing that you can't get all the well-behavedness he wanted in just three dimensions; you need the three mutually symmetric imaginaries for the three dimensions and a fourth real (i.e., non-imaginary) term w. (Btw, yes he did immediately think to interpret this fourth dimension, metaphysically, as time, so we could defensibly call it t rather than w; but I digress.) Carried away with enthusiasm over how beautifully the mathematics of quaternions worked out, he invented a great many new words to name different concepts in quaternion analysis; including scalar for the real part of q, vector for the non-real part, and tensor for the length of the whole thing (square root of w² + x² + y² + z²), which is where we get all those words, although the meaning of tensor has since changed beyond apparent recognition. He proceeded to devote the rest of his life to exploring quaternion analysis, and seems to have eventually worked himself to death on it. Meanwhile, though, mathematicians took his idea of generalized numbers and axiomatic foundations, and ran with it. By the last couple of decades of the century, mathematicians had pretty thoroughly outgrown that instinct to retain every last drop of well-behavedness, and Gibbs and Heaviside proposed a system of vector analysis that would treat the triples of spatial coordinates ⟨x, y, z⟩ as values for their own sake, dispensing with the scalar w and likewise dispensing with the whole idea that there were any imaginaries involved. And that, of course, is what set the stage for the great debate of the 1890s.
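Since these well-behavedness claims are easy to check mechanically, here's a minimal Python sketch (all names my own, purely for illustration) of the multiplication rules, Hamilton's "tensor" (the norm), and the two divisions:

    from math import sqrt

    def qmul(a, b):
        # Hamilton product of quaternions represented as (w, x, y, z).
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def qnorm(a):
        # Hamilton's "tensor": the length of the whole thing.
        return sqrt(sum(c*c for c in a))

    def qinv(a):
        # The unique multiplicative inverse: conjugate over squared norm.
        n2 = sum(c*c for c in a)
        return (a[0]/n2, -a[1]/n2, -a[2]/n2, -a[3]/n2)

    i, j, k = (0,1,0,0), (0,0,1,0), (0,0,0,1)
    assert qmul(i, j) == (0,0,0,1)              # ij = k
    assert qmul(j, i) == (0,0,0,-1)             # ji = -k: anti-commutation
    assert qmul(qmul(i, j), k) == (-1,0,0,0)    # ijk = -1

    a, b = (1.0, 2.0, 3.0, 4.0), (0.5, -1.0, 0.0, 2.0)
    right = qmul(a, qinv(b))    # a / b
    left  = qmul(qinv(b), a)    # b \ a
    assert right != left        # the two divisions genuinely differ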
Notice, though, that in choosing vector analysis over quaternion analysis, what you are giving up is, straightforwardly, the scalar term. And scalars are what you need for longitudinal waves. (Heaviside, in his digression on the mathematics of longitudinal waves, seems to have gone about it by deriving a scalar from the vectors, quite a different approach from carrying a scalar term alongside each vector, but within the bounds of his chosen conceptual framework.) When modern investigators pursue the idea of longitudinal electromagnetic waves —rather outside the mainstream, but I see there was a 2019 conference with some of this sort of thing in Prague, under the title Physics Beyond Relativity— these investigators are apt to do so using a quaternionic generalization of Maxwell's equations. Maxwell himself, btw, never wrote his equations in the form we use today; they were cast into that form by Gibbs and Heaviside. Maxwell was a quaternion enthusiast, and used two forms: Cartesian coordinates for manual calculation, and quaternions for conceptual understanding. Maxwell died of cancer in 1879, at the age of 48 — eight months after William Kingdon Clifford, another quaternion enthusiast, died of tuberculosis aged 33, and about two years before Gibbs's system of vector analysis was first printed.
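To see concretely where the discarded scalar hides: for two pure-vector quaternions u and v (zero scalar part), the Hamilton product works out to

    u v = −(u · v) + (u × v),

a scalar piece riding alongside a vector piece. Gibbs and Heaviside split this single product into two separate operations, the dot and cross products, keeping no slot for carrying a scalar term along with each vector.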
Fair warning: if you undertake to study the modern literature on longitudinal electromagnetic waves, brace yourself for some of that psychoceramic filtering I mentioned at the top of this post. Authors may vary, but those I've seen who seriously engage this topic tend to combine it with some variant of Nikola Tesla power-transmission, which, though perhaps only moderately iffy in itself, is immediately adjacent to the full-blown conspiracy theory of Nikola Tesla limitless-free-energy, according to which the Powers That Be suppressed his limitless-free-energy invention because it would have undermined their monopoly on centralized power distribution and thereby their control over the general population. (If you enjoy a good conspiracy theory, there are some lovely ones about Tesla at conspiracies.net; I'd warn you to turn off JavaScript and cookies before visiting that site because it's probably a trap to invade your privacy, but of course you already know the entire internet is such a trap... right? :p )
Geometry

When we say that the geometry in our thought-experiment is Euclidean, we're saying something about how the distribution of particles, by location and velocity (or "position and momentum", however exactly one happens to frame it), can be. In deriving quantum-equivalent discreteness from sonic booms of fast particles, this so-to-speak "Euclid-driven" distribution should be not merely coped with but positively desirable; otherwise we ought to question whether the particular premise of this thought experiment is really worth pursuing further. We've gotten a great deal of general insight in this post from pulling apart the different parts of our models (notably, particles and fields) and understanding how those parts can be used in various theories; but for this particular theory, that Euclid-driven distribution is its essence, and either we want that essence, or perhaps it's time to try shopping somewhere else.
Is there any sign of such a distribution in quantum mechanics? Overtly, there doesn't seem to be; the only opportunities for "tuning" in quantum mechanics seem to be far more global, like Planck's constant, the fine-structure constant, the gravitational constant, the somewhat-notorious cosmological constant (the sorts of things that come up in the so-called Dirac large numbers hypothesis — though it's a somewhat sobering indication of how hazardous this sort of investigation can be, that Dirac, with currently quite a good reputation in the physics community, gets the respectable front of this approach named after him, while in Wikipedia's article on Eddington, his particular pursuit of the approach is described as "almost numerological"; and before you say that's because of details of how they pursued it: yes, that's kind of the point). Similarly, in stochastic electrodynamics the only obvious parameter in the influence of the rest of the universe is the size of the zero-point field, i.e., Planck's constant again. If we're going to get anywhere with the specifically Euclid-driven approach, we need a place in the theory for more shaped input; either more customized to the particular systems we study (made up of slow particles), or at least more customized to the particular configurations of the fast-universe that affect our particular systems-of-interest.
My approach to term-rewriting physics is another example of attempting to assign a nontrivial structure to the non-local connections of the model; with, in that case, no initial expectation of anything remotely geometric about the "network" other than, perhaps, a vague suspicion there might be something rotational involved. Compare-and-contrast the Kaluza–Klein-style approach of string theory, in which a rotational element of this sort comes in through an overtly geometrical structure. Well into my exploration of the term-rewriting approach, I speculated on the possibility of emergent macrostructures that would, presumably, have some unimagined sort of dual "co-geometry", whereas string theory introduces new fine-grained local geometry, and the current post tries to impose a single geometry across the entire range of speeds (slow and fast). Details are especially lacking for the term-rewriting physics treatment, which starts with little notion of what sort of structure the network would have, other than broad inspiration from variable-binding patterns in vau-calculi; but the only obvious function of that unknown structure in the model is to pipe in seeming-nondeterminism, with no more specific purpose overtly indicated. It seems, though, that in both of my exploratory approaches I'm supposing there is some definite structure to the way the rest of the universe impinges on our system-of-interest to produce quantum phenomena. My intuition evidently favors this as a place for some of those missing gears that Einstein was talking about.
Suppose, then, we're trying to fashion a bit of machinery to fit into this gap in the theory where the overall structure of the network of non-local connections should go. What is this missing gizmo supposed to do? We've said, piping in nondeterminism, but there's little guidance in that. From the example of the Schrödinger equation, it seems that, if quantum mechanics represents the rest of the universe impinging non-locally on whatever system we're interested in, then the rest of the universe is acting as a sort of distorted mirror for whatever we're doing.
The very blankness of quantum mechanics on this point, the lack of any apparent overall structure to the non-local network, may itself be the shortcoming that the quantum theory systematically prevents us from asking about. That would be an even more subtle sort of distraction than relativity offers: at least the speed of light has an obvious part to play in relativity, whereas this is simply not there, something one doesn't think of because there's no reason one would think of it.
So perhaps in my next explorations into alternative physics, rather than using quantum mechanics and relativity as starting points from which to look for different ways to address the same things they address, I ought instead to be looking for macrostructure that's altogether outside their scope. Alternatively (sort of), it would be rather elegant if the shape of the interaction between macro- and microstructures were somehow responsible for the shape of the Schrödinger equation, the distorting lens that transforms classical systems-of-interest into their quantum forms; not that I've any idea atm how that would work.
Fields

In background research whilst drafting this post, I stumbled on a whole alternative-science pocket community I hadn't encountered before, represented at the Prague conference linked earlier, Physics Beyond Relativity. The conference as a whole appears to have been a fairly eclectic collection of more-or-less-fringe science sharing a common negative theme of doubting some aspect or other of relativity; but, browsing the material, I gradually discerned a significant subset with a more positive unifying theme (perhaps following the conference organizers, who may have deliberately grouped similarly-themed talks near each other on the schedule).
Early on in perusing the conference site, I was rather bemused by the assertion that Lorentz contraction —which, the site pointedly remarks, has never been experimentally confirmed— was invented, following the Michelson–Morley experiment, to save the ether. There are several interesting points in that. The bit about experimental confirmation highlights the thorny general problem of what experiments do and don't confirm. The allusion to Michelson–Morley reminds me of a rant (seen many years ago, and which I cannot, alas, pin down atm) by a physicist wondering when and how Michelson–Morley had acquired a sort of mythological status, whereas their understanding of the history was that Michelson–Morley was just one amongst many elements that contributed to the consensus paradigm-shift in physics. But what really caused me to do a double-take was the bit about saving the ether. Say what? I was taught that the ether theory was discredited by Michelson–Morley, and this is pretty much what Wikipedia says about it (unsurprisingly, per Wikipedia's mainstream bias). The point about the ether turns out, I've concluded, to be rather central to that subset-unifying theme; but it took me a few steps to get there. (To be very clear, there were also plenty of talks at the conference indifferent re ether, and at least one openly advocating it.)
The point about experimental evidence for Lorentz contraction: It's actually quite hard to experimentally demonstrate Lorentz contraction, as on the face of it you'd have to accelerate some macroscopic object to near the speed of light and observe what it looks like — from the side, so e.g. accelerating a spaceship to near-light-speed going directly away from us wouldn't suffice. The only things we've accelerated that much are particles in particle accelerators, which might as well be point-particles. It was pointed out, though, that the particles themselves come in bunches, and if you treat one of those as a macroscopic object, you can see whether it contracts. Well, it doesn't. Experimentally observed. The official answer is, of course, that the individual particles are expected to contract, but the interval between them doesn't contract. Which, to me as a third party watching all this, rather demonstrates the principle that what an experiment demonstrates depends on your interpretation; the mainstream interpretation here doesn't seem glaringly unreasonable, but there may be an element of post-facto justification in it; mainstream science is, somewhat by definition, heavily invested in concluding that what we're looking at is consistent with the prevailing paradigm, while these other scientists look at the same thing and see it as contradicting the paradigmatic prediction, which... isn't exactly implausible, either.
Another sub-theme at the conference is Weber electrodynamics, another bit of alternative science I'd somehow either never crossed paths with, or at least not so that it stuck with me; an alternative to the Maxwell electrodynamics thoroughly embraced by mainstream physics (occasioning another double-take on my part). Weber, like Maxwell though several years earlier, had gone about unifying a bunch of pre-existing equations to produce an overall description of the behavior of electrically charged particles; but whereas Maxwell synthesized a wave theory, Weber's starting point was Coulomb's law and, accordingly, his single equation describes a point-to-point force between two charged particles — with no field involved. Weber's generalization of Coulomb's law depends both on the distance between the particles, and on its first and second derivatives, the derivatives appearing in ordinarily negligible terms because they're divided by c² — the square of the speed of light. In the structural terms discussed above, though, Weber's equation is structurally interesting in that it's targeted: it specifies exactly which particles are affected by the force, rather than describing a field which would then be expected to affect whatever particles happen to encounter it.
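For reference, the form of Weber's force law usually quoted today (conventions vary slightly between presentations), for charges q₁ and q₂ at separation r, is

    F = (q₁ q₂ / (4πε₀ r²)) (1 − ṙ²/(2c²) + r r̈/c²) r̂,

where ṙ and r̈ are the first and second time-derivatives of the separation; strike the derivative terms and what remains is just Coulomb's law.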
That dependence on the first and second derivatives, I admit to finding rather fascinating. For one thing, it puts me in mind, just a bit, of the acceleration-dependent MOND alternative to Newton's law of gravitation, which has been knocking around for several decades now as an alternative to the dark-matter hypothesis. (Admittedly, when it comes to it, I have trouble quite wrapping my head around the role of acceleration in Maxwell, which dives down some sort of rabbit hole to do with the metaphysics of the magnetic field; there's likely a whole blog post in that alone.)
Weber's equation occasions another example of alternative interpretations of the same observation. A peculiar property of the equation, criticized by Hermann von Helmholtz, is that under certain circumstances it leads to effectively negative inertial mass. This involves very small distances — essentially, so it's suggested, the size of the nucleus of an atom. Negative inertial mass means attractive and repulsive forces swap places, with the implication that this could account for positively charged protons within an atomic nucleus not instantly blowing apart the nucleus. That's worth another double-take. Mainstream physics basically introduces a whole additional fundamental force (the strong nuclear force) to provide an attractive force to balance that terrific repulsive force between protons. The big-picture sense here is that our whole description of nature has been vastly complicated by using global ether-like fields rather than point-to-point forces.
It is, thereby, a fascinating exercise in the subjectivity of observation, to read in Wikipedia's article on Weber electrodynamics (linked above) the statement that "Despite various efforts, a velocity-dependent and/or acceleration-dependent correction to Coulomb's law has never been observed". Which is true only if you do not count, as observational evidence of such a correction, the observed fact that atomic nuclei exist. It's a bit fascinating to reflect that the scientists on both sides (or however many sides this has) have such heavy psychological investments in their respective interpretations that most, if not all, of them are never going to budge without at the very least some truly extraordinary new development (and perhaps not even then).
A particular point raised by advocates of Weber, with interesting structural connections, is that Maxwell electrodynamics doesn't obey Newton's third law unless the electromagnetic field is treated as an object in its own right; that is, the field can push a particle, but the only thing the particle pushes back on is the field itself. Another way of saying this is that various quantities are not conserved unless the field is included. Presumably, this would be true of any orthodox field carrying a force — including the gravitational field, with the expectation that relativity, predicting gravitational waves, would have this property as well. In fact, it seems that any non-quantum field would work this way, in general (although, as described earlier, classical fields are often treated as constants and thereby act but do not react). Note, though, that quantum waves are not subject to this objection, because of wave/particle duality: the wave does not distribute conserved quantities independently of particles, because when it actually interacts with something else, the wave itself is a particle. So all of this is tied in with quantization. With the heavy irony that this objection to relativity-style fields is contingent on these fields not being quantized. (Conversely, an objection to Weber electrodynamics, prominent in the Wikipedia article, is the existence of the phenomenon of radiation pressure — but this too appears to be an artifact of interpretation, in that the quantum strategy of substituting particles for waves should allow radiation pressure to be described in an ether-less manner as well.)
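(For the Maxwell case this bookkeeping is quite concrete: to balance the books one assigns the field its own momentum, with density

    g = ε₀ (E × B),

equivalently the Poynting energy flux divided by c²; Newton's third law is then rescued exactly by counting that field momentum as real.)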
Compared to this radical rejection of fields in favor of pairwise interactions between particles, it makes sense that orthodox fields, even post-Michelson–Morley, would be perceived as ether theory. This, at any rate, I take to be the common tenet of that little alternative-science community, such as it is: that the basis of physics should be ether-free. My impression, from a sampling of talks at the conference, is that this subgroup have collectively a bunch of fragments of intuition that there's something there, and just need for an insight to come along to bring it all together (which is what we're all looking for, of course, whichever area of theory-space we're looking in).
All of which highlights a point I failed to bring out, earlier, in discussing sonic booms: shockwaves occur in an energy-carrying medium. That is, any object moving through such a medium —in an orthodox (i.e., non-quantized) handling of such a situation— has to invest energy, and, therefore, it has to slow down. The loss of energy to field-drag would have to be allowed for in such a theory. Evidently, it would alter the shape of the probability distribution of fast-universe velocities, shifting them downward and eventually tending to push things from the fast-universe to the slow-universe. Oddly, it would also seem to cause even slow things to slow further, reminiscent of the Aristotelian view that things in motion tend to come to rest; one is also reminded of Heaviside's criticism of electromagnetic compression waves, per above, that they apparently do not occur in elastic bodies (presumably because if they did, we would expect each bounce of a rubber ball to produce an accompanying burst of electromagnetic radiation). One would then either have to provide some means to also shift velocities upward, or accept a model in which the universe necessarily runs down. The idea of siphoning disorder into the fast-universe, suggested earlier to support quantum entanglement, might possibly play into this. On the face of it, certainly, shockwaves aren't compatible with an ether-free theory.
A final thought on this. In my first physics post on this blog (back in 2012), I said that when a long succession of theories just keep getting more complicated, they may all share in common some wrong assumption that is diverting them into this downward spiral, and I particularly recommended basic physics as an area where this appeared to have been happening for the past century or so. In those terms, the common premise of that pocket community is that the ether metaphor is the wrong assumption that's got mainstream physics in a cul-de-sac.
Where to go from here

The fast/slow particle hypothesis itself hasn't been hugely successful, though it has some interesting features. It allows relativity to be viewed as preventing something from being said, and on the quantum-theory side it stirs speculation on how to balance the correlations of entanglement by siphoning off disorder into the fast-universe.
Where the thought experiment has paid off spectacularly is on the "stereoscopic view of the terrain" side of things, where, as noted at the top of the post, we've turned up a welter of ideas to play with. Most of these have somehow to do with the role of fields in basic physical theories, which figures since the immediate thought experiment was mostly about omitting the parts of conventional theory that involve fields. Important distinctions were made between fields describing action versus fields describing reaction, and between fields describing universal forces such as electromagnetism (or presumably gravity, though the form of that in relativity is trickier) versus fields describing targeted information such as the wave function of a particular particle or entangled subsystem. The large-scale structure of non-local interactions was flagged for further study, which doesn't obviously relate to the fields aspect of things (though, once one has said that, one naturally starts wondering about it). Weber electrodynamics pops up as a field-free approach that pulls in velocity and acceleration, inviting possible structural comparison/contrast with Maxwell electrodynamics; moreover, the velocity/acceleration aspect is vaguely reminiscent of the acceleration-dependent force law of MOND, while the point-to-point feature brings to mind the targeted information of quantum wave functions. Both connections are tempting, hooking up Weber, or really any field-free approach, with either relativity or quantum mechanics.
Riffing for a moment: MOND and dark matter were both devised to explain why stars toward the outer edge of galaxies don't move as they were expected to under Newtonian gravitation. Both are kludges. The dark-matter hypothesis says, these stars don't move the way they ought to if they were being pulled only by the mass of the things we can see, so let's pretend there's lots of massive stuff we can't see that's distributed so as to produce the observed motion. It's necessary to hypothesize a huge amount of invisible mass. The MOND alternative, proposed in 1983, is to tweak Newton's second law (F = ma) as it applies to gravitation, so that it remains Newtonian for large accelerations but exceeds the Newtonian values for accelerations much below some small critical value a₀. Tampering with Newtonian dynamics doesn't bother me so much as having no intuition for why. If (for an example that readily comes to mind) the missing element were quantization of gravity, I'd guess the lower end of the curve might be attenuated rather than increased, akin to the way quantization attenuated predictions of black-body radiation (thereby forestalling the so-called "ultraviolet catastrophe").
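Concretely, in the usual interpolation-function formulation (the particular μ below is one conventional choice among several), MOND replaces F = ma for gravitational accelerations by

    F = m μ(a/a₀) a,    μ(x) = x/√(1 + x²),

so that μ ≈ 1 when a ≫ a₀, recovering Newton, while for a ≪ a₀ one gets a ≈ √(g_N a₀) (writing g_N for the Newtonian value), which is what flattens galactic rotation curves.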
Speaking of which, how do we introduce quantum-ness into an ether-less theory à la Weber? Looking at a small system-of-interest, consisting of a few particles, we can certainly consider the force between each pair of particles of interest (though the number of pairs goes up as the square of the number of particles); but it seems a rather daunting prospect to consider, for each particle of interest, pairwise forces in relation to a practically infinite number of particles-not-individually-of-interest in the rest of the cosmos. Refining the question, then: how does the ether hypothesis impinge on quantum theory? Recall the Schrödinger equation,
    iℏ ∂|Ψ⟩/∂t = Ĥ|Ψ⟩.
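Back on the pairwise bookkeeping for a moment: below is a minimal sketch of what ether-less, field-free force accumulation looks like computationally. The names and the toy inverse-square force law are mine, purely illustrative (a genuine Weber treatment would add the ṙ and r̈ terms); the double loop is the point, since the cost grows as the square of the particle count, which is what makes "the rest of the cosmos" so daunting.

    import numpy as np

    def pairwise_forces(pos, charge, k=1.0):
        # Toy point-to-point (field-free) force accumulation.
        #   pos:    (N, 3) array of particle positions
        #   charge: (N,) array of charges
        # Returns the (N, 3) net force on each particle. Cost is O(N^2):
        # every particle interacts directly with every other particle.
        n = len(pos)
        force = np.zeros_like(pos)
        for a in range(n):
            for b in range(a + 1, n):
                r = pos[b] - pos[a]                          # separation vector
                dist = np.linalg.norm(r)
                f = k * charge[a] * charge[b] * r / dist**3  # inverse-square, along r
                force[a] -= f                                # like charges repel
                force[b] += f
        return force

    rng = np.random.default_rng(0)
    F = pairwise_forces(rng.normal(size=(5, 3)), np.ones(5))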
If I were desperately trying to narrow things down to a single approach —which I realize is a common sort of desperation amongst scientists (normal or otherwise) when caught in a paradigm-crisis situation— all this welter of possible directions could be quite frustrating. Which, just at this moment, is another good reason to favor a more liberally wide-ranging attitude.
I am fond of the idea of pairwise interactions between particles instead of fields because it is nicely symmetrical in time. However I know nothing about quantum mechanics and so I would need to study in order to be anything more than fond.
Also I wanted to understand how magnetism could be explained as moving charges plus relativity. It turns out in every popular science mention of relativity I'd encountered, I'd completely skipped over "now" being a gradient.
I too am rather fond of the pairwise thing. The first speaker about Weber at the Prague conference, iirc, was quite taken with it too. As you mention time, though, that seems to me to be the biggest hurdle the pairwise approach needs to clear. Forces-at-a-distance between particles aren't instantaneous; and even if they were, relativity would introduce disagreements between observers about simultaneity. So how to handle propagation time of a Weber-style force, and how to integrate it with relativity, are imho quite fascinating topics for speculation.
This is really fascinating because I've had an interest in off-mainstream physics for a long time - in fact since encountering Tom Bearden's writings in the 1980s, where he began ranting extensively about quaternions and the idea that the scalar field was dropped by Heaviside et al. I've been trying to understand that claim ever since.
On quaternions particularly, I do feel there's something very curious missing there. I'm particularly fascinated by the origins of the vector 'del' operator (grad, curl, div) as Hamilton's 'nabla' - which I think Hamilton himself only defined on the vector part of a quaternion field - and wondering if there could ever be a fully integrated 'quaternion nabla', operating on a coupled vector and scalar field. One might think that the canonical example of such a field might be the electromagnetic scalar+vector potentials (A+B). But I struggle to understand what it would mean, physically, to add a curl to a gradient (or both to a delta-V). There aren't many writers on this subject at all. A modern writer who you might want to check out if you haven't is Doug Sweetser.
Another, more mainstream, scientific writer who also touches on the quaternion mystery is Terence Barrett, and his 'Topological Foundations of Electromagnetism' (2008), who repeats again this legend of the quaternion field being cut down to vectors, perhaps missing the very important contribution of the scalar field and what it could mean, and advocates for a 'SU(2) Electromagnetism', by which he means an electromagnetism with full quaternion symmetry, exhibiting beyond-Maxwell yet non-quantum physical effects. (And names several known effects that he thinks fall into that category already).
https://books.google.co.nz/books?id=e0-QdLqT-pIC
This is Doug Sweetser: https://dougsweetser.github.io/Q/
To be clear, both Sweetser and myself have been thinking of 'quaternion nabla' as (delta + nabla). Which comes out (for a quaternion field defined as scalar field s + vector field v) as:
delta s + delta v
+ nabla s + nabla v
==
delta s + delta v
+ grad s + curl v - div v
==
scalar = delta s - div v
vector = delta v + grad s + curl v
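Mechanically, that expansion does check out; here's a tiny sympy verification (all names mine):

    import sympy as sp

    x, y, z, t = sp.symbols('x y z t')
    s = sp.Function('s')(x, y, z, t)
    vx, vy, vz = (sp.Function(n)(x, y, z, t) for n in ('vx', 'vy', 'vz'))
    d = sp.diff

    # Hamilton product of the operator quaternion (d/dt, d/dx, d/dy, d/dz)
    # with the quaternion field (s, vx, vy, vz):
    scalar = d(s, t) - d(vx, x) - d(vy, y) - d(vz, z)
    vec_i  = d(vx, t) + d(s, x) + d(vz, y) - d(vy, z)
    vec_j  = d(vy, t) - d(vz, x) + d(s, y) + d(vx, z)
    vec_k  = d(vz, t) + d(vy, x) - d(vx, y) + d(s, z)

    div_v  = d(vx, x) + d(vy, y) + d(vz, z)
    grad_s = (d(s, x), d(s, y), d(s, z))
    curl_v = (d(vz, y) - d(vy, z), d(vx, z) - d(vz, x), d(vy, x) - d(vx, y))

    assert sp.simplify(scalar - (d(s, t) - div_v)) == 0                  # delta s - div v
    assert sp.simplify(vec_i - (d(vx, t) + grad_s[0] + curl_v[0])) == 0  # delta v + grad s + curl v
    assert sp.simplify(vec_j - (d(vy, t) + grad_s[1] + curl_v[1])) == 0
    assert sp.simplify(vec_k - (d(vz, t) + grad_s[2] + curl_v[2])) == 0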
From the standpoint of modern physics, and modern vector/geometric algebra, this equation feels extremely wrong, because its vector part is a sum of both vector quantities (gradient and delta-v) and pseudovectors (curl) which are considered two entirely separate mathematical objects.
But this seems to be what the underlying quaternion maths just does.... and quaternions are where we got our idea of 'vector cross multiplication' as well as 'div, grad and curl' from. And quaternions are just pure mathematical objects, and they're also a very nice, simple, closed algebra, and closure feels like it ought to be an important thing. Most physicists assume that it's just an accident that you can cross-multiply two vectors and get a pseudovector; it's a kind of trick or pun, that only works in three dimensions, not really a real thing, and anyway a vector (v) just isn't a pseudovector (curl v). One's a linear translation and the other's an axis of rotation; physically, it does seem obvious that they're very different. So what's going on here? How could a very simple, fundamental equation be 'doing maths wrong' by mixing two different kinds of quantities?
I assume it's because quaternions are about rotation, not translation, in which case the question still is: why are chunks of quaternion nabla so physically useful, but the whole quaternion nabla isn't?
Another possibility that crosses my mind, though, is:
Could there be a 'quaternion nabla' which is NOT (delta + nabla)? Is part of the problem that we've been conceptualising the operator wrong?
For example, if we extended curl into four dimensions (ignoring relativity for the moment, and assuming that the fourth dimension is time) then curl might be defined over a difference of four vectors, not three:
* the change of vector from 'left'
* the change of vector from 'north'
* the change of vector from 'above'
* the change of vector from 'before'
and then we might see that a rapidly changing vector field might gain an extra bonus to its curl. Could this extra component tell us something helpful about highly dynamic vector fields, maybe rapidly expanding or contracting magnetic fields, that we normally ignore?
(This would actually be curl AND divergence, because they're linked together, so it would be an operator that took four 3D linear-translation vectors and gave us one 3D axis-of-rotation vector plus one scalar quantity representing divergence. Or 'convergence', negative divergence, as Hamilton's maths actually defines it; Heaviside inverted the sign.)
I'm in very deep waters here, though, and I don't have the maths to understand what I'm doing. Physically it feels intuitive, but intuition doesn't count for much in physics.
I suppose the question I have, given (delta + nabla), e.g. as Sweetser uses here: https://dougsweetser.github.io/Q/Math/multiplying/
is, can this be given *any* classical physical interpretation?
If we had say a field of fluid flow (ignoring for the moment any associated scalar field - which *might* perhaps represent the density, or something like density), then nabla gives us the curl of it. If that flow then were to change over time, eg, to accelerate, then we might be bringing in the quaternionic elements of nabla - such as delta-V. The naive thing to do, as represented by Sweetser's equation here, would just be to add the delta-V to the curl. Which would represent the axis of rotation being skewed - rotating forward - in the direction of the local acceleration of the fluid flow. Is that a physically sensible interpretation, I wonder?
If 'curl skewed in the direction of acceleration' was a sensible thing to imagine, then we could also perhaps imagine the scalar field as a density field, and therefore a variation of density (ie the gradient) as being something a little like an acceleration of the flow in that direction.
And then we could perhaps put our components of the full quaternionic nabla together as:
* delta s = local density increase over time
* -div v = local density decrease caused by flow away from this point
* delta v = skew of curl of flow caused by local acceleration of flow
* grad s = skew of curl of flow caused by density increase in a direction causing acceleration of flow
* curl v = the unskewed curl of the flow assuming no change of parameters in time
I don't know if this is correct, but it's a naive interpretation of what just banging the parameters of (delta + nabla) together with a somewhat physically intuitive interpretation of an (s + v) flow field might mean. (phi + A) probably works nothing like that actually.
My gut feeling though is that while adding (gradient + delta-V) might be okay given a suitable definition of 'density', just adding an acceleration vector to a curl vector is still wrong.
I think we would want to *vector divide* the curl by the time-or-density-gradient-induced acceleration somehow, to end up with a curl-plus-negative-divergence.
Eg, if we've got a flow field like a river - let's say an undersea river, so it's embedded in a 3D flow field - that has a curl pointing 'upwards'... and then we see that whole flow field also accelerating in the upward direction... we wouldn't expect the curl to increase or decrease, as it would if we just added the acceleration vector to the curl vector. We'd expect to see the curl *rotating* along the axis needed to rotate the sideways flow of the river into the upwards flow it's suddenly acquired. At least I think it would.
And it feels like that rotation would be a multiplication, and I don't know if it would be mathematically consistent with the good old quaternion multiplication equation we know and love.
But almost nobody in the world (except Sweetser) seems to be thinking about quaternion nabla, and those who are, aren't thinking in terms of extremely naive and dumb physical analogies like fluid flow-and-density-fields over time (and with good reason, because quaternions are really the algebra of rotations of 4D spheres), so this might just be a big weird dead end.
Also it's A + Phi, I think, not A + B. And there's a whole big weird-physics whisper-network rabbit hole there - which Barrett touches on - about gauge freedom and the Lorenz Gauge being part of the mechanism by which mainstream science accidentally lopped off part of the full quaternion field.
I find multiple perspectives on these things can be highly valuable, as different minds may come up with very different sorts of patterns. So saying, have you seen my earlier post on Nabla?
I'm in partial sympathy, and at the same time partial disagreement, with your suggestion that "intuition doesn't count for much in physics". Certainly one should tread cautiously with intuitions from ordinary experience of the physical world (though one ought, in any case, to be rather selective in when to tread incautiously); but modern physics has at least one foot in the mathematical world, and intuition counts for quite a lot in mathematics.
Oh! I did miss your 2019 'full Nabla' post, and that's given me a lot to think about! Especially on multiplying vs dividing. You seem to be sharing many of the same questions I have. I'm going to have to go back and reread all the posts I missed.
Oh, by the way, on signatures:
"the norm of a quaternion is the square root of the sum of the squares of its components, √(t2+x2+y2+z2), whereas in Minkowski spacetime the three spatial elements should be negative, √(t2−x2−y2−z2). "
Yes, just (square rooting the) sum of the squares of the *components* of the quaternion seems to be the norm as we currently define it.... but why exactly *should* that be the way we define a quaternion norm? Aren't i,j,k imaginaries? Isn't the square of an imaginary a negative number? If instead of √(t²+x²+y²+z²), we had √(t²+(ix)²+(jy)²+(kz)²).... wouldn't that give us in fact the correct Minkowski signature? And mightn't it maybe even be more mathematically 'pure' to define a norm in such a way, as the sum of the squares of the four *actual* components of the quaternion, imaginary multipliers and all?
Also this
Delete"Hamilton seems to have first dabbled with it several years before he discovered quaternions, as a sort of "square root" of the Laplacian, at which time naturally he only gave it three components; and when he adapted it to a quaternionic form it still had only three components."
Nabla *before* quaternions! Back when Hamilton was still stuck on only using three dimensions! That would maybe explain a lot!
Iirc, when Hamilton wrote up his list of properties that, for practical reasons, he wanted "triples" to preserve, the property we usually colloquially describe as "unique division" was actually the Law of the Norms, that the sum-of-squares-of-components is a modulus of multiplication.
Re the history, nabla pre-dating quaternions: yes, I was quite pleased to unearth that; I'd had no idea and it explains a great deal. We're looking /back/ on all this, and the perspective is very different from the one Hamilton would have experienced moving forward into it.
Oh, one other random idea I've had, and I'm not sure how useful it might be:
Quaternions can be (scalar + vector) OR (tensor * versor).
(And we DEFINITELY need a replacement word for Hamilton's 'tensor' now that it belongs to tensor calculus... just like Prolog programmers can't use 'functor' anymore because of category theory.)
Could there be a quaternion differentiation operator that ran on tensors * versors rather than scalars + vectors?
The reason I like the versor form is that it makes clear that we're dealing with pure rotations separated from pure stretching operations. (With the real part of a 4D pure rotation being something like a phase shift or Lorentz boost, adapted perhaps to the speed of sound in a flowing medium?)
This reformulation of nabla might not help, but it might give another angle for visualisation.
Wikinews mourns loss of volunteer John Shutt : https://en.wikinews.org/wiki/Wikinews_mourns_loss_of_volunteer_John_Shutt
ReplyDeleteOnce you’ve been in PPC for a while, you will find out that campaign structure is vital to your chances of success. Your search network campaigns are intrinsically connected to your marketing strategy and business goals. Therefore, if your account lacks structure, it’s unlikely that things will go your way.
ReplyDeletehttps://ppcexpo.com/blog/google-ads-campaign-structure
This is a very nice one and gives in-depth information. I am really happy with the quality and presentation of the article. I’d really like to appreciate the efforts you get with writing this post. Thanks for sharing.
ReplyDeleteAutoCad Training In Pune
Spoken English Course in Chennai
ReplyDeleteIn this digitally growing world, the demand for Software Testing has increased a lot, there are many institutes and training centers for its training but why you should choose SevenMentor for Software Testing Classes in Pune is because it provides:1> Training from basics to advance level and its course is designed for all kinds of students.
ReplyDeleteSoftware Testing Course In Pune
Software testing is a widespread approach to evaluating and ensuring that a product or application performs as intended. The benefits of testing include preventing bugs, reducing development costs, and improving performance.
ReplyDeletesoftware testing courses .php
Very Informative blog. Java course
ReplyDeleteThank you for this informative article! Your presentation of each tip was very clear and I found the tips on Mobile Testing to be particularly useful. Looking forward to reading more of your insightful articles in the future.
ReplyDeleteclinicalresearchcourses
Clinical research courses are an essential tool for students looking to pursue a career in the field of clinical research. These courses provide students with a comprehensive understanding of the field, including its purpose, the types of studies that are conducted, and the importance of regulatory compliance.
ReplyDeleteclinicalresearchcourses
great post, keep posting Salesforce Course In Pune
ReplyDeleteSo, I'm turning to this community for guidance. I want to hear from those who have explored alternative networking courses beyond CCNA. Whether you're a seasoned professional or just starting out, your insights could be invaluable in helping me choose the best path forward.
ReplyDeleteCCNA Training in Nagpur
SevenMentor offers a comprehensive Data science training in Pune, designed to equip students with in-depth knowledge and practical skills in data analysis, machine learning, and statistical modeling. The curriculum covers essential topics such as Python programming, data visualization, and big data technologies, ensuring a well-rounded education. With a blend of theoretical lessons and hands-on projects, the course aims to prepare participants for real-world challenges in the data science field. Expert trainers, who bring industry experience and insights, guide students through the learning process, ensuring they gain proficiency in the latest data science tools and techniques. This course is ideal for aspiring data scientists looking to advance their careers in a rapidly growing industry.
ReplyDelete