Saturday, June 2, 2018

Sapience and the limits of formal reasoning

Anakin:      Is it possible to learn this power?
Palpatine:  Not from a Jedi.
Star Wars: Episode III – Revenge of the Sith, George Lucas, 2005.

In this post I mean to tie together several puzzles I've struggled with, on this blog and elsewhere, for years; especially, on one hand, the philosophical implications of Gödel's results on the limitations of formal reasoning (post), and on the other hand, the implications of evidence that sapient minds are doing something our technological artifacts do not (post).

From time to time, amongst my exploratory/speculative posts here, I do come to some relatively firm conclusion; so it is in this post, with the philosophical implications of Gödel.  A central notion here will be that formal systems manipulate information from below, while sapiences manipulate it from above.

As a bonus I'll also consider how these ideas on formal logic might apply to my investigations on basic physics (post); though, that will be more in the exploratory/speculative vein.

As this post is mostly tying together ideas I've developed in earlier posts, it won't be nearly as long as the earlier posts that developed them.  Though I continue to document the paths my thoughts follow on the way to any conclusions, those paths won't be long enough to meander very much this time; for proper meandering, see the earlier posts.

Contents
Truth
Physics
Truth

Through roughly the second half of the nineteenth century, mathematicians aggressively extended the range of formal reasoning, ultimately reaching for a single set of axioms that would found all of logic and mathematics.  That last goal was decisively nixed by Gödel's Theorem(s) in 1931.  Gödel proved, in essence, that any sufficiently nontrivial formal axiomatic system, if it doesn't prove anything false, cannot prove itself to be self-consistent.  It's still possible to construct a more powerful axiomatic system that can prove the first one self-consistent, but that more powerful system then cannot prove itself self-consistent.  In fact, you can construct an infinite series of not-wrong axiomatic systems, each of which can prove all of its predecessors self-consistent, but each system cannot prove its own self-consistency.
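
To make the "infinite series" concrete, here is the standard construction written out, with Peano arithmetic (PA) as the base system; the notation is mine, added purely for illustration, not part of the original argument:

    \[
      T_0 = \mathrm{PA}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n)
    \]
    % Godel's second incompleteness theorem:  if $T_n$ is consistent (and
    % satisfies the usual technical conditions), then $T_n \nvdash \mathrm{Con}(T_n)$,
    % while $T_{n+1} \vdash \mathrm{Con}(T_n)$ by construction.

Each step up the ladder gains something the previous system lacked, and the ladder never tops out.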

In other words, there is no well-defined maximum of truth obtainable by axiomatic means.  By those means, you can go too far (allowing proofs of some things that aren't so), or you can stop short (failing to prove some things that are so), but you can't hit the target.

For those of us who work with formal reasoning a lot, this is a perplexing result.  What should one make of it?  Is there some notion of truth that is beyond the power of all these formal systems?  And what would that even mean?

For the question of whether there is a notion of objective mathematical truth beyond the power of all these formal systems, the evident answer is, not formally.  There's more to that than just the trivial observation that something more powerful than any axiomatic system cannot itself be an axiomatic system; we can also reasonably expect that whatever it is, we likely won't be able to prove its power is greater axiomatically.

I don't buy into the notion that the human mind mystically transcends the physical; an open mind I have, but I'm a reductionist at heart.  Here, though, we have an out.  In acknowledging that a hypothetical more-powerful something might not be formally provable more powerful, we open the door to candidates that we can't formally justify.  Such as, a sapient mind that emerges by some combination of its constituent parts and so seemingly ought to be no more powerful than those parts, but... is.  In practice.  (There's a quip floating around, that "In theory, there is no difference between theory and practice. But, in practice, there is.")

A related issue here is the Curry-Howard correspondence, much touted in some circles as a fundamental connection between computation and logic.  Except, I submit it can't be as fundamental as all that.  Why?  Because of the Church-Turing thesis.  Which says, in essence, that there is a robust most-powerful sort of computation.  In keeping with our expectation of an informal cap on formal power, the Church-Turing thesis in this general sense is inherently unprovable; however, specific parts of it are formally provable: the equivalences between particular formal models of computation.  The major proofs in that vein, establishing the credibility of the general principle, were done in the several years after Gödel's Theorems proved that there isn't a most-powerful sort of formal logic.  Long story short:  most-powerful sort of computation, yes; most-powerful sort of formal logic, no; therefore, computation and formal logic are not the same thing.
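
For readers who haven't met the Curry-Howard correspondence concretely, here is a deliberately tiny sketch in Haskell; the examples are mine, chosen only to show the flavor of "propositions as types, proofs as programs":

    -- Proposition: ((A implies B) and A) implies B  (modus ponens).
    -- Under Curry-Howard the proposition is a type, a proof is a program of
    -- that type, and function application plays the role of the inference step.
    modusPonens :: (a -> b) -> a -> b
    modusPonens f x = f x

    -- Proposition: A implies (B implies A)  (axiom K of Hilbert-style systems).
    axiomK :: a -> b -> a
    axiomK x _ = x

At this level the pairing of proofs with programs is perfectly tidy; the point above is only that the pairing cannot make the two realms structurally identical, given that one has a robust maximum of power and the other does not.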

Through my recent post exploring the difference between sapient minds and all our technological artifacts, I concluded, amongst other things, that  (1) sapience cannot be measured by any standardized test, because for any standardized test one can always construct a technological artifact that will outperform sapient minds; and  (2) sapient minds are capable of grasping the "big picture" within which all technology behaves, including what the purpose of a set of formal rules is, whether the purpose is achieved, when to step outside the rules, and how to improvise behavior once outside.

A complementary observation about formal systems is that each individual action taken —each axiomatic application— is driven by the elementary details of the system state.  That is, the individual steps of the formal system are selected on a view looking up from the bottom of the information structure, whereas sapience looks downward from somewhere higher in the information structure.  This can only be a qualitative description of the difference between the sapient and formal approaches, for the simple reason that we do not, in fact, know how to do sapience.  As discussed in the earlier post, our technology does not even attempt to achieve actual sapience because we don't know, from a technical perspective, what we would be trying to achieve — since we can't even measure it, though we have various informal ways to observe its presence.
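
As a toy illustration of what "looking up from the bottom" means here, the following Haskell sketch treats a formal system as a blind generator; the encoding of theorems and rules is a placeholder of mine, not a claim about how real proof systems are represented:

    import Data.List (nub)

    -- A purely bottom-up derivation process: each step applies every inference
    -- rule to whatever has been derived so far.  Nothing in the loop "knows"
    -- which theorems matter or why; selection is driven entirely from below.
    type Theorem = String                    -- placeholder encoding
    type Rule    = [Theorem] -> [Theorem]    -- placeholder inference rule

    derivations :: [Rule] -> [Theorem] -> [[Theorem]]
    derivations rules axioms = iterate step axioms
      where
        step known = nub (known ++ concatMap ($ known) rules)

Whatever a sapient mind is doing when it decides which of those derived theorems is worth having, it isn't happening anywhere inside that loop.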

Keep in mind that this quality of sapience is not uniform.  Though some cases are straightforward, in general clambering up into the higher levels of structure, from which to take a wide-angle view, may be extremely difficult even with sapience, and some people are better at it than others, apparently for reasons of nature, nurture, and circumstance.  Indeed, the mix of reasons that leads a Newton or an Einstein to climb particularly high in the structure is just the sort of thing I'd expect to be quite beyond the practical grasp of formal analysis.

What we see in Gödel's results is, then, that even when we accept a reductionist premise that the whole structure is built up by axioms from an elementary foundation, for a sufficiently powerful system there are fundamental limits to the sorts of high-level insights that can be assembled by building strictly upward from the bottom of the structure.

Is that a big insight?  Formally it says nothing at all.  But I can honestly say that, having reached it, for the first time in <mumble-mumble> decades of contemplation I see Gödel's results as evidence of something that makes sense to me rather than evidence that something is failing to make sense to me.

Physics

In modern physics, too, we have a large-scale phenomenon (classical reality) that evidently cannot be straightforwardly built up by simple accretion of low-level elements of the system (quanta).  Is it possible to understand this as another instance of the same broad phenomenon as the failure, per Gödel, to build a robust notion of truth from elementary axioms?

Probably not, as I'll elaborate below.  However, in the process I'll turn up some ideas that may yet lead somewhere, though quite where remains to be seen; so, a bit of meandering after all.

Gödel's axiomatic scenario has two qualitative features not immediately apparent for modern physics:

  • Axiomatic truth appears to be part of, and therefore to evolve toward, absolute truth; the gap between the two appears to be a quantitative thing that shrinks as one continues to derive results axiomatically, even though it's unclear whether it shrinks toward zero, or toward some other-sized gap.  Whereas, the gap between quantum state and classical state is clearly qualitative and does not really diminish under any circumstances.
  • The axiomatic shortfall only kicks in for sufficiently powerful systems.  It's not immediately clear what property in physics would correspond to axiomatic power of this sort.

The sapience/formalism dichotomy doesn't manifest the same way for different sorts of structure; witness the aforementioned difference between computational power and axiomatic power, where apparently one has a robust maximum while the other does not.  There is no obvious precedent to expect the dichotomy to generate a Gödel-style scale-gap in arbitrary settings.  Nonetheless, might there still be a physics analog to these features of axiomatic systems?

Quantum state-evolution does not smooth out toward classical state-evolution at scale; this is the point of the Schrödinger's-cat thought experiment.  A Gödel-style effect in physics would seem to require some sort of shading from quantum state-evolution toward classical state-evolution.  I don't see what shading of that sort would mean.

There is another possibility, here:  turn the classical/quantum relationship on its head.  Could classical state-evolution shade toward quantum state-evolution?  Apparently, yes; I've already described a way for this to happen, when in my first post on co-hygiene I suggested that the network topology of spacetime, acting at a cosmological scale, could create a seeming of nondeterminism at comparatively small scales.  Interestingly, this would also be a reversal in scale, with the effect flowing from cosmological scale to small scale.  However, the very fact that this appears to flow from large to small does not fit the expected pattern of the Gödel analogy, which plays on the contrast between bottom-up formalism and top-down sapience.

On the other front, what of the sufficient-power threshold, clearly featured on the logic side of the analogy?  If the quantum/classical dichotomy is an instance of the same effect, it would seem there must be something in physics corresponding to this power threshold.  Physics considered in the abstract as a description of physical reality has no obvious place for power in a logical or computational sense.  Interestingly, however, the particular alternative vein of speculation I've been exploring here lately (co-hygiene and quantum gravity) recommends modeling physical reality as a discrete structure that evolves through a dimension orthogonal to spacetime, progressively toward a stable state approximating the probabilistic predictions of quantum mechanics — and it is reasonable to ask how much computational power the primitive operations of this orthogonal evolution of spacetime ought to have.

In such a scenario, the computational power is applied to state-evolution from some initial state of spacetime to a stable outcome, for some sense of stable to be determined.  As a practical matter, this amounts to a transformation from some probability distribution of initial states of spacetime, to a probability distribution of stable states of spacetime that presumably resembles the probability distributions predicted by quantum mechanics.  As it is unclear how one chooses the initial probability distribution, I've toyed with the idea that a quantum mechanics-like distribution might be some sort of fixpoint under this transformation, so that spacetime would tend to come out resembling quantum mechanics more-or-less-regardless of the initial distribution.
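
To make the fixpoint idea slightly more tangible, here is a schematic Haskell sketch; the representation of distributions, the transformation induced by the rewriting relation, and the convergence test are all placeholders of mine, standing in for structures the speculation hasn't pinned down:

    -- Schematic: repeatedly apply the rewriting-induced transformation to a
    -- probability distribution over spacetime states, stopping when the
    -- distribution stops changing appreciably (an approximate fixpoint).
    type Dist state = [(state, Double)]          -- placeholder representation

    fixpointDist :: (Dist s -> Dist s)           -- transformation from rewriting
                 -> (Dist s -> Dist s -> Bool)   -- "close enough" convergence test
                 -> Dist s                       -- initial distribution
                 -> Dist s
    fixpointDist transform closeEnough = go
      where
        go d | closeEnough d d' = d'
             | otherwise        = go d'
          where d' = transform d

The conjecture, in these terms, would be that for a broad range of initial distributions the result resembles the probability distributions predicted by quantum mechanics.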

The spacetime-rewriting relation would also be the medium through which cosmological-scale determinism would induce small-scale apparent nondeterminism.

Between inducing nondeterminism and transforming probability distributions, there would seem to be, potentially, great scope for dependence on the relative computational power of the rewriting relation.  With such a complex interplay of factors at stake, it seems likely that even if there were a Gödel-like power threshold lurking, it would have to be deduced from a much better understanding of the rewriting relation, rather than contributing to a basic understanding of the rewriting relation.  Nevertheless, I'm inclined to keep a weather eye out for any such power threshold as I move forward.

1 comment:

  1. > Gödel proved, in essence, that any sufficiently nontrivial formal
    > axiomatic system, if it doesn't prove anything false, cannot prove
    > itself to be self-consistent.

    This statement, while often repeated, is not entirely correct. For Gödel's second theorem to be applicable, the formal system must satisfy a number of technical properties ("Hilbert–Bernays provability conditions"). It is relatively easy to construct a formal system essentially equivalent to PA that trivially proves its own consistency. This is done in the Hilbert–Bernays opus. A more self-contained treatment is in Feferman's "Arithmetization of metamathematics in a general setting". One way (very roughly) is to build a system PA1, with the same terms, formulae, inference rules, etc. as PA, and define provability in PA1 as follows: "P is a proof of S in PA1 iff: (i) P is a proof of S in PA and (ii) there is no shorter proof of 'not S' in PA".


