Anakin: Is it possible to learn this power?

Palpatine: Not from a Jedi.

— Star Wars: Episode III – Revenge of the Sith, George Lucas, 2005.

In this post I mean to tie together several puzzles I've struggled with, on this blog and elsewhere, for years; especially, on one hand, the philosophical implications of Gödel's results on the limitations of formal reasoning (post), and on the other hand, the implications of evidence that sapient minds are doing something our technological artifacts do not (post).

From time to time, amongst my exploratory/speculative posts here, I do come to some relatively firm conclusion; so, in this post, with the philosophical implications of Gödel. A central notion here will be that formal systems manipulate information from below, while sapiences manipulate it from above.

As a bonus I'll also consider how these ideas on formal logic might apply to my investigations on basic physics (post); though, that will be more in the exploratory/speculative vein.

As this post is mostly tying together ideas I've developed in earlier posts, it won't be nearly as long as the earlier posts that developed them. Though I continue to document the paths my thoughts follow on the way to any conclusions, those paths won't be long enough to meander too very much this time; for proper meandering, see the earlier posts.

Contents

Truth

Physics

Truth

Through roughly the second half of the nineteenth century, mathematicians aggressively extended the range of formal reasoning, ultimately reaching for a single set of axioms that would found all of logic and mathematics. That last goal was decisively nixed by Gödel's Theorem(s) in 1931. Gödel proved, in essence, that any sufficiently nontrivial formal axiomatic system, if it doesn't prove anything false, cannot prove itself to be self-consistent. It's still possible to construct a more powerful axiomatic system that can prove the first one self-consistent, but that more powerful system then cannot prove *itself* self-consistent. In fact, you can construct an infinite series of not-wrong axiomatic systems, each of which can prove all of its predecessors self-consistent, but each system cannot prove its own self-consistency.
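The tower of ever-stronger systems can be written out explicitly. As a concrete sketch (taking Peano arithmetic, PA, as a representative "sufficiently nontrivial" starting point; the choice of base system is mine, the argument doesn't depend on it):

```latex
T_0 = \mathrm{PA}, \qquad T_{n+1} = T_n + \mathrm{Con}(T_n)
```

where $\mathrm{Con}(T_n)$ is the arithmetized claim that $T_n$ is consistent. Each $T_{n+1}$ proves $\mathrm{Con}(T_m)$ for every $m \le n$ (the case $m = n$ is an axiom), yet by the second incompleteness theorem, if $T_{n+1}$ is consistent it cannot prove $\mathrm{Con}(T_{n+1})$.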

In other words, there is no well-defined maximum of truth obtainable by axiomatic means. By those means, you can go too far (allowing proofs of some things that aren't so), or you can stop short (failing to prove some things that are so), but you can't hit the target.

For those of us who work with formal reasoning a lot, this is a perplexing result. What should one make of it? Is there some notion of truth that is beyond the power of all these formal systems? And what would that even mean?

For the question of whether there is a notion of objective mathematical truth beyond the power of all these formal systems, the evident answer is, *not formally*. There's more to that than just the trivial observation that something more powerful than any axiomatic system cannot itself be an axiomatic system; we can also reasonably expect that whatever it is, we likely won't be able to prove its power is greater *axiomatically*.

I don't buy into the notion that the human mind mystically transcends the physical; an open mind I have, but I'm a reductionist at heart. Here, though, we have an out. In acknowledging that a hypothetical more-powerful something might not be formally provable more powerful, we open the door to candidates that we can't *formally* justify. Such as, a sapient mind that emerges by some combination of its constituent parts and so seemingly ought to be no more powerful than those parts, but... is. In practice. (There's a quip floating around, that "In theory, there is no difference between theory and practice. But, in practice, there is.")

A related issue here is the Curry-Howard correspondence, much touted in some circles as a fundamental connection between computation and logic. Except, I submit it can't be as fundamental as all that. Why? Because of the Church-Turing thesis. Which says, in essence, that there *is* a robust most-powerful sort of computation. In keeping with our expectation of an informal cap on formal power, the Church-Turing thesis in this general sense is inherently unprovable; however, specific parts of it are formally provable: equivalences between particular formal models of computation. The major proofs in that vein, establishing the credibility of the general principle, were done in the several years after Gödel's Theorems proved that there *isn't* a most-powerful sort of formal logic. Long story short: most-powerful sort of computation, yes; most-powerful sort of formal logic, no; therefore, computation and formal logic are not the same thing.
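Those equivalence results can at least be illustrated, though certainly not proved, in miniature. Here is a hedged toy sketch of my own devising: the same arithmetic computed in two different formal models, ordinary integers and lambda-calculus Church numerals, with a mechanical translation between them that lets us check the models agree.

```python
# Church numerals: the number n is the higher-order function that applies f
# to x exactly n times.  This is a toy illustration of comparing two formal
# models of computation, not a proof of anything.
def church(n):
    """Encode a nonnegative int as a Church numeral."""
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def unchurch(c):
    """Decode a Church numeral back to an int by counting applications."""
    return c(lambda k: k + 1)(0)

# Arithmetic defined purely inside the lambda-calculus model:
def church_add(a, b):
    return lambda f: lambda x: a(f)(b(f)(x))

def church_mul(a, b):
    return lambda f: a(b(f))

# The two models agree on every input we check -- a toy analogue of the
# formal equivalence proofs between models of computation.
for m in range(6):
    for n in range(6):
        assert unchurch(church_add(church(m), church(n))) == m + n
        assert unchurch(church_mul(church(m), church(n))) == m * n
print("the two models agree on + and * for all small inputs tested")
```

The real theorems, of course, cover all inputs and all programs, by exhibiting translations between whole models (Turing machines, lambda calculus, recursive functions) rather than spot-checking.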

Through my recent post exploring the difference between sapient minds and all our technological artifacts, I concluded, amongst other things, that (1) sapience cannot be measured by any standardized test, because for any *standardized* test one can always construct a technological artifact that will outperform sapient minds; and (2) sapient minds are capable of grasping the "big picture" within which all technology behaves, including what the purpose of a set of formal rules is, whether the purpose is achieved, when to step outside the rules, and how to improvise behavior once outside.

A complementary observation about formal systems is that each individual action taken —each axiomatic application— is driven by the elementary details of the system state. That is, the individual steps of the formal system are selected on a view looking up from the bottom of the information structure, whereas sapience looks downward from somewhere higher in the information structure. This can only be a qualitative description of the difference between the sapient and formal approaches, for the simple reason that we do not, in fact, know how to do sapience. As discussed in the earlier post, our technology does not even attempt to achieve actual sapience because we don't know, from a technical perspective, what we would be trying to achieve — since we can't even measure it, though we have various informal ways to observe its presence.

Keep in mind that this quality of sapience is not uniform. Though some cases are straightforward, in general clambering up into the higher levels of structure, from which to take a wide-angle view, may be extremely difficult even with sapience, and some people are better at it than others, apparently for reasons of nature, nurture, and circumstance alike. Indeed, the mix of reasons that leads a Newton or an Einstein to climb particularly high in the structure is just the sort of thing I'd expect to be quite beyond the practical grasp of formal analysis.

What we see in Gödel's results is, then, that even when we accept a reductionist premise that the whole structure is built up by axioms from an elementary foundation, for a sufficiently powerful system there are fundamental limits to the sorts of high-level insights that can be assembled by building strictly upward from the bottom of the structure.

Is that a big insight? Formally it says nothing at all. But I can honestly say that, having reached it, for the first time in <*mumble-mumble*> decades of contemplation I see Gödel's results as evidence of something that makes sense to me rather than evidence that something is failing to make sense to me.

In modern physics, too, we have a large-scale phenomenon (classical reality) that evidently cannot be straightforwardly built up by simple accretion of low-level elements of the system (quanta). Is it possible to understand this as another instance of the same broad phenomenon as the failure, per Gödel, to build a robust notion of truth from elementary axioms?

Probably not, as I'll elaborate below. However, in the process I'll turn up some ideas that may yet lead somewhere, though quite *where* remains to be seen; so, a bit of meandering after all.

Gödel's axiomatic scenario has two qualitative features not immediately apparent for modern physics:

- Axiomatic truth appears to be part of, and therefore to evolve toward, absolute truth; the gap between the two appears to be a quantitative thing that shrinks as one continues to derive results axiomatically, even though it's unclear whether it shrinks toward zero, or toward some other-sized gap. Whereas, the gap between quantum state and classical state is clearly qualitative and does not really diminish under any circumstances.
- The axiomatic shortfall only kicks in for *sufficiently powerful* systems. It's not immediately clear what property in physics would correspond to axiomatic power of this sort.

Quantum state-evolution does not smooth out toward classical state-evolution at scale; this is the point of the Schrödinger's-cat thought experiment. A Gödel-style effect in physics would seem to require some sort of shading from quantum state-evolution *toward* classical state-evolution. I don't see what shading of that sort would mean.

There is another possibility, here: turn the classical/quantum relationship on its head. Could *classical* state-evolution shade toward *quantum* state-evolution? Apparently, yes; I've already described a way for this to happen, when in my first post on co-hygiene I suggested that the network topology of spacetime, acting at a cosmological scale, could create a seeming of nondeterminism at comparatively small scales. Interestingly, this would also be a reversal in scale, with the effect flowing *from* cosmological scale *to* small scale. However, the very fact that this appears to flow from large to small does not fit the expected pattern of the Gödel analogy, which plays on the contrast between bottom-up formalism and top-down sapience.

On the other front, what of the sufficient-power threshold, clearly featured on the logic side of the analogy? If the quantum/classical dichotomy is an instance of the same effect, it would seem there must be something in physics corresponding to this power threshold. Physics considered in the abstract as a description of physical reality has no obvious place for *power* in a logical or computational sense. Interestingly, however, the particular alternative vein of speculation I've been exploring here lately (co-hygiene and quantum gravity) recommends modeling physical reality as a discrete structure that evolves through a dimension orthogonal to spacetime, progressively toward a stable state approximating the probabilistic predictions of quantum mechanics — and it is reasonable to ask how much computational power the primitive operations of this orthogonal evolution of spacetime ought to have.

In such a scenario, the computational power is applied to state-evolution from some initial state of spacetime to a stable outcome, for some sense of *stable* to be determined. As a practical matter, this amounts to a transformation from some probability distribution of initial states of spacetime, to a probability distribution of stable states of spacetime that presumably resembles the probability distributions predicted by quantum mechanics. As it is unclear how one chooses the initial probability distribution, I've toyed with the idea that a quantum mechanics-like distribution might be some sort of fixpoint under this transformation, so that spacetime would tend to come out resembling quantum mechanics more-or-less-regardless of the initial distribution.
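The fixpoint idea can at least be illustrated with a toy transformation, vastly simpler than anything spacetime-rewriting would involve. Here a fixed stochastic matrix (invented numbers, purely for illustration) maps distributions to distributions; iterating it drives any initial distribution to the same fixed point, which is the flavor of behavior the speculation asks for.

```python
# Toy illustration of a distribution that is a fixpoint of a transformation.
# A fixed stochastic matrix T (rows sum to 1, all entries positive) maps
# distributions to distributions; under these conditions iteration converges
# to the same fixed point regardless of the initial distribution.  The
# matrix is invented; nothing here models actual spacetime rewriting.
def transform(dist, T):
    """Apply stochastic matrix T to a distribution vector (dist @ T)."""
    n = len(dist)
    return [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]

def fixpoint(dist, T, steps=200):
    for _ in range(steps):
        dist = transform(dist, T)
    return dist

T = [[0.5, 0.4, 0.1],
     [0.2, 0.6, 0.2],
     [0.3, 0.3, 0.4]]

a = fixpoint([1.0, 0.0, 0.0], T)
b = fixpoint([0.0, 0.0, 1.0], T)
# Very different initial distributions end up essentially identical:
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
print("converged to", [round(x, 4) for x in a])
```

Whether the actual rewriting relation admits a unique attracting fixpoint like this, rather than many, or none, is exactly the open question.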

The spacetime-rewriting relation would also be the medium through which cosmological-scale determinism would induce small-scale apparent nondeterminism.

Between inducing nondeterminism and transforming probability distributions, there would seem to be, potentially, great scope for dependence on the relative computational power of the rewriting relation. With such a complex interplay of factors at stake, it seems likely that even if there were a Gödel-like power threshold lurking, it would have to be deduced *from* a much better understanding of the rewriting relation, rather than contributing *to* a basic understanding of the rewriting relation. Nevertheless, I'm inclined to keep a weather eye out for any such power threshold as I move forward.

> Gödel proved, in essence, that any sufficiently nontrivial formal
> axiomatic system, if it doesn't prove anything false, cannot prove
> itself to be self-consistent.

This statement, while often repeated, is not entirely correct. For Gödel's second theorem to be applicable, the formal system must satisfy a number of technical properties (the "Hilbert–Bernays provability conditions"). It is relatively easy to construct a formal system essentially equivalent to PA that trivially proves its own consistency. This is done in the Hilbert–Bernays opus. A more self-contained treatment is in Feferman's *Arithmetization of Metamathematics in a General Setting*. One way (very roughly) is to build a system PA1, with the same terms, formulae, inference rules, etc. as PA, and define provability in PA1 as follows: "P is a proof of S in PA1 iff: (i) P is a proof of S in PA and (ii) there is no shorter proof of 'not S' in PA".
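For concreteness, that PA1 provability condition can be phrased algorithmically. This is only a hedged sketch over a toy stand-in for PA (the functions `toy_is_proof` and `shorter_proofs` and the proof representation are all invented here), meant to show the shape of clause (ii), not to model real arithmetized provability.

```python
# Hedged sketch of the PA1 construction, phrased over an abstract proof
# checker.  `toy_is_proof` is an invented stand-in for a real PA proof
# predicate: here a "proof" of formula s is any list of strings ending in s
# (so everything is "provable" -- the toy system is inconsistent, which
# conveniently exercises clause (ii)).
def toy_is_proof(p, s):
    return bool(p) and p[-1] == s

def neg(s):
    return s[1:] if s.startswith("~") else "~" + s

def shorter_proofs(n, target):
    """Toy enumeration of proofs of `target` shorter than length n.
    In the toy system the shortest proof of any s is just [s] (length 1),
    which is the only case clause (ii) needs here."""
    return [[target]] if n > 1 else []

def is_pa1_proof(p, s):
    """P proves S in PA1 iff (i) P proves S in PA and (ii) there is no
    strictly shorter PA-proof of not-S."""
    if not toy_is_proof(p, s):
        return False
    return not any(toy_is_proof(q, neg(s)) for q in shorter_proofs(len(p), neg(s)))

# A length-1 proof of "A" survives clause (ii); a length-2 proof does not,
# because the toy system also has a length-1 proof of "~A".
assert is_pa1_proof(["A"], "A")
assert not is_pa1_proof(["x", "A"], "A")
```

The point of the real construction is that this tie-breaking predicate is arithmetically definable, yet fails the Hilbert–Bernays conditions, which is how the resulting system slips past the second theorem.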

I did qualify that the formal axiomatic system must be "sufficiently nontrivial".

PA1 has exactly the same theorems as PA, so their formal "triviality" is the same.

If you are concerned with modifying PA, you can avoid doing this altogether (see Feferman's op. cit. for details). In PA take Con—the usual formula expressing consistency. PA cannot prove Con. It is possible to build another formula Con', such that Con' also expresses PA consistency and PA proves Con'. The equivalence of Con and Con' can be proved at the meta-level, but not within PA.

ReplyDeleteRe Truth: "That last goal was decisively nixed by Gödel's Theorem(s) in 1931."

Unless you're a finitist. ;)

Re: "a sapient mind that emerges by some combination of its constituent parts and so seemingly ought to be no more powerful than those parts, but... is."

All minds we call sapient today are finite and inconsistent. They cannot model or represent all natural numbers. Gödel's Incompleteness Theorems don't seem to be applicable.

Of course, this is a 'finite' in context of very large numbers - tens of billions of nodes, hundreds of billions of connections, fractional transmissions, non-uniform organization. A single human brain is orders of magnitude larger, more sophisticated, and buggier than any formal system or program ever developed by humans.

Brains aren't the same as minds or sapience, e.g. in context of external memory. But so far every being we consider sapient has a rather hefty brain.

Re: "Church-Turing thesis"

On one hand, Turing complete computation models are inconsistent. On another, all physically realizable computations are bounded by space, time, matter, and energy and are thus not Turing complete.

Sapience is a condition we observe to exist within a finite system. There is no evidence of a 'soul' or mysterious something providing energy or computation or useful oracular input. All this stretching for infinite causes might be appealing to a spiritual philosophy that places humans on pedestals. But Occam's razor says: YAGNI.

Inconsistency as such is logical, rather than computational. (I note your introduction of the spurious concepts of soul and oracle. These suggest to me that your expectations have replaced what I'm saying with something quite different, and you've then responded to the replacement.)

In practice, all logics are bounded by finite computation - finite representations and traces. And under these constraints, informal reasoning is certainly not more powerful than formal reasoning.

Indeed, formal logics can reason informally about themselves. For example, we could represent a huge neural-network proof-assistant for a logic, within that logic. The neural network might be better at constructing proofs than any human expert, yet be entirely opaque regarding the concepts and intuitions developed while training.

Unless you allow for a hidden, mysterious variable that would require *infinite* representation of informal reasoning within a formal system, there isn't even a small possibility for informal reasoning to be more powerful.

Evidently you didn't understand the post. No doubt there's a better way to explain my point, but, tbh, the available evidence suggests it wouldn't help in your case. Over time I've observed in general you have trouble self-diagnosing errors in your own thinking, and specifically in this area you fail to think carefully because of, evidently, fundamental misanthropy.

You clearly understand some implication of Gödel's theorems.

"question of whether there is a notion of objective mathematical truth beyond the power of all these formal systems, the evident answer is, not formally"

But it seems you don't clearly grasp that so-called "informal" reasoning is technically a way to say "computational/mechanical" reasoning, and thus still falls within the same limitations of seeking mathematical truths as the weakest formal system that can simulate the mechanics.

Thus, to you, it looks like I'm confusing computation and logic. To me, it seems you're failing to account for the relevance of computation to an entire half of your argument. When you claim informal reasoning might be superior, you're essentially waving your hands to claim about potential for some mysterious mechanics that cannot be simulated by any formal system.

Also, if I'm feeling misanthropy, you can blame the state of US and world politics right now. Do avoid the fundamental attribution error. I've still got a sputtering flame of optimism for humanity sheltered deep in my heart.

"But it seems you don't clearly grasp that so-called 'informal' reasoning is technically a way to say 'computational/mechanical' reasoning, [...]"

This is a key point you're missing. (I did discuss it carefully in the earlier post on sapience and non-sapience, which I carefully cited at the top of this post, but idk if the earlier post would work any better for you, so, fine, we're discussing it here.) I don't call it "informal" because of some failure to realize that I should start by assuming sapience is reducible to formalism. I call it "informal" because I've realized I shouldn't start by assuming that. I'm not thinking about this less clearly; I'm thinking about it more cautiously. This is an intense exercise in *not jumping to a conclusion*. It's relatively easy to set up an equation with an unknown in it, and not assume what the value of that unknown is (although one can still mess up, e.g. by failing to notice that one has implicitly assumed it's non-zero, or the like). It's much, much more difficult to reason without making assumptions about one's own reasoning. One has to carefully, patiently consider the implications of the non-assumption, and this is where the misanthropy thing comes in: because you're evidently so eager to dismiss sapience (which is not necessarily a human trait, btw), it seems you can't exercise the needed patience in teasing out what sapience might have that formalism doesn't. Indeed, my previous and current posts have only made a start on that exploration.

Regarding the state of US and world politics right now... I'm not without sympathy; I'm concerned atm with allowing it to interfere with clear thinking. I've recently been reading E.T. Bell's *Men of Mathematics* (written in the 1930s, between WWI and WWII); Bell has various remarks on the French Revolution from mathematicians' perspectives. From his chapter on Lagrange: "[...] the revolting cruelties sickened him and all but destroyed what little faith he had left in human nature and common sense. [...] Although practically the whole of Lagrange's working life had been spent under the patronage of royalty his sympathies were not with the royalists. Nor were they with the revolutionists. He stood squarely and unequivocally on the middle ground of civilization which both sides had ruthlessly invaded." Kind of puts a different spin on describing a politician as a "centrist".

I think sapience is very poorly defined. Even the words used when people attempt to define it - wisdom, discernment, intuitive knowing, transcendent knowledge - are, transitively, poorly defined. If sapience isn't well defined, how do we know humans have it? How can we know that a simple neural network that mastered surviving within the Asteroids game does not have sapience? Is sapience necessarily scoped to the human world, or to the world we evolved in?

People refusing to define terms, while claiming they have it and others do not, start to sound more like self-congratulatory tribal affiliations than a useful argument. And I have the impression all of humanity - or at least those elite enough to use the word - has been collectively patting itself on the back for sapience, while often refusing to seek any objective definition for it, perhaps because definition would threaten human supremacy.

I am interested in exploring mental phenomena we associate with 'sapience'. But I would dismiss an arbitrary claim, without objective measures, that humans are sapient and squirrels/slime molds/neural nets are not.

You try to not jump to conclusions. But haven't you already jumped to "humans are sapient"? Did you objectively define sapience before making this judgement?

Regarding you introducing 'unknown' variables for informal reasoning: Don't multiply entities beyond necessity.

If you want to "tease out what sapience might have that formalism doesn't", the best way to do that is to clearly define sapience - i.e. to give these properties of sapience some names and clear descriptions, then compare to formal systems and neural networks, etc. If you need a hidden variable, leave it at the end of your definition for sapience - room for expansion after you take offense at a neural network exhibiting everything you attributed to sapience so far. At least that will be productive for all parties interested in sapience.

E.g. do neural networks have intuitive knowledge? Define this. How about: when a lot of little clues contribute to a valid judgement about the world, without an obvious primary source? Well, neural nets certainly have that, e.g. when lots of weak signals add up to a trigger.
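That proposed definition is easy to make concrete. A minimal sketch, with invented clue names and weights: no single clue suffices, but several weak ones jointly cross a decision threshold.

```python
# Toy "intuitive judgement" per the definition above: many weak clues, none
# decisive alone, jointly crossing a threshold.  Clue names and weights are
# invented for illustration.
CLUES = {"smell_of_rain": 0.2, "dark_clouds": 0.3,
         "falling_pressure": 0.25, "birds_quiet": 0.15}

def judge(observed, threshold=0.6):
    """Fire a judgement when the accumulated weak signals pass the threshold."""
    return sum(CLUES[c] for c in observed) >= threshold

assert not judge(["dark_clouds"])                                   # one clue: no
assert judge(["smell_of_rain", "dark_clouds", "falling_pressure"])  # 0.75: yes
```

A neural network's weighted sums do exactly this kind of aggregation, which is the commenter's point; whether that exhausts what "intuitive knowledge" means is the question under dispute.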

When we have a lot of good definitions, we can do a lot of good with them. Trying to keep sapience all mysterious and undefined is good for only one thing: protecting the tribal affiliation, i.e. human supremacy.

You're criticizing me for not making your mistake; in essence your complaint is tantamount to ridiculing an algebraic equation in variable x on grounds that it's obviously invalid because the value of x hasn't been defined. I've been seriously exploring really interesting questions here, starting with whether it's even possible for sapience to be more powerful than formalism, and whether there is any evidence that it is; the difference is, you've chosen to refuse to ask those questions.

I'm criticizing introduction of variable x because the NEED for x hasn't been demonstrated. Occam's Razor. I'd make a similar criticism if x were well defined, e.g. number of aliens beaming thoughts to me.

Unfortunately, we can't even begin to demonstrate necessity without an objective definition of sapience that can be effectively and falsifiably applied to humans, squirrels, neural nets, and other things that can be said to 'think' in some manner.

Your approach to studying sapience seems much more philosophical than scientific. Sapience, if it exists in this world, should be robust enough to survive scientific scrutiny.

Also, whether sapience has potential to be more powerful than formalism is a very silly question to "start with".

More interesting and salient starter questions, to me, are "how do I know whether I'm sapient?" and "how do I know whether someone/something else is sapient?"

And to demonstrate a need for 'more powerful than formalism', first you'll need to show that your definition of sapience objectively discriminates against formal systems.

Your last remark, about interesting questions, would almost be tempting to respond to (despite its somewhat immature wording) if not that it's embedded in a context of abuse and ignoring what I've actually written on the subject.

Change the context if that suits your preferences. You can always write a fresh blog post.

You've written a few sentences on the subject of what sapience means to you across many blog posts, albeit usually entangled with the local context and other concepts.

If you could distill it down to the features you consider most critical - for example, adaptiveness to unexpected problems is a recurring theme, but should not be the whole of sapience (or we'd just use 'adaptiveness') - in a manner that we can begin to measure sapience, that would be quite useful.

The abusive atmosphere has been your doing, and you suggest *I* should do something about it by writing a post that I've not only already written, but that I explicitly mentioned in the first sentence of this post and provided an explicit link to. You are de facto trolling me. This discussion is closed.

With fewer hostile assumptions this time (feel free to delete prior):

Turing machines don't exist, only finite approximations of them exist. Sapiences do exist, at least insofar as we consider humans to be sapient.

Any finite system can only reach a subset of mathematical truths. It is quite feasible to model an axiomatic system that can exhaustively explore all possible conclusions reachable by any finite system. The incompleteness theorems don't apply to the question of truths reachable within finite systems.

To be more powerful than formal systems (where 'power' is the narrow measure of access to 'truth'), a sapience must *first* be more powerful than finite systems. However, there is no evidence that sapiences need or have access to infinite anything.

Of course, in practice, even sixty binary decisions is beyond the technological limits of exhaustive checking.
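The scale of that remark is easy to check; assuming a round billion checks per second (an invented figure, purely for scale):

```python
# Back-of-envelope check of the "sixty binary decisions" remark.  The rate
# of a billion checks per second is an invented round number for scale.
combinations = 2 ** 60
checks_per_second = 10 ** 9
seconds = combinations / checks_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{combinations} combinations: about {years:.0f} years at 1e9 checks/sec")
```

So sixty binary decisions already mean roughly 10^18 cases, i.e. decades of exhaustive checking even at that rate.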

Sapience could arguably provide more *efficient* access to truths, or at least a subset of them, via intuiting leaps of logic, developing hypotheses, working backwards to grounded arguments. It might prune searches it views as unproductive. Of course, there may also be biases and errors that cause a sapience to miss valid truths, to ignore subtle but essential distinctions between two similar arguments, to overreach then fail to correct.

Based on human history, at least, it seems that the vast majority of sapient thinking falls into error long before it reaches 'truth'. Those rigorous, structured methods - such as formal logics and falsifiable sciences - have greatly augmented our ability, but arguably don't need sapiences.

For example, in theory, we could automate our labs, automate hypothesis generation based on data, and automate production of new labs to potentially falsify hypotheses. Automating search for mathematical constructs that can predict observed properties would be part of hypothesis generation.

Some machine-learning with informal reasoning might augment the system, e.g. to automate generation of hypotheses that are more likely to survive testing, and adversarial generation of experiments that are more likely to falsify hypotheses. No need for a general AI that could comprehend natural language or form political opinions or have a sense of self and self-interest.

With these issues in mind, I'm not positively inclined to an assumption that sapiences are inherently powerful in pursuit of truth.

Today, our sapient minds are certainly among the most powerful tools we have for pursuit of truth. Despite the flaws of bias and intuition, the tedium of running experiments to falsify hypotheses, the challenges of recognizing useful patterns in data. But this speaks as much of the current limitations of our tech as it does the strength of our minds.