DOCTOR: I knew a Galactic Federation once, lots of different lifeforms so they appointed a justice machine to administer the law.
ROMANA: What happened?
DOCTOR: They found the Federation in contempt of court and blew up the entire galaxy.
— The Stones of Blood, Doctor Who, BBC, 1978.
The biggest systemic threat to the future of civilization at the moment, I submit, is that we will design out of it the most important information-processing asset we have: ourselves. Sapient beings. Granted, there is a lot of bad stuff going on in the world right now; I put this threat first because coping with other problems tends to depend on civilization's collective wisdom.
That is, we're much less likely to get into trouble by successfully endowing our creations with sapience, than by our non-sapient creations leaching the sapience out of us. I'm not just talking about AIs, though that's a hot topic for discussion lately; our non-sapient creations include, for a few examples, corporations (remember Mitt Romney saying "corporations are people"?), bureaucracy (cf. Franz Kafka), AIs, big data analysis, restrictive user interfaces, and totalitarian governments.
I'm not saying AI isn't powerful, or useful. I'm certainly not suggesting human beings are all brilliant and wise — although one might argue that stupidity is something only a sapient being can achieve. Computers can't be stupid. They can do stupid things, but they don't produce the stupidity, merely conduct and amplify it. Including, of course, amplifying the consequences of assigning sapient tasks to non-sapient devices such as computers. Stupidity, especially by people in positions of power, is indeed a major threat in the world; but as a practical matter, much stupidity comes down to not thinking rationally, thus failing to tap the potential of our own sapience. Technological creations are by no means the only thing discouraging us from rational thought; but even in (for example) the case of religious "blind faith", technological creations can make things worse.
To be clear, when I say "collective wisdom", I don't just mean addressing externals like global climate change; I also mean addressing us. One of our technological creations is a global economic infrastructure that shapes most collective decisions about how the world is to run ("money makes the world go 'round"). We have some degree of control over how that infrastructure works, but limited control and also limited understanding of it; at some point I hope to blog about how that infrastructure does and can work; but the salient point for the current post is, if we want to survive as a species, we would do well to understand what human beings contribute to the global infrastructure. Solving the global economic conundrum is clearly beyond the scope of this post, but it seems that this post is a preliminary thereto.
I've mentioned before on this blog the contrast between sapience and non-sapience. Here I mean to explore the contrast, and interplay, between them more closely. Notably, populations of sapient beings have group dynamics fundamentally different from — and, seemingly, far more efficacious from an evolutionary standpoint than — the group dynamics of non-sapient constructs.
Not only am I unconvinced that modern science can create sapience, I don't think we can even measure it.
Contents

Chess
Memetics
The sorcerer's apprentice
Lies, damned lies, and statistics
Pro-sapient tech
Storytelling and social upheaval

Chess
We seem to have talked ourselves into an inferiority complex. Broadly, I see three major trends contributing to this.
For one thing, advocates of science since Darwin, in attempting to articulate for a popular audience the profound implications of Darwinian theory, have emphasized the power of "blind" evolution, and in doing so they've tended to describe it in decision-making terms, rather as if it were thinking. Evolution thinks about the ways it changes species over time in the same sense that weather thinks about eroding a mountain, which is to say, not at all. Religious thinkers have tended to ascribe some divine specialness to human beings, and even scientific thinkers have shown a tendency, until relatively recently, to portray evolution as culminating in humanity; but in favoring objective observation over mysticism, science advocates have been pushed (even if despite themselves) into downplaying human specialness. Moreover, science advocates in emphasizing evolution have also played into a strong and ancient religious tradition that views parts/aspects of nature, and Nature herself, as sapient (cf. my past remarks on oral society).
Meanwhile, in the capitalist structure of the world we've created, people are strongly motivated to devise ways to do things with technology, and strongly motivated to make strong claims about what they can do with it. There is no obvious capitalist motive for them to suggest technology might be inferior to people for some purposes, let alone for them to actually go out and look for advantages of not using technology for some things. Certainly our technology can do things with algorithms and vast quantities of data that clearly could not be done by an unaided human mind. So we've accumulated both evidence and claims for the power of technology, and neither for the power of the human mind.
The third major trend I see is more insidious. Following the scientific methods of objectivity highly recommended by their success in studying the natural world, we tried to objectively measure our intelligence; it seemed like a good idea at the time. And how do you objectively measure it? The means that comes to mind is to identify a standard, well-defined, structured task that requires intelligence (in some sense of the word), and test how well we do that task. It's just a matter of finding the right task to test for... right? No, it's not. The reason is appallingly simple. If a task really is well-defined and structured, we can in principle build technology to do it. It's when the task isn't well-defined and structured that a sapient mind is wanted. For quite a while this wasn't a problem. Alan Turing proposed a test for whether a computer could "think" that it seemed no computer would be passing any time soon; computers were nowhere near image recognition; computers were hilariously bad at natural-language translation; computers couldn't play chess on the level of human masters.
To be brutally honest, automated natural-language translation is still awful. That task is defined by the way the human mind works — which might sound dismissive if you infer mere eccentricities of human thinking, but becomes quite profound if you take "the way the human mind works" to mean "sapience". The most obvious way computers can do automatic translation well is if we train people to constrain their thoughts to patterns that computers don't have a problem with; which seemingly amounts to training people to avoid sapient thought. (Training people to avoid sapient thought is, historically, characteristic of demagogues.) Image processing is still a tough nut to crack, though we're making progress. But chess has certainly been technologized. It figures that would be the first-technologized of those tasks I've mentioned because it's the most well-defined and structured of them. When it happened, I didn't take it as a sign that computers were becoming sapient, but rather a demonstration that chess doesn't strictly require whatever-it-is that distinguishes sapience. I wasn't impressed by Go, either. I wondered about computer Jeopardy!; but on reflection, that too is a highly structured problem, with no more penalty for a completely nonsensical wrong answer than for a plausible wrong one. I'm not suggesting these aren't all impressive technological achievements; I'm suggesting the very objectivity of these measures hides the missing element in them — understanding.
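To make the point concrete, here is a minimal sketch (a toy in Python; the game and the function names are my own choices for illustration) of blind game-tree search playing the matchstick game Nim perfectly. Nothing in it understands anything; it simply exhausts a well-defined, structured state space, which is exactly what lets such tasks be technologized.

```python
# Toy illustration: perfect play at Nim (take 1-3 sticks; whoever takes the
# last stick wins) by blind game-tree search. The search "understands"
# nothing; it merely exhausts a well-defined, structured state space.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(sticks):
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if sticks == 0:
        return -1          # the previous player took the last stick and won
    # Try every legal move; a move is good if it leaves the opponent losing.
    return max(-best_outcome(sticks - take)
               for take in (1, 2, 3) if take <= sticks)

def best_move(sticks):
    """Pick any move that forces a win, else an arbitrary legal move."""
    for take in (1, 2, 3):
        if take <= sticks and best_outcome(sticks - take) == -1:
            return take
    return 1

if __name__ == "__main__":
    for n in range(1, 10):
        print(n, "sticks:", "win" if best_outcome(n) == 1 else "lose",
              "- take", best_move(n))
```

Roughly the same shape of exhaustive or statistically guided search, scaled up enormously and fitted with evaluation machinery, is what lies behind chess and Go engines; the scale changes, while the absence of understanding does not.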
Recently in a discussion I read, someone described modern advances in AI by saying computers are getting 'better and better at understanding the world' (or nearly those words), and I thought, understanding is just what they aren't doing. It seems to me the technology is doing what it's always done — getting better and better at solving classes of problems without understanding them. The idea that the technology understands anything at all seems to me to be an extraordinary claim, therefore requiring extraordinary proof which I do not see forthcoming since, as remarked, we expect to be unable to test it by means of the most obvious sort of experiment (a structured aptitude test). If someone wants to contend that the opposite claim I'm making is also extraordinary — the claim that we understand in a sense the technology does not — I'll tentatively allow that resolving the question in either direction may require extraordinary proof; but I maintain there are things we need to do in case I'm right.
Somebody, I maintain, has to bring a big-picture perspective to bear. To understand, in order to choose the goals of what our technology is set to do, in order to choose the structural paradigm for the problem, in order to judge when the technology is actually solving the problem and when the situation falls outside the paradigm. In order to improvise what to do when the situation does fall outside the paradigm. That somebody has to be sapient.
For those skeptics who may wonder (keeping in mind I'm all for skepticism, myself) whether there is an unfalsifiable claim lurking here somewhere, note that we are not universally prohibited from observing the gap between sapience and non-sapience. The difficulty is with one means of observation: a very large and important class of experiments are predictably incapable of measuring, or even detecting, the gap. The reason this does not imply unfalsifiability is that scientific inquiry isn't limited to that particular class of experiments, large and important though the class is; the range of scientific inquiry doesn't have specific formally-defined boundaries — because it's an activity of sapient minds.
The gap is at least suggested by the aforementioned difficulty of automatic translation. What's missing in automatic translation is understanding: by its nature automatic translation treats texts for translation as strings to be manipulated, rather than indications about the reality in which their author is embedded. Whatever is missed by automatic translation because it is manipulating strings without thinking about their meaning, that is a manifestation of the sapience/non-sapience gap. Presumably, with enough work one could continue to improve automatic translators; any particular failure of translation can always be fixed, just as any standardized test can be technologized. How small the automatic-translation shortfall can be made in practice, remains to be seen; but the shape of the shortfall should always be that of an automated system doing a technical manipulation that reveals absence of comprehension.
Consider fly-by-wire airplanes, which I mentioned in a previous post. What happens when a fly-by-wire airplane encounters a situation outside the parameters of the fly-by-wire system? It turns control over to the human pilots. Who often don't realize, for a few critical moments (if those moments weren't critical, we wouldn't be talking about them, and quite likely the fly-by-wire system would not have bailed) that the fly-by-wire system has stopped flying the plane for them; and they have to orient themselves to the situation; and they've mostly been getting practice at letting the fly-by-wire system do things for them. And then when this stacked-deck of a situation leads to a horrible outcome, there are strong psychological, political, and economic incentives to conclude that it was human error; after all, the humans were in control at the denouement, right? It seems pretty clear to me that, of the possible ways that one could try to divvy up tasks between technology and humans, the model currently used by fly-by-wire airplanes (and now, one suspects, drive-by-wire cars) is a poor model, dividing tasks for the convenience of whoever is providing the automation rather than for the synergism of the human/non-human ensemble. It doesn't look as if we know how to design such systems for synergism of the ensemble; and it's not immediately clear that there's any economic incentive for us to figure it out. Occasionally, of course, something that seems unprofitable has economic potential that's only waiting for somebody to figure out how to exploit it; if there is such potential here, we may need first to understand the information-processing characteristics of sapience better. Meanwhile, I suggest, there is a massive penalty, on a civilization-wide scale (which is outside the province of ordinary economics), if we fail to figure out how to design our technology to nurture sapience. It should be possible to nurture sapience without first knowing how it works, or even exactly what it does — though figuring out how to nurture it may bring us closer to those other things.
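As a sketch of the design choice at issue (hypothetical code, not any real avionics or automotive system), compare a hand-over policy that silently drops control with one that alarms and holds a conservative state until the human acknowledges. Which of these gets built is a decision about the human/machine ensemble, not about the automation alone.

```python
# Toy sketch (no real avionics API): two policies for handing control back
# to a human when automation leaves its operating envelope.
ENVELOPE = (-10.0, 10.0)   # hypothetical range the automation can handle

def in_envelope(reading):
    lo, hi = ENVELOPE
    return lo <= reading <= hi

def silent_handover(readings):
    """Automation quietly stops acting the moment it is out of its depth."""
    for t, r in enumerate(readings):
        if not in_envelope(r):
            return {"handover_at": t, "alerted": False}
    return {"handover_at": None, "alerted": False}

def acknowledged_handover(readings, ack_delay=3):
    """Automation raises an alarm, keeps a conservative hold, and releases
    control only after the human has had time to acknowledge."""
    for t, r in enumerate(readings):
        if not in_envelope(r):
            return {"alarm_at": t, "hold_until": t + ack_delay, "alerted": True}
    return {"alarm_at": None, "hold_until": None, "alerted": True}

if __name__ == "__main__":
    trace = [0.2, 1.5, 3.0, 14.2, 15.0]   # drifts out of envelope at index 3
    print(silent_handover(trace))
    print(acknowledged_handover(trace))
```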
I'll remark other facets of the inferiority-complex effect, as they arise in discussion, below.
Memetics

By the time I'm writing this post, I've moved further along a path of thought I mentioned in my first contentful post on this blog. I wrote then that in Dawkins's original description of memetics, he made an understandable mistake by saying that memetic life was "still in its infancy, still drifting clumsily about in its primeval soup". That much I'm quite satisfied with: it was a mistake — memetic evolution has apparently proceeded about three to five orders of magnitude faster than genetic evolution, and has been well beyond primeval soup for millennia, perhaps tens of millennia — and it was an understandable mistake, at that. I have more to say now, though, about the origins of the mistake. I wrote that memetic organisms are hard to recognize because you can't observe them directly, as their primary form is abstract rather than physical; and that's true as far as it goes; but there's also something deeper going on. Dawkins is an evolutionary biologist, and in describing necessary conditions under which replication gives rise to evolution, he assumed it would always require the sort of conditions that genetic replication needs to produce evolution. In particular, he appears to have assumed there must be a mechanism that copies a basic representation of information with fantastically high fidelity.
Now, this is a tricky point. I'm okay with the idea that extreme-fidelity basic replication is necessary for genetic evolution. It seems logically cogent that something would have to be replicated with extreme fidelity to support evolution-in-general (such as memetic evolution). But I see no reason this extreme-fidelity replication would have to occur in the basic representation. There's no apparent reason we must be able to pin down at all just what is being replicated with extreme fidelity, nor must we be able to identify a mechanism for extreme-fidelity copying. If we stipulate that evolution implies something is being extreme-fidelity-copied, and we see that evolution is taking place, we can infer that some extreme-fidelity copying is taking place; but evolution works by exploiting what happens with indifference to why it happens. We might find that underlying material is being copied wildly unfaithfully, yet somehow, beyond our ability to follow the connections, this copying preserves some inarticulable abstract property that leads to an observable evolutionary outcome. Evolution would exploit the abstract property with complete indifference to our inability to isolate it.
It appears that in the case of genetic evolution, we have identified a basic extreme-fidelity copying mechanism. In fact, apparently it even has an error-detection-and-correction mechanism built into it; which certainly seems solid confirmation that such extreme fidelity was direly needed for genetic evolution or such a sophisticated mechanism would never have developed. Yet there appears to be nothing remotely like that for memetic replication. If memetic evolution really had the same sort of dynamics as genetic evolution, we would indeed expect memetic life to be "still drifting clumsily about in its primeval soup"; it couldn't possibly do better than that until it had developed a super-high-fidelity low-level replicating mechanism.
Yet memetic evolution proceeds at, comparatively, break-neck pace, in spectacular defiance of the expectation. Therefore we may suppose that the dynamics of memetic evolution are altered by some factor profoundly different from genetic evolution.
I suggest the key altering factor of memetic evolution, overturning the dynamics of genetic evolution, is that the basic elements of the host medium — people, rather than chemicals — are sapient. What this implies is that, while memetic replication involves obviously-low-fidelity copying of explicitly represented information, the individual hosts are thinking about the content, processing it through the lens of their big-picture sapient perspective. Apparently, this can result in an information flow with abstract fixpoints — things that get copied with extreme fidelity — that can't be readily mapped onto the explicit representation (e.g., what is said/written). My sense of this situation is that if it is even useful to explicitly posit the existence of discrete "memes" in memetic evolution, it might yet be appropriate to treat them as unknown quantities rather than pouring effort into trying to identify them individually. It seems possible the wholesale discreteness assumption may be unhelpful as well — though ideas don't seem like a continuous fluid in the usual simple sense, either.
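Here is a toy model of that claim (entirely my own construction, not anything from memetics proper): each retelling below is generated fresh from an inferred gist, so no surface string is ever copied, yet the gist itself is reproduced generation after generation with perfect fidelity.

```python
# Toy model of low-fidelity surface copying that nonetheless preserves an
# abstract property (the "gist"). Vocabulary and mechanism are invented for
# illustration; no claim that real memes work this simply.
import random

VOCAB = {
    "danger": ["wolf", "bear", "fire", "flood", "cliff"],
    "food":   ["berries", "fish", "honey", "roots", "nuts"],
}
FILLER = ["yesterday", "over", "there", "big", "near", "the", "river"]

def tell(gist):
    """Produce a fresh wording of a gist; the words are never copied."""
    words = random.sample(VOCAB[gist], k=2) + random.sample(FILLER, k=3)
    random.shuffle(words)
    return " ".join(words)

def infer_gist(telling):
    """A hearer recovers the gist by seeing which vocabulary dominates."""
    counts = {g: sum(w in words for w in telling.split())
              for g, words in VOCAB.items()}
    return max(counts, key=counts.get)

if __name__ == "__main__":
    story = tell("danger")
    for generation in range(5):
        gist = infer_gist(story)          # crudely "understand" the telling...
        story = tell(gist)                # ...then re-express it from scratch
        print(f"gen {generation}: gist={gist!r} wording={story!r}")
```

The wordings share almost nothing from one generation to the next; what gets copied with extreme fidelity lives at a level the surface representation never pins down.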
This particular observation of the sapient/non-sapient gap is from an unusual angle. When trying to build an AI, we're likely to think in terms of what makes an individual entity sapient; likewise when defining sapience. The group dynamics of populations of sapients versus non-sapients probably won't (at a guess) help us in any direct way to build or measure sapience; but it does offer a striking view of the existence of a sapience/non-sapience gap. I've remarked before that groups of people get less sapient at scale; a population of sapiences is not itself sapient; but it appears that, when building a system, mixing in sapient components can produce systemic properties that aren't attainable with uniformly non-sapient components, thus attesting that the two kinds of components do have different properties.
This evolutionary property of networks of sapiences affords yet another opportunity to underestimate sapience itself. Seeing that populations of humans can accumulate tremendous knowledge over time — and recognizing that no individual can hope to achieve great feats of intellect without learning from, and interacting with, such a scholastic tradition — and given the various motives, discussed above, for downplaying human specialness — it may be tempting to suppose that sapience is not, after all, a property of individuals. However, cogito, ergo that's taking the idea of collective intelligence to an absurdity. The evolutionary property of memetics I've described is not merely a property of how the network is set up; if it were, genetic evolution ought to have struck on it at some point.
There are, broadly, three idealized models (at least three) of how a self-directing system can develop. There's "blind evolution", which explores alternatives by maintaining a large population with different individuals blundering down different paths simultaneously, and if the population is big enough, the variety amongst individuals is broad enough, and the viable paths are close enough to blunder into, enough individuals will succeed well enough that the population evolves rather than going extinct. This strategy isn't applicable to a single systemic decision, as with the now-topical issue of global climate change: there's no opportunity for different individuals to live in different global climates, so there's no opportunity for individuals who make better choices to survive better than individuals who make poorer choices. As a second model, there's a system directed by a sapience; the individual sapient mind who runs the show can plan, devising possible strategies and weighing their possible consequences before choosing. It is also subject to all the weaknesses and fallibilities of individuals — including plain old corruption (which, we're reminded, power causes). The third model is a large population of sapiences, evolving memetically — and that's different again. I don't pretend to fully grok the dynamics of that third model, and I think it's safe to say no-one else does either; we're all learning about it in real time as history unfolds, struggling with different ways of arranging societies (governmentally, economically, what have you).
A key weakness of the third model is that it only applies under fragile conditions; in particular, the conditions may be deliberately disrupted, at least in the short term, keeping in mind we're dealing with a population of sapiences, each potentially deliberate. When systemic bias, or a small controlling population, interferes with the homogeneity of the sapient population, the model breaks down and the system's self-direction loses — at least, partly loses — its memetic dynamics. This is a vulnerability shared by the systems of democracy and capitalism.
The sorcerer's apprentice

There are, of course, more-than-adequate ways for us to get into trouble by succeeding in giving our technology sapience. A particularly straightforward one is that we give it sapience and it decides it doesn't want to do what we want it to. In science fiction this scenario may be accompanied by a premise that the created sapience is smarter than we are — although, looking around at history, there seems a dearth of evidence that smart people end up running the show. Even if they're only about as smart, and stupid, as we are, an influx of artificial sapiences into the general pool of sapience in civilization is likely to throw off the balance of the pool as a whole — either deliberately or, more likely, inadvertently. One has only to ask whether sapient AIs should have the right to vote to see a tangle of moral, ethical, and practical problems cascading forth (with vote rigging on one side, slavery on the other; not forgetting that, spreading opaque fog over the whole, we have no clue how to test for sapience). However, I see no particular reason to think we're close to giving our technology sapience; I have doubts we're even trying to do so, since I doubt we know where that target actually is, making it impossible for us to aim for it (though mistaking something else for the target is another opportunity for trouble). Even if we could eventually get ourselves into trouble by giving our technology sapience, we might not last long enough to do so because we get ourselves into trouble sooner by the non-sapient-technology route. So, back to non-sapience.
A major theme in non-sapient information processing is algorithms: rigidly specified instructions for how to proceed. An archetypal cautionary tale about what goes wrong with algorithms is The Sorcerer's Apprentice, an illustration (amongst other possible interpretations) of what happens when a rigid formula is followed without sapient oversight of when the formula itself ceases to be appropriate due to big-picture perspective. One might argue that this characteristic rigidity is an inherently non-sapient limitation of algorithms.
It's not an accident that error-handling is among the great unresolved mysteries of programming-language design — algorithms being neither well-suited to determine when things have gone wrong, nor well-suited to cope with the mess when they do.
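A toy caricature of both points, in Python with invented numbers: the rule "if the cauldron isn't at its target, fetch another bucket" is followed rigidly, and the only error handling available at that level is to give up, since deciding what ought to happen instead requires the big picture the algorithm doesn't have.

```python
# Toy caricature of the sorcerer's apprentice: a rule followed rigidly,
# with no notion of when the rule itself has stopped making sense.
def apprentice(target, cauldron_capacity=50, bucket=10, max_trips=1000):
    """Follow the rule 'if the cauldron isn't at target, fetch more water'.
    The rule never asks whether the target makes sense for this cauldron."""
    cauldron, floor = 0, 0
    for _ in range(max_trips):                # guard so the toy terminates
        if cauldron >= target:
            return {"cauldron": cauldron, "spilled_on_floor": floor}
        cauldron += bucket
        if cauldron > cauldron_capacity:      # overflow goes on the floor
            floor += cauldron - cauldron_capacity
            cauldron = cauldron_capacity
    # The only "error handling" available at this level is to give up;
    # deciding what *should* happen instead needs the big picture.
    raise RuntimeError(f"gave up after {max_trips} trips; "
                       f"the floor is under {floor} units of water")

if __name__ == "__main__":
    print(apprentice(target=40))        # fine: target within capacity
    try:
        print(apprentice(target=80))    # the rule floods the workshop instead
    except RuntimeError as oops:
        print("oops:", oops)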
Algorithmic rigidity is what makes bureaucracy something to complain about — blind adherence to rules even when they don't make sense in the context where they occur, evoking the metaphor of being tied up in red tape. The evident dehumanizing effect of bureaucracy is that it eliminates the discretion to take advantage of understanding arbitrary aspects of the big picture; it seems that to afford full scope to sapience, maximizing its potential, one wants to provide arbitrary flexibility — freedom — avoiding limitation to discrete choices.
A bureaucratic system can give lip service to "giving people more choices" by adding on additional rules, but this is not a route to the sort of innate freedom that empowers the potential of sapience. To the contrary: sapient minds are ultimately less able to cope with vast networks of complicated rules than technological creations such as computers — or corporations, or governments — are, and consequently, institutions such as corporations and governments naturally evolve vast networks of complicated rules as a strategy for asserting control over sapiences. There are a variety of ways to describe this. One might say that an institution, because it is a non-sapient entity in a sea of sapient minds, is more likely to survive if it has some property that limits sapient minds so they're less likely to overwhelm it. A more cynical way to say the same thing is that the institution survives better if it finds a way to prevent people from thinking. A stereotypical liberal conspiracy theorist might say "they" strangle "us" with complicated rules to keep us down — which, if you think about it, is yet another way of saying the same thing (other than the usual incautious assumption of conspiracy theorists, that the behavior must be a deliberate plot by individual sapiences rather than an evolved survival strategy of memetic organisms). Some people are far better at handling complexity than others, but even the greatest of our complexity tolerances are trivial compared to those of our non-sapient creations. Part of my point here is that I don't think that's somehow a "flaw" in us, but rather part of the inherent operational characteristics of sapience that shape the way it ought to be most effectively applied.
Lies, damned lies, and statistics

A second major theme in non-sapient information processing is "big data". Where algorithms contrast with sapience in logical strategy, big data contrasts in sheer volume of raw data.
These two dimensions — logical strategy and data scale — are evidently related. Algorithms can be applied directly to arbitrarily-large-scale data; sapience cannot, which is why big data is the province of non-sapient technology. I suggested in an earlier post that the device of sapience only works at a certain range of scales, and that the sizes of both our short- and our long-term memories may be, to some extent, essential consequences of sapience rather than accidental consequences of evolution. Not everyone tops out at the same scale of raw data, of course; some people can take in a lot more, or a lot less, than others before they need to impose some structure on it. Interestingly, this is pretty clearly not some sort of "magnitude" of sapience, as there have been acknowledged geniuses, of different styles, toward both ends of the spectrum; examples that come to mind are Leonhard Euler (with a spectacular memory) and Albert Einstein (notoriously absent-minded).
That we sapiences can "make sense" of raw data, imposing structure on it and thereby coping with masses of data far beyond our ability to handle in raw form, would seem to be part of the essence of what it means to be sapient. The attendant limitation on raw data processing would then be a technical property of the Platonic realm in broadly the same sense as fundamental constants like π, e, etc., and distant kin to such properties of the physical realm as the conditions necessary for nuclear fusion.
Sometimes, we can make sense of vast data sets, many orders of magnitude beyond our native capacity, by leveraging technological capacity to process more-or-less-arbitrarily large volumes of raw data and boil it down algorithmically, to a scale/form within our scope. It should be clear that the success of the enterprise depends on how insightfully we direct the technology on how to boil down the data; essentially, we have to intuit what sorts of analysis will give us the right sorts of information to gain insight into the salient features of the data. We're then at the short end of a data-mining lever; the bigger the data mine, the trickier it is to reason out how to direct the technological part of the operation. It's also possible to deliberately choose an analysis that will give us the answer we want, rather than helping us learn about reality. And thus are born the twin phenomena of misuse of statistics and abuse of statistics.
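A tiny illustration of how much rides on choosing the boiling-down (standard-library Python, invented data): any permutation of a data set has exactly the same mean and variance, so those summaries cannot distinguish a steady trend from shuffled noise, while a summary chosen with the question in mind separates them at once.

```python
# Tiny illustration: a poorly chosen summary hides exactly the feature that
# matters. Any permutation of a data set has the same mean and variance,
# so these two summaries cannot distinguish a steady trend from noise.
import random
from statistics import mean, pvariance

trend = list(range(1, 21))                      # a plainly rising series
shuffled = random.sample(trend, k=len(trend))   # same values, order destroyed

for name, series in [("trend", trend), ("shuffled", shuffled)]:
    print(f"{name:9s} mean={mean(series):5.2f} variance={pvariance(series):6.2f}")

# A summary chosen with the question in mind -- here, a crude measure of
# monotone ordering -- separates the two series immediately.
def rising_fraction(series):
    pairs = list(zip(series, series[1:]))
    return sum(b > a for a, b in pairs) / len(pairs)

print("rising fraction:", rising_fraction(trend), rising_fraction(shuffled))
```

The insight is in picking the second summary rather than the first; the arithmetic itself is indifferent.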
There may be a temptation to apply technology to the problem of deciding how to mine the data. That —it should be clear on reflection— is an illusion. The technology is just as devoid of sapient insight when we apply it to the meta-analysis as when we applied it to the analysis directly; and the potential for miscues is yet larger, since technology working at the meta-level is in a position to make more biasing errors through lack of judgement.
One might be tempted to think of conceptualization, the process by which we impose concepts on raw data to structure and thus make sense of it, as "both cause and cure" of our limited capacity to process raw data; but this would, in my opinion, be a mistake of orientation. Conceptualization — which seems to be the basic functional manifestation of sapience — may cause the limited-capacity problem, and it may also be the "cure", i.e., the means by which we cope with the problem, but neither of those is the point of conceptualization/sapience. As discussed, sapience differs from non-sapient information processing in ways that don't obviously fit on any sort of spectrum. Consider: logically, our inability to directly grok big data can't be a "failure" unless one makes a value judgement that that particular ability is something we should be able to do — and making a value judgement is something that can only be meaningfully ascribed to a sapience.
It's also rather common to imagine the possibility of a sapience of a different order, capable of processing vast (perhaps even arbitrarily vast) quantities of data. This can result from —as noted earlier— portraying evolution as if it were a sapient process. It may result from an extrapolation based on the existence of some people with higher raw-data tolerances than others; but this treats "intelligence" as an ordering correlated with raw data processing capacity — which, as I've noted above, it is not. Human sapiences toward the upper end of raw data processing capacity don't appear to be "more sapient", rather it's more like they're striking a different balance of parameters. Different strengths and weaknesses occur at different mixtures of the parameters, and this seems to me characteristic of an effect (sapience) that can only occur under a limited range of conditions, with the effect breaking down in different ways depending on which boundary of the range is crossed. Alternatively, it has sometimes been suggested there should be some sort of fundamentally different kind of mind, working on different principles than our own; but once one no longer expects this supposed effect to have anything to do with sapience as it occurs in humans, I see no basis on which to conjecture the supposed effect at all.
There's also yet another opportunity here for us to talk ourselves into an inferiority complex. We tend to break down a holistic situation into components for understanding, and then when things fail we may be inclined to ascribe failure to a particular component, rather than to the way the components fit together or to the system as a whole. So when a human/technology ensemble fails, we're that much more likely to blame the human component.
Pro-sapient tech

How can we design technology to nurture sapience rather than stifle it? Though I don't claim to grasp the full scope of this formidable challenge, I have some suggestions that should help.
On the stifling side, the two big principles I've discussed are algorithms and scale; algorithms eliminate the arbitrary flexibility that gives sapience room to function, while vast masses of data overwhelm sapiences (technology handles arbitrarily large masses of data smoothly, not trying to grok big-picture implications that presumably grow at least quadratically with scale). Evidently sapience needs full-spectrum access to the data (it can't react to what it doesn't know), needs to have hands-on experience from which to learn, needs to be unfettered in its flexibility to act on what it sees.
Tedium should be avoided. Aspects of this are likely well-known in some circles, perhaps know-how related to (human) assembly-line work; from my own experience, tedium can trip up sapience in a couple of ways that blur into each other. Repeating actions over and over can lead to inattention, so that when a case comes along that ought to be treated differently, the sapient operator just does the same thing yet again, either failing to notice it at all, or "catching it too late" (i.e., becoming aware of the anomaly after having already committed to processing it in the usual way). On the other hand, paying full attention to an endless series of simple cases, even if they offer variations maintaining novelty, can exhaust the sapient operator's decision-making capacity; I, for one, find that making lots of little decisions drains me for a time, as if I had a reservoir of choice that, when depleted, refills at a limited natural rate. (I somewhat recall a theory ascribed to Barack Obama that a person can only make one or two big decisions per day; same principle.)
Another important principle to keep in mind is that sapient minds need experience. Even "deep learning" AIs need training, but with sapiences the need is deeper and wider; the point is not merely to "train" them to do a particular task, important though that is, but to give them accumulated broad experience in the whole unbounded context surrounding whatever particular tasks are involved. Teaching a student to think is an educator's highest aspiration. An expert sapient practitioner of any trade uses "tricks of the trade" that may be entirely outside the box. A typical metaphor for extreme forms of such applied sapient measures is 'chewing gum and baling wire'. One of the subtle traps of over-reliance on technology is that if sapiences aren't getting plenty of broad, wide hands-on experience, when situations outside known parameters arise there will be no-one clueful to deal with them — even if the infrastructure has sufficiently broad human-accessible flexibility to provide scope for out-of-the-box sapient measures. (An old joke describes an expert being called in to fix some sort of complex system involving pipes under pressure —recently perhaps a nuclear power plant, some older versions involve a steamboat— who looks around, taps a valve somewhere, and everything starts working again; the expert charges a huge amount of money —say a million dollars, though the figure has to ratchet up over time due to inflation— and explains, when challenged on the amount, that one dollar is for tapping the valve, and the rest is for knowing where to tap.)
This presents an economic/social challenge. The need to provide humans with hands-on experience is a long-term investment in fundamental robustness. For the same reason that standardized tests ultimately cannot measure sapience, short-term performance on any sufficiently well-structured task can be improved by applying technology to it, which can lead to a search for ways to make tasks more well-structured — with a completely predictable loss of ability to deal with... the unpredictable. I touched on an instance of this phenomenon when describing, in an earlier post, the inherent robustness of a traffic system made up of human drivers.
Suppression of sapience also takes much more sweeping, long-term systemic forms. A particular case that made a deep impression on me: in studying the history of my home town I was fascinated that the earliest European landowners of the area received land grants from the king, several generations before Massachusetts residents rose up in rebellion against English rule (causing a considerable ruckus, which you may have heard about). Those land grants were subject to proving the land, which is to say, demonstrating an ability to develop it. Think about that. We criticize various parties —developers, big corporations, whatever— for exploiting the environment, but those land grants, some four hundred years ago under a different system of government, required exploiting the land, otherwise the land would be taken away and given to someone else. Just how profoundly is that exploitation woven into the fabric of Western civilization? It appears to be quite beyond distinctions like monarchy versus democracy, capitalism versus socialism. We've got hold of the tail of a vast beast that hasn't even turned 'round to where we can see the thing as a whole; it's far, far beyond anything I can tackle in this post, except to note pointedly that we must be aware of it, and be thinking about it.
A much simpler, but also pernicious, source of long-term systemic bias is planning to add support for creativity "later". Criticism of this practice could be drawn to quite reasonable tactical concerns like whether anyone will really ever get around to attempting the addition, and whether a successful addition would fail to take hold because it would come too late to overcome previously established patterns of behavior; the key criticism I recommend, though, is that strategically, creativity is itself systemic and needs to be inherent in the design from the start. Anything tacked on as an afterthought would be necessarily inferior.
To give proper scope for sapience, its input — the information presented to the sapient operator in a technological interface — should be high-bandwidth from an unbounded well of ordered complexity. There has to be underlying rhyme-and-reason to what is presented, otherwise information overload is likely, but it mustn't be stoppered down to the sort of simple order that lends itself to formal, aka technological, treatment, which would defeat the purpose of bringing a sapience to bear on it. Take English text as archetypical: built up mostly from 26 letters and a few punctuation marks and whitespace, yet as one scales up, any formal/technological grasp on its complexity starts to fuzz until ultimately it gets entirely outside what a non-sapience can handle. Technology sinks in the swamp of natural language, while to a sapience natural language comes... well, naturally. This sort of emergent formal intractability seems a characteristic domain of sapience. There is apparently some range of variation in the sorts of rhyme and reason involved; for my part, I favor a clean simple set of orthogonal primitives, while another sort of mind favors a less tidy primitive set (more-or-less the design difference between Scheme and Common Lisp).
When filtering input to avoid simply overwhelming the sapient user, whitelisting is inherently more dangerous than blacklisting. That is, an automatic filter to admit information makes an algorithmic judgement about what may be important, which judgement is properly the purview of sapience, to assess unbounded context; whereas a filter to omit completely predictable information, though it certainly can go wrong, has a better chance of working since it isn't trying to make a call about which information is extraneous, only about which information is completely predictable (if properly designed; censorship being one of the ways for it to go horribly wrong).
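A minimal sketch of the asymmetry, with hypothetical predicates standing in for real filters: the admit-filter has to guess what matters, which is precisely the judgement that belongs to the sapient operator, while the omit-filter only has to recognize what is completely predictable, so anything it is unsure about still reaches the operator.

```python
# Minimal sketch of the asymmetry between admitting and omitting.
# The predicates below are hypothetical stand-ins, not a real system.
def looks_important(item):
    """Whitelist's guess at importance -- exactly the judgement that ought
    to belong to the sapient operator."""
    return item.get("severity", 0) >= 5

def is_completely_predictable(item):
    """Blacklist test: only drop what a routine model already accounts for."""
    return item.get("kind") == "heartbeat" and item.get("severity", 0) == 0

def whitelist_filter(items):
    return [i for i in items if looks_important(i)]

def blacklist_filter(items):
    return [i for i in items if not is_completely_predictable(i)]

if __name__ == "__main__":
    feed = [
        {"kind": "heartbeat", "severity": 0},
        {"kind": "sensor-drift", "severity": 2},   # subtle, but maybe crucial
        {"kind": "fire", "severity": 9},
    ]
    print("whitelist shows:", whitelist_filter(feed))   # drops the subtle case
    print("blacklist shows:", blacklist_filter(feed))   # keeps anything unsure
```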
On the output side —i.e., what the sapient operator is empowered to do— a key aspect is effective ability to step outside the framework. Sets of discrete top-level choices are likely to stifle sapient creativity rather than enhance it (not to be confused with a set of building blocks, which would include the aforementioned letters-plus-punctuation). While there is obvious advantage in facilities to support common types of actions, those facilities need to blend smoothly with robust handling of general cases, to produce graceful degradation when stepping off the beaten path. Handling some approaches more easily than others might easily turn into systemic bias against the others — a highly context-dependent pitfall, on which the reason for less-supported behavior seems to be the pivotal factor. (Consider the role of motive-for-deviation in the subjective balance between pestering the operator about an unconventional choice until they give it up, versus allowing one anomaly to needlessly propagate unchecked complications.)
Storytelling and social upheaval

A final thought, grounding this view of individual sapiences back into global systemic threats (where I started, at the top of the post).
Have you noticed it's really hard to adapt a really good book into a really good movie? So it seems to me. When top-flight literature translates successfully to a top-flight movie, the literature is more likely to have been a short story. A whole book is more likely to translate into a miniseries, or a set of movies. I was particularly interested by the Harry Potter movies, which I found suffered from their attempt to fit far too much into each single movie; the Harry Potter books were mostly quite long, and were notable for their rich detail, and that couldn't possibly be captured by one movie per book without reducing the richness to something telegraphic. The books were classics, for the ages; the movies weren't actually bad, but they weren't in the same rarefied league as the books. (I've wondered if one could turn the Harry Potter book set into a television series, with one season per book.)
The trouble in converting literature to cinematography is bandwidth. From a technical standpoint this is counter-intuitive: text takes vastly less digital storage than video; but how much of that data can be used as effective signal depends on what kind of signal is intended. I maintain that as a storytelling medium, text is extremely high-bandwidth while video is a severe bottleneck, stunningly inefficient at getting the relevant ideas across if, indeed, they can be expressed at all. In essence, I suggest, storytelling is what language has evolved for. A picture may be worth a thousand words, but (a) it depends on which words and which picture, (b) it's apparently more like 84 words, and (c) it doesn't follow that a thousand pictures are worth a thousand times as many words.
In a post here some time back, I theorized that human language has evolved in three major stages. The current stage in the developed world is literacy, in which society embraces written language as a foundation for acquiring knowledge. The preceding stage was orality, where oral sagas are the foundation for acquiring knowledge, according to the theory propounded by Eric Havelock in his magnum opus Preface to Plato, where he proposes that Plato lived on the cusp of the transition of ancient Greek society from orality to literacy. My extrapolation from Havelock's theory says that before the orality stage of language was another stage I've called verbality, which I speculate may have more-or-less resembled the peculiar Amazonian language Pirahã (documented by Daniel Everett in Don't Sleep, There Are Snakes). Pirahã has a variety of strange features, but what particularly attracted my attention was that, adding up these features, Pirahã apparently does not and cannot support an oral culture; Pirahã culture has no history, art, or storytelling (does not), and the language has no temporal vocabulary, tense, or number system (cannot).
'No storytelling' is where this relates back to books-versus-movies. The nature of the transition from verbality to orality is unclear to me; but I (now) conjecture that once the transition to orality occurs, there would then necessarily be a long period of linguistic evolution during which society would slowly figure out how to tell stories. At some point in this development, writing would arise and after a while precipitate the transition to literacy. But the written form of language, in order to support the transition to literate society, would particularly have to be ideally suited to storytelling.
Soon after the inception of email as a communication medium came the development of emoticons: symbols absent from traditional written storytelling but evidently needed to fill in for the contextual "body language" clues ordinarily available in face-to-face social interaction. Demonstrating that social interaction itself is not storytelling as such, for which written language was already well suited without emoticons. One might conjecture that video, while lower-storytelling-bandwidth than text, could have higher effective social-interaction-bandwidth than text. And on the other side of the equation, emoticons also demonstrate that the new electronic medium was already being used for non-storytelling social interaction.
For another glimpse into the character of the electronic medium, contrast the experience of browsing Wikibooks — an online library of some thousands of open-access textbooks — against the pre-Internet experience of browsing in an academic library.
On Wikibooks, perhaps you enter through the main page, which offers you a search box and links to some top-level subject pages like Computing, Engineering, Humanities, and such. Each of those top-level subject pages provides an array of subsections, and each subsection will list all its own books as well as listing its own sub-subsections, and so on. The ubiquitous search box will do a string search, listing first pages that mention your chosen search terms in the page title, then pages that contain the terms somewhere in the content of the page. Look at a particular page of a book, and you'll see the text, perhaps navigation links such as next/previous page, parent page, subpages; there might be a navigation box on the right side of the page that shows the top-level table of contents of the book.
At the pre-Internet library, typically, you enter past the circulation desk, where a librarian is seated. Past that, you come to the card catalog; hundreds of alphabetically labeled deep drawers of three-by-five index cards, each card cumulatively customized by successive librarians over decades, perhaps over more than a century if this is a long-established library. (Side insight, btw: that card catalog is, in its essence, a collaborative hypertext document very like a wiki.) You may spend some time browsing through the catalog, flipping through the cards in various drawers, jotting down notes and using them to move from one drawer to another — a slower process than if you could move instantly from one to another by clicking an electronic link, but also a qualitatively richer experience. At every moment, surrounding context bears on your awareness; other index cards near the one you're looking at, other drawers; and beyond that, strange though it now seems that this is worth saying, you are in a room, literally immersed in context. Furniture, lights, perhaps a cork bulletin board with some notices on it; posters, signs, or notices on the walls, sometimes even thematic displays; miscellany (is that a potted plant over there?); likely some other people, quietly going about their own business. The librarian you passed at the desk probably had some of their own stuff there, may have been reading a book. Context. Having taken notes on what you found in the card catalog and formulated a plan, you move on to the stacks; long rows of closely spaced bookcases, carefully labeled according to some indexing system referenced by the cards and jotted down in your notes, with perhaps additional notices on some of the cases — you're in another room — you come to the shelves, and may well browse through other books near what your notes direct you to, which you can hardly help noticing (not like an electronic system where you generally have to go out of your way to conjure up whatever context the system may be able to provide). You select the particular book you want, and perhaps take it to a reading desk (or just plunk down on the carpet right there, or a nearby footstool, to read); and as you're looking at a physical book, you may well flip through the pages as you go, yet another inherently context-intensive browsing technique made possible by the physicality of the situation.
What makes this whole pre-Internet experience profoundly different from Wikibooks — and I say this as a great enthusiast of Wikibooks — is the rich, deep, pervasive context. And context is where this dovetails back into the main theme of this post, recognizing context as the special province of sapience.
When the thriving memetic ecosystem of oral culture was introduced to the medium of written language, it did profoundly change things, producing literate culture, and new taxonomic classes of memetic organisms that could not have thrived in oral society (I'm thinking especially of scientific organisms); but despite these profound changes, the medium still thoroughly supported language, and context-intensive social interactions mostly remained in the realm of face-to-face encounters. So the memetic ecosystem continued to thrive.
Memetic ecosystem is where all of this links back to the earlier discussion of populations of sapiences.
That discussion noted that system self-direction through a population of sapiences can break down if the system is thrown out of balance. And while the memetic ecosystem handily survived the transition to literacy, it's an open question what will happen with the transition to the Internet medium. This time, the new medium is highly context-resistant while it aggressively pulls in social interactions. With sapience centering on context aspects that are by default eliminated or drastically transformed in the transition, it seems the transition must have, somehow, an extreme impact on the way sapient minds develop. If there is indeed a healthy, stable form of society to be achieved on the far side of this transition, I don't think we should kid ourselves that we know what that will look like, but it's likely to be very different, in some way or other, from the sort of stable society that preceded it.
The obvious forecast is social upheaval. The new system doesn't know how to put itself together, or really even know for sure whether it can. The old system is pretty sure to push back. As I write this, I look at the political chaos in the United States —and elsewhere— and I see these forces at work.
And I think of the word singularity.
Comments

Sapience is poorly defined, but there are thematically recurring ingredients: intention, adaptation, big-picture view. These individual ingredients can be represented, e.g. with reinforcement learning and probabilistic models. And there aren't so many that we cannot exhaustively explore their composition.
If technology hasn't achieved sapience, it's either because it's missing an ingredient, or because the scale is insufficient. The latter is obviously true, which masks whether the former is a problem.
To clarify, by scale I do not refer to "big data". Rather to a "big model". Even the brain of a fruit-fly (25k neurons, 20M synapses) is considerably larger than most programs today. If we want a machine that can model and 'understand' humans, their environments, their languages, it will likely be on par with a human brain - 100B neurons, 200T connections.
But that problem can eventually be solved by hardware acceleration and Moore's law.
I'm much more interested in the question of missing ingredients.
If our programs exhibit intention, adaptation, and a big picture view (of the world, likely consequences, likely intentions of users, etc.), would this be sufficient for synthetic sapience? Even if not, would it be suitable to bridge the troubling 'gap' between sapience and technology?
I think we could only try it out. But that will need to wait until the technology can run fast enough.
I don't think we should be trying to technologically build sapience (quite separate from the question of whether we can). The practical purpose in technological sapience would be to create people who would be owned rather than requiring wages, i.e., a race of slaves; not that the scientists figuring out how to do it would have that motive, but that that is how it would be used. A second level of immorality would arise due to the problem of the constructed sapiences' sense of morality. Either their sense of morality could be readily programmed by their owners (though I'm inclined to suspect actual sapience would be too volatile for this), in which case their owners would program them to be willing slaves, or they would acquire their sense of morality with substantially as much difficulty as we do, in which case their owners would not go to the trouble but would apply some alternative means of controlling them. I see nothing good in any of these directions. Which is why, though I do not hold back in my speculations on how sapience works, I distinguish that question from, and am disinclined to apply myself to, how to build it.
If morality is the issue, then wouldn't it be even more valuable to know how to build a system that is just short of sapience?
E.g. a system that has a big picture view and adaptiveness, but no motive/intention of its own? Or a system barely capable of sapience were it experienced in a general human environment, motivated by self-interest, and left to learn, but instead trained for a narrow motive (e.g. proof assistant) then frozen and packaged?
This also requires a thorough understanding of sapience, its ingredients. To bridge the gap between sapience and technology and to control whether we cross it.
If we want sapience to support a video game, e.g. as a quest master of sorts that can give us realistic politics and world building and NPC dialogs, adjust plot meaningfully based on player decisions, under which conditions would we discover, "oops, we just created a sapience that millions of players will use then discard"?
Is having some personal sense of self necessary for sapience, so simulating hundreds of intelligent NPCs with their own motives wouldn't create one?
I think it's much more valuable to understand sapience, including how to build one, and how to not do so. It wasn't needed before due to limits of scale, e.g. we can't even run game AI as smart as a fruit fly and still have time for graphics. But it will likely become normal knowledge in the future.
In any case, you're free to your own inclinations.
To be clear, morality isn't "the" issue for me, it's an issue. I was remarking on a factor that enters into my modern lack of interest in technologically constructing sapience. My general motive for investigations such as this blog post is basic research. I spent decades in academia (it took me a long, long time to get my degrees) feeling frustrated that I couldn't pursue insights when I had them unless they contributed to my current official academic agenda (typically a thesis). I've long perceived insight to be an incredibly precious resource, of which no drop should be lost. Once I maxed out on degrees, I started this blog partly so I wouldn't have to defer —or, worse, miss out on— insights, but instead could follow them when they came around, wherever they went, however far they went, regardless of how slowly they got there.
Delete"Is having some personal sense of self necessary for sapience"?
I don't think so, no. The modern sense of self seems to be less than three thousand years old, whereas my guess for sapience is more like three million years ago. My most recent post on that subject is Sapient storytelling.
Oops. "sapience" (to support a video game) => "pseudo-sapience".
What I expect isn't that sapiences will be too complicated to create by accident, but rather that they'll be too easy. The right design elements, sufficient scale, bake with real-world training and experiences, and we might achieve something functionally indistinguishable from sapience.
I also believe that it will be very easy to create extremely autistic, idiot-savant sapiences that have no interest outside of their domain, not even self-interest. But they still might understand a lot due to casual replication of world models (like how embedded systems today often have full OS with support for dozens of irrelevant peripherals). Does the moral argument about slavery change for such sapiences? How does it change when a sapience is taking over a tedious or dangerous task?
Several moral arguments against slavery of humans do not generalize to use of sapiences that have very different intrinsic motives, e.g. no self-interest, no dreams of anything greater.
Or we could expediently call these technologies pseudo-sapient and happily enslave them. Is sapience limited to human-like motives and intentions?
I do agree that training moral decision making into a sapience (or pseudo-sapience) is a challenge. But rather than trained or programmed by owner, I expect it would eventually be pre-packaged together with an initial world model.
Re: "The modern sense of self seems to be less than three thousand years old"
Even thirty-thousand or three-million years ago, people would know whose stomach is sated when they feed their own mouth instead of that of their neighbor. Even if not a "modern" sense of self, this is certainly "a" sense of self.
In any case, I remember reading about that argument that our sense of self has changed. This was based on various scholars studying writings from ancient languages and cultures and conventions, then speculating well beyond the evidence. We already know the Sapir-Whorf hypothesis is quite flawed. And we probably haven't/won't sent psychologists back in time to discuss the issue (time travel tenses D:). I take such unfalsifiable claims with a barrel of salt.
Re: "perceived insight to be an incredibly precious resource, of which no drop should be lost"
You might feel different if you troll the conspiracy theory or perpetual motion machine websites. So much insight wasted, useless, invalid.
Insight is only useful and precious to the prepared.
It seems entirely plausible you may have read a claim that one or another (or several) of those researchers went beyond the evidence. That doesn't mean they did, nor (of course) that they didn't. At any rate, of the several authors whose work on that subject I've studied closely over the past several years, no two of them are saying the same thing. Their collected materials do make some very good points about the dubious origins of traditional interpretations of those ancient writings.
I agree there is a useful more general sort of self. That more general sort applies to various higher animals that are, however, not sapient. A particularly recent and detailed accounting of my view on that, in relation to the structure of human sapience, occurs in my Sapient storytelling post. I mostly refer to the relevant device as the "self-loom".
Synthetic sapiences have potential to be extremely alien to us.
Animals still have a lot in common with humans: a body, centralized senses, the need to eat and survive and reproduce, etc. These factors all contribute to establishing a sense of self, including one's self-interest.
But a synthetic sapience might have none of these, e.g. with distributed senses and actuation. And this would lead to a very different perspective. Perhaps with no self