[...] man had always assumed that he was more intelligent than dolphins because he had achieved so much — the wheel, New York, wars and so on — while all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man — for precisely the same reasons.
— Douglas Adams, The Hitchhiker's Guide to the Galaxy, Chapter 23.
I have a few things I want to say here about the nature of natural, as opposed to artificial, intelligence. While I'm (as usual) open to all sorts of possibilities, the theory of mind I primarily favor at the moment has a couple of characteristics that I see as contrary to prominent current lines of thought, so I want to try to explain where I'm coming from, and perhaps something of why.
Contents
The sapience engine
Scale
Evolution
Unlikelihood
Sapience
The sapience engine
In a nutshell, my theory is that the (correctly functioning) human brain is a sapience engine — a device that manipulates information (or, stored and incoming signals, if you prefer to put it so) in a peculiar way that gives rise to a sapient entity. The property of sapience itself is a characteristic that arises from the peculiar nature of the manipulation being done; the resulting entity is sapient not because of how much information it processes, but because of how it processes it.
This rather simple theory has some interesting consequences.
Scale
There's an idea enjoying some popularity in AI research these days, that intelligence is just a matter of scale. (Ray Dillinger articulated this view rather well, imho, in a recent blog post, "How Smart is a Smart AI?".) I can see at least four reasons this view has appeal in the current intellectual climate.
Doing computation on a bigger and bigger scale is what we know how to do. Compounding this, those who pursue such a technique are rewarded, both financially and psychologically, for enthusing about the great potential of what they're pursuing. And of course, what we know how to do doesn't get serious competition for attention, because the alternative is stuff we don't know how to do, which doesn't play nearly as well. Better still, by ascribing full sapience to a range of computational power we haven't achieved yet, we absolve ourselves of blame for having not yet achieved full AI. Notwithstanding that just because we know how to do it doesn't mean it can accomplish what we want it to.
The more complex a computer program gets — the more branching possibilities it encompasses and the bigger the database it draws on — the more effectively it can fool us into seeing its behavior as "like us". (Ray Dillinger's post discusses this.)
The idea that sapience is just a matter of scale appeals to a certain self-loathing undercurrent in modern popular culture. It seems to have become very trendy to praise the cleverness of other species, or of evolution itself, and emphasize how insignificant we supposedly are; in its extreme form this leads to the view that there's nothing at all special about us, we're just another species occupying an ecological niche no more "special" than the niches of various kinds of ants etc. (My subjective impression is that this trend correlates with environmentalism, though even if the correlation is real I'm very wary of reading too much into it. I observe the trend in Derek Bickerton's Adam's Tongue, 2009, which I wouldn't otherwise particularly associate with environmentalism.)
The "scale" idea also gets support from residual elements of the previously popular, opposing view of Homo sapiens as special. Some extraordinary claims are made about what a vast amount of computing power is supposedly possessed by the human brain — evidently supposing that the human brain is actually doing computation, exploiting the computational potential of its many billions of neurons in a computationally efficient way. As opposed to, say, squandering that computational potential in an almost inconceivably inefficient way in order to do something that qualitatively isn't computation. The computational-brain idea also plays on the old "big brain" idea in human evolution, which supposed that the reason we're so vastly smarter than other primates is that our brains are so much bigger. (Terrance Deacon in The Symbolic Species, 1997, debunking the traditional big-brain idea at length, notes its appeal to the simplistic notion of the human brain as a computer.)
I do think scale matters, but I suspect its role is essentially catalytic (Deacon also expresses a catalytic view); and, moreover, I suspect that beyond a certain point, bigger starts to degrade sapience rather than enhance it. I see scale coming into play in two respects. As sketched in my previous post on the subject (here), I conjecture the key device is a non-Cartesian theater, essentially short-term memory. There are two obvious size parameters for adjusting this model: the size of the theater, and the size of the audience. I suspect that with too small an audience, the resultant entity lacks efficacy, while with too large an audience, it lacks coherence. Something similar seems likely to apply to theater size; I don't think the classic "seven plus or minus two" size of human short-term memory is at all arbitrary, nor strongly dependent on other constraints of our wetware (such as audience size).
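To make those two size parameters concrete, here's a minimal toy sketch in Python (my own illustration; the class name, the random stand-ins for audience members, and the winner-take-all rule are all hypothetical, not mechanisms claimed in this post or the earlier one). It shows only where the two knobs enter: a bounded stage of theater_size items, broadcast each step to audience_size competing processes whose most popular response is fed back onto the stage.

```python
import random

class NonCartesianTheater:
    """Toy illustration: a bounded short-term store broadcast to an audience.

    theater_size  - how many items the 'stage' (short-term memory) can hold
    audience_size - how many competing processes watch the stage and respond
    Both are knobs for the conjecture that too small an audience gives an
    ineffective entity and too large an audience gives an incoherent one.
    """

    def __init__(self, theater_size=7, audience_size=50, seed=0):
        self.theater_size = theater_size
        self.audience = [random.Random(seed + i) for i in range(audience_size)]
        self.stage = []  # current contents of short-term memory

    def step(self, new_input=None):
        # An incoming signal competes for a slot on the stage.
        if new_input is not None:
            self.stage.append(new_input)
        # Every audience member sees the whole stage and proposes a response.
        proposals = [rng.choice(self.stage) if self.stage else None
                     for rng in self.audience]
        # Winner-take-all: the most popular proposal goes back onto the stage.
        winner = max(set(p for p in proposals if p is not None),
                     key=proposals.count, default=None)
        if winner is not None:
            self.stage.append(("echo", winner))
        # Oldest items fall off once the stage exceeds its capacity.
        del self.stage[:-self.theater_size]
        return self.stage

theater = NonCartesianTheater(theater_size=7, audience_size=50)
for token in ["red", "ball", "rolls"]:
    print(theater.step(token))
```

The toy makes no attempt to exhibit the conjectured loss of efficacy or coherence at the extremes; it just marks where those two parameters would be tuned.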
Note that coherent groups of humans, though they represent collectively a good deal more computational potential than a single human, are generally a lot stupider. Committees — though they can sometimes produce quite good results when well-chaired — are notorious for their poor collective skills; "design by committee" is a running joke. A mob is well-described as a mindless beast. Democracy succeeds not because it necessarily produces brilliant results but because it resists the horrors of more purely centralized forms of government. Wikipedia, the most spectacular effort to date to harness the wisdom of the masses, rather thoroughly lacks wisdom, being prone to the vices of both committees and mobs. (Do I hate Wikipedia? No. I approve deeply of some of the effects it has had on the world, while deploring others; that's a complicated subject for another time.) One might suppose that somehow the individual people in a group are acting a bit like neurons (or some mildly larger unit of brain structure), and one would need a really big group of people before intelligence would start to reemerge, but honestly I doubt it. Once you get past a "group of one", the potential intelligence of a group of people seems to max out at a well-run committee of about six, and I see no reason to think it'll somehow magically reemerge later. Six, remember, is roughly the size of short term memory, and I've both wondered myself, and heard others wonder, if this is because the individuals on the committee each have short term memories of about that size; but as an alternative, I wonder if, just possibly, the optimal committee size is not so much an echo of the size of the non-Cartesian theater as a second example of the same deep phenomenon that led the non-Cartesian theater to have that size in the first place.
Evolution
As might be guessed from the above, since my last blog post on this subject I've been reading Derek Bickerton's Adam's Tongue (2009) and Terrence Deacon's The Symbolic Species (1997, which was recommended to me by a commenter on my earlier post). Both have a fair amount to say about Noam Chomsky, mostly in the nature of disagreement with Chomsky's notion of a universal language instinct hardwired into the brain.
But it struck me, repeatedly throughout both books, that despite Deacon's disagreements with Chomsky and Bickerton's disagreements with Deacon and Chomsky, all three were in agreement that communication is the essence of the human niche, and sapience is an adjunct to it. I wondered why they thought that, other, perhaps, than that two of them are linguists and therefore inclined to see their own subject in whatever they look at (which could as well explain why I look at the same things and see an algorithm). Because I don't altogether buy into that linguistic assumption. They seem to be dismissing a possibility that imho is worth keeping in play for now.
There's a word I picked up from Deacon: exaptation. It contrasts with adaptation by changing the prefix ad- to ex-. The idea is that instead of a species feature developing as an adaptation for a purpose that the species finds beneficial, the feature develops for some other purpose and then, once available, gets exapted for a different purpose. The classic example is feathers, which are so strongly associated with flight now that it's surprising to find they were apparently exapted to that purpose after starting as an adaptation for something else (likely, for insulation).
So, here's my thought. I've already suggested, in my earlier post, that language is not necessary to sapient thought, though it does often facilitate it and should naturally arise as a consequence of it. What if sapience was exapted for language after originally developing for some other purpose?
For me, the central question for the evolution of human sapience is why it hadn't happened before. One possible answer is, of course, that it had happened before. I'm inclined to think not, though. Why not? Because we're leaving a heck of a big mark on the planet. I'm inclined to doubt that some other sapient species would have been less capable or, collectively, more wise; so it really seems likely to me that if this had happened before we might have noticed. (Could it have happened before without our noticing? Yes, but as I see it Occam's Razor doesn't favor that scenario.)
To elaborate this idea — exaptation of sapience for language — and to put it into perspective with the alternatives suggested by Deacon and Bickerton, I'll need to take a closer look at how an evolutionary path might happen to be extremely unlikely.
Unlikelihood
Evolution works by local search in the space of possible genetic configurations: imagining a drastically different design is something a sapient being might do, not something evolution would do. At any given point in the process, there has to be a currently extant configuration from which a small evolutionary step can reach another successful configuration. Why might a possible target configuration (or a family of related ones, such as "ways of achieving sapience") be unlikely to happen? Two obvious contributing factors would be:
the target is only successful under certain external conditions that rarely hold, so that most of the time, even if an extant species were within a short evolutionary step of the target, it wouldn't take that step because it wouldn't be advantageous to do so.
there are few other configurations in the close neighborhood of the target, so that it's unlikely for any extant species to come within a short evolutionary step of the target.
In other words, if the target is especially unlikely, that's because species almost never come within a small, immediately advantageous step of it by adapting toward it directly; so when it finally is reached, we should expect it to be reached by exaptation of something that developed for another purpose.
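As a deliberately crude illustration of how those two factors compound, here's a toy Monte Carlo sketch (in Python, entirely my own construction with made-up parameters; it isn't a model drawn from Deacon, Bickerton, or anyone else). Each run stands for a lineage doing local search: at each step it may or may not happen to wander within one small step of the target (the few-neighbors factor), and external conditions may or may not make that last step advantageous (the rare-conditions factor); the target is reached only when the two coincide.

```python
import random

def fraction_reaching_target(p_conditions, neighbor_density,
                             runs=2000, steps=200, seed=1):
    """Toy stand-in for evolutionary local search (illustrative only).

    p_conditions     - chance, per step, that external conditions make the
                       final step to the target advantageous
    neighbor_density - chance, per step, that the lineage has wandered within
                       one small step of the target
    A lineage reaches the target only on a step where both hold at once.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(runs):
        for _ in range(steps):
            near_target = rng.random() < neighbor_density
            conditions_hold = rng.random() < p_conditions
            if near_target and conditions_hold:
                hits += 1
                break
    return hits / runs

# Both factors generous: nearly every lineage gets there.
print(fraction_reaching_target(p_conditions=0.5, neighbor_density=0.1))
# Rarely-favorable conditions and few neighboring configurations: almost none do.
print(fraction_reaching_target(p_conditions=0.001, neighbor_density=0.001))
```

Scaling either factor down by a few orders of magnitude collapses the fraction of lineages that ever reach the target, which is the sense in which the two sources of unlikelihood multiply.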
(I'm actually unclear on whether or not feathers may be an example of this effect. Clearly they aren't necessary for flight, witness flying insects and bats; but without considerably more biological flight expertise, I couldn't say whether there are technical characteristics of feathered flight that have not been achieved by any other means.)
Sapience
This is one reason I'm doubtful of explaining the rarity of sapience while concentrating on communication. Lots of species communicate; so if sapience were an adaptation for communication, why would it be rare? Bickerton's book proposes a specific niche calling for enhanced communication: high-end scavenging, where bands of humans must cooperate to scavenge the remains of dead megafauna. Possible, yes; but I don't feel compelled by it. The purpose — the niche — doesn't seem all that unlikely to me.
Deacon's book proposes a more subtle, though seemingly closely related, purpose. Though less specific about the exact nature of the niche being occupied — which could be high-end scavenging, or perhaps group hunting — he suggests that in order to exploit the niche, hominins had to work together in large bands containing multiple mated pairs. This is tricky, he says, because in order for these group food-collection expeditions to be of major benefit to the species, those who go on the expeditions must find it in their own genetic self-interest to share the collected food with the stay-at-home nurturers and young. He discusses different ways to bring about the required self-interest motive; but evolution works, remember, in small steps, so not all of these strategies would be available for our ancestors. He suggests that the strategy they adopted for the purpose — the strategy, we may suppose, that was reachable by a small evolutionary step — was to have each mated pair enter into a social contract, essentially a marriage arrangement, in which the female agrees to mate only with a particular male in exchange for receiving food from that male's share. The arrangement holds together so long as they believe each other to be following the rules, and this requires intense communication between them plus sophisticated reasoning about each other's future behavior.
I do find (to the extent I understand them) Deacon's scenario somewhat more plausible than Bickerton's, in that it seems to provide more support for unlikelihood. Under Bickerton, a species tries to exploit a high-end scavenging niche, and the available solution to the coordination problem is proto-language. (He describes various other coordination techniques employed by bees and ants.) Under Deacon, a species tries to exploit a high-end scavenging-or-hunting niche, and the available solution to the cooperation problem is a social contract supported by symbolic thought. In either scenario, the species is presented with an opportunity that it can only exploit with an adaptation. For this to support unlikelihood, the adaptation has to be something that under most circumstances would not have been the easiest small-step solution to the challenge. Under Bickerton, the configuration of the species must make proto-language the closest available solution to the coordination problem. Under Deacon, the configuration of the species must make symbolic thinking the closest available solution to the cooperation problem. This is the sense in which, as I said, I find Deacon's scenario somewhat more plausible.
However, both scenarios seem to me to be missing something important. Both of them are centrally concerned with identifying a use (coordination per Bickerton, cooperation per Deacon) to which the new feature is to be put: they seek to explain the purpose of an adaptation. By my reasoning above, though, either the target of this adaptation should be something that's almost never the best solution for the problem, or the target should only be reachable if, at the moment it's wanted, some unlikely catalyzing factor is already present in the species (thus, available for exaptation). Or, of course, both.
From our end of human evolution, it seems that sapience is pretty much infinitely versatile, and so ought to be a useful adaptation for a wide variety of purposes. While this may be so, when conjecturing it one should keep in mind that if it is so, then sapience should be really difficult to achieve in the first place — because if it were both easy to achieve and useful for almost everything, one would expect it to be a very common development. The more immediately useful it is once achieved, the more difficult we'd expect it to be to achieve in the first place. I see two very plausible hypotheses here:
Sapience itself may be an adaptation for something other than communication. My previous post exploring sapience as a phenomenon (here) already suggested that sapience once achieved would quickly be exapted for communication. My previous posts regarding verbal culture (starting here) suggest that language, once acquired, may take some time (say, a few million years) to develop into a suitable medium for rapid technological development; so the big payoff we perceive from sapience would itself be a delayed exaptation of language, not contributing to its initial motivation. Deacon suggests there are significant costs to sapience, so that its initial adoption has to have a strong immediate benefit.
Sapience may require, for its initial emergence, exaptation of some other relatively unlikely internal feature of the mind. This calls for some deep mulling over, because we don't have at all a firm grasp of the internal construction of sapience; we're actually hoping for clues to the internal construction from studying the evolutionary process, which is what we're being offered here if we can puzzle it out.
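A quick back-of-envelope version of the trade-off noted above (the more broadly useful sapience is, the harder its first emergence must have been) can be had with made-up numbers; only the shape of the arithmetic matters, since the values here are purely hypothetical.

```python
# Illustrative made-up numbers: if sapience were advantageous nearly everywhere,
# a single observed emergence forces the "within reach" factor to be tiny.
opportunities = 1e8           # hypothetical lineage-epochs that could have taken the step
p_worth_taking = 0.9          # "useful for almost everything"
observed_emergences = 1

# expected emergences ~= opportunities * p_within_reach * p_worth_taking
p_within_reach = observed_emergences / (opportunities * p_worth_taking)
print(p_within_reach)         # ~1.1e-8: all of the rarity lands on reachability
```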
Putting these pieces together, I envision a three-step sequence. First, hominin minds develop some internal feature that can be exapted for sapience if that becomes sufficiently advantageous to overcome its costs. Second, an opportunity opens up, whereby hominin communities have a lot to gain from group food-collection (be it scavenging or hunting), but making it work requires sophisticated thinking about future behavior, leading to development of sapience. The juxtaposition of these first two steps is the prime source of unlikelihood. I place no specific requirement on how sapience is applied to the problem; I merely suppose that sapience (symbolic thinking, as Deacon puts it) makes individuals more able to realize that cooperating is in their own self-interest, and that doing so is sufficiently advantageous to outweigh the costs of sapience, so that genes for sapience come to dominate the gene pool. Third, as sapience becomes sufficiently ubiquitous in the population, it is naturally exapted for language, which then further plays into the group cooperation niche as well as synergizing with sapience more broadly. At this point, I think, the process takes on an internal momentum; over time, our ancestors increasingly exploit the language niche, becoming highly optimized for it, and the benefits of language continue to build until they reach critical mass with the Neolithic revolution.