Monday, December 26, 2011

Preface to Homer

But of the tree of the knowledge of good and euill, thou ſhalt not eate of it: for in the day that thou eateſt thereof, thou ſhalt ſurely die.
Genesis 2:17 (King James Version)

I'm going to suggest a hypothesis on the evolution of societies, natural languages, and memetic organisms.  Relating human language and human thought to programming is a straightforward exercise left to the student.

Eric Havelock in his magnum opus Preface to Plato (fascinating stuff) describes two profoundly different stages in the evolution of human societies, which he calls oral society and literate society.  Greek society, reckons Havelock, was transforming from oral to literate around the time of Plato; the epics of Homer are characteristic of oral culture.  I suggest that there is a stage of societal evolution before that of oral poetry such as Homer's — and that just as Afghan culture is a surviving example of oral society, the Pirahã culture recently studied in the Amazon is a surviving example of... verbal society.  (I can't bring myself to call it "pre-oral", since language is still spoken; but it does have words, and is missing oration, so "verbal" will do till something better comes along.)

Scientific organisms need a literate environment to survive; religious organisms don't need literacy, and were, I reckon, the giants that roamed the ideosphere in the age of orality.  But in the age before Homer's, religions could not have survived either.  If ever there were an actual event that fits beautifully with the myth of Adam and Eve eating of the fruit of the Tree of Knowledge, the transition from verbal to oral society would be it.

Oral thought

In an oral society (in Havelock's vision — I've also read parts of Walter Ong's Orality and Literacy, which follows Havelock's paradigm), knowledge is preserved through an oral tradition.  The form of the account matters; facts are mutated as convenient to put them in a form that will be successfully replicated over many retellings.  Standard patterns are used.  Repetition is used.  A good story also always has an actor:  things don't just happen, they are done, by somebody, which may favor a polytheistic society, with deities personifying what we (in a literate society) would call "abstract" forces.  One might well end up with some such pattern as

And God said, let there be x.  And there was x.  And God saw the x, and saw that it was good.  And the morning and the evening were the nth day.

And God said, let there be y.  And there was y.  And God saw the y, and saw that it was good.  And the morning and the evening were the n+1st day.

(Note the repetition of each item, repetition of the pattern, and the actor.)  For a more concrete example, here is the start of Thomas Hobbes's 1676 translation of Homer's Iliad:
O goddess sing what woe the discontent
Of Thetis’ son brought to the Greeks; what souls
Of heroes down to Erebus it sent,
Leaving their bodies unto dogs and fowls;
Whilst the two princes of the army strove,
King Agamemnon and Achilles stout.
Notice that things here don't happen, somebody/something does them.  The goddess sings.  The discontent (of Thetis' son) brings, sends, leaves.  The two princes strive.

The things that are acted on are concrete as well; nothing is abstract in our literate sense.

Such oral tradition can be written down, and was written down, without disrupting the orality of the society.  Literate society is what happens when the culture itself embraces writing as a means of preserving knowledge instead of an oral tradition.  Once literacy is assimilated, set patterns are no longer needed, repetition is no longer needed, pervasive actors are no longer needed, and details become reliably stable in a way that simply doesn't happen in oral society — the keepers of an oral tradition are apt to believe they tell a story exactly the same way each time, but only because they and their telling change as one.  When the actors go away, it becomes possible to conceive of abstract entities.  Plato, with his descriptions of shadows on a cave wall, and Ideal Forms, and such, was (Havelock reckoned) trying to explain literate abstraction in a way that might be understood by someone with an oral worldview.

Note that science can't possibly survive in an oral environment.  It relies on an objectively fixed record of phenomena, against which to judge theories; and it centrally studies abstract forces.  Religion, on the other hand, is ideally suited to an oral environment.  I suggest that religious organisms were the dominant taxon of memetic organisms in oral society, and the taxon of scientific organisms evolved once literate society made it possible.  Leading to a classic Darwinian struggle for survival between the two taxa.

Verbal thought

Those who study natural languages have a luxury not afforded to artlangers, those who create languages artistically:  When a natural language crops up that violates the "natural" rules that linguists thought they understood, new rules can be invented to fit.  But, if an artlanger creates a language that violates the rules linguists think they understand, their creation is likely to be ridiculed.  This was the observation of David J. Peterson in an essay in 2007 (he gives a "Smiley award" to a conlang each year, and does so in essays containing some brilliant deep insights into conlanging).

What is it to a linguist if Pirahã exists? "That sounds totally fake," says the skeptic. Says the linguist, "Yeah, doesn't it?" But in a world where Pirahã doesn't exist, imagine the conlanger who created it. "I just made up a language with no temporal vocabulary or tense whatsoever, no number system, and a culture of people who have no oral history, no art, and no appreciation for storytelling. Oh, yeah, and the language can just as easily be whistled, hummed or drummed as spoken. Oh, and the men and women have different phonologies. Oh yeah, and it's spoken in an area with a dominant language, but nobody speaks it, because they think their language is the best. Oh yeah, and it's supposed to be a naturalistic language." Suddenly when someone counters and says, "That sounds totally fake," the conlanger is put on the defensive, because they do have to account for it—in other words, "Yeah, doesn't it?" isn't going to fly.
— David J. Peterson, The 2007 Smiley Award Winner: Teonaht
Which is interesting for a conlanger, but fascinating in light of Havelock's notion of oral society.  That list of features is pretty much explicitly saying the language doesn't and can't support an oral society:  "no oral history, no art, and no appreciation for storytelling" (doesn't), "no temporal vocabulary or tense whatsoever, no number system" (can't).  And for a verbal society to survive in a world where the main Darwinian memetic struggle is between literacy and orality, of course it would have to be an extraordinarily compelling instance of verbality — "nobody speaks" the dominant language of the area, "because they think their language is the best."

The fruit of the Tree of Knowledge

The Pirahã were studied by Daniel Everett, who originally approached them with an evangelical mission — the point of such efforts is to learn an isolated people's language, then translate the teachings of Christianity into that language.  Of course this failed miserably with the Pirahã, with their compellingly verbal language and culture (J.R.R. Tolkien criticized the international auxiliary language Esperanto as sterile because it had no mythology behind it (one could argue Esperanto now does have a mythology of a sort (but I digress))).  Within a few years after completing his dissertation, Everett became an atheist.  (Everett's 2008 book on his experiences with the Pirahã is Don't Sleep, There Are Snakes: Life and Language in the Amazonian Jungle.)

All of which goes to show that the myth of the fruit of the Tree of Knowledge can be handily applied to the transition from verbal to oral society.  However, as I pointed out in my earlier post on memetic organisms, religious teachings are naturally selected for their ambiguity, their ability to be given a wide variety of different interpretations.  The plausibility of an interpretation of the myth is, therefore, pretty much meaningless — the myth is a contextual chameleon, expected to blend plausibly into different belief systems.  But it is interesting to consider how the myth might have evolved.  The early Judaic written tradition is evidently a written record of an originally oral tradition, and an oral tradition mutates events into a good story (i.e., a highly replicable one).  If the verbality conjecture is somewhere in the general neighborhood of the truth, there may once have been a vast cultural upheaval as orality supplanted verbality, perhaps (or perhaps not) on a par with the modern struggle between scientific and religious thinking (a major theme of current geopolitics).  Such an upheaval might be expected to make a lasting impression on the oral societies that emerged; the lasting impression would be a myth; and the myth would be shaped into the forms of orality, with concrete actors and objects.  What sort of myth do you think might result?

[Update:  I further discuss the timing of the verbal/oral transition in a 2015 post Sapience and language.]

Friday, December 9, 2011

The trouble with monads

IT was six men of Indostan
    To learning much inclined,
Who went to see the Elephant
    (Though all of them were blind),
That each by observation
    Might satisfy his mind.
The Blind Men and the Elephant, John Godfrey Saxe
Note:  So far, I've found three different ways of applying this epigraph to the following blog post.
Monadic programming is a device by which pure functional languages are able to handle impure forms of computation (side-effects).  Depending on who you ask, it's either very natural or preposterously esoteric.

I submit there is a systemic problem with monadic programming in its current form.  Superficially, the problem appears technical.  Just beneath that surface, it appears to be about breaking abstraction barriers.  Below that, the abstraction barriers seem to have already been broken by the pure functional programming paradigm.  I'll suggest, broadly, an alternative approach to functional programming that might resurrect the otherwise disabled abstraction barriers.

What monads do for pure FP

A pure function has no side-effects.  Pure functional programming —programming entirely with pure functions— has some advantages for correctness proofs, which is (arguably) all very well as long as the purpose of the program is to produce a final answer.  Producing a final answer is just what a pure function does.  Sometimes, though, impurities are part of the purpose of the program.  I/O is the canonical example:  if the point is to interact with the external world, a program that cannot interact with the external world certainly misses the point.

If the point is to interact with the external world, and you still want to use a pure function to do it, you can write a pure function that (in essence) takes the state of the external world as a parameter, and returns its result together with a consequent state of the external world.  Other forms of impurity can be handled similarly, with a pure function that explicitly provides for the particular variety of impurity; monads are "simply" a general pattern for a broad range of such explicit provisions.
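To make that concrete, here is a minimal sketch of the explicit world-passing style (my illustration, not taken from any particular library; World, WorldFn, putLine, and greet are invented names):

-- A "pure" action takes the current state of the world and returns its
-- result together with a consequent world.  Here a World is nothing but
-- a log of the lines written so far.
newtype World = World [String]

type WorldFn a = World -> (a, World)

putLine :: String -> WorldFn ()
putLine s (World out) = ((), World (out ++ [s]))

greet :: String -> WorldFn ()
greet name = putLine ("hello, " ++ name)

Threading the world value through every call by hand is exactly the bookkeeping that a state-style monad then packages up behind its bind operation; the sketch is only meant to show the shape of the idea.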

Note, however, that while the monadic program is parameterized by the initial state of the external world, the monad itself is hardcoded into the type signature of the pure function.

What goes wrong

The essential difficulty with this approach is that since the monad is hardcoded into the function's type signature, it also gets hardcoded into clients who wish to call that function.

To illustrate the resulting brittleness, suppose some relatively small function f is used by a large program p, involving many functions, with calls to f at the bottom of many layers of nesting of calls.  Suppose all the functions in p are pure, but later, we decide each call to f should incidentally output some diagnostic message, which makes no difference to the operation of the program but is meant to be observed by a human operator.  That's I/O, and the type signature of f hadn't provided for I/O; so we have to change its type signature by wiring in a suitable I/O monad.  But then, each function that directly calls f has to be changed to recognize the new signature of f, and since the calling function now involves I/O, its type signature too has to change.  And type signatures change all the way up the hierarchy of nested function calls, until the main function of p gets a different type signature.
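Here is a small Haskell sketch of that scenario, with hypothetical functions standing in for f, a caller g, and the main function of p:

-- Before: everything is pure.
f :: Int -> Int
f x = x * 2

g :: Int -> Int
g x = f x + 1                  -- g calls f; both have plain signatures

-- After: f must also emit a diagnostic.  Its signature now involves IO,
-- and so, one layer at a time, must every direct or indirect caller.
f' :: Int -> IO Int
f' x = do
  putStrLn ("f called with " ++ show x)   -- the incidental diagnostic
  return (x * 2)

g' :: Int -> IO Int            -- g does no I/O of its own, yet its type changes
g' x = do
  y <- f' x
  return (y + 1)

main :: IO ()
main = g' 20 >>= print

The diagnostic is entirely internal to f, yet it surfaces in the type of g and ultimately of main.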

Every direct or indirect client of f has been forced to provide for the stateful I/O behavior of f.  One could ask, though, why this stateful behavior of f should make any difference at all to those clients.  They don't do any I/O, and if not for this type-signature business they wouldn't care that f does; so why should f's I/O be any of their business?  For them to bother with it at all seems a violation of an abstraction barrier of f.

Actually, this very real abstraction violation was not caused by the introduction of monads into pure functional programming — it was highlighted by that introduction.  The violation had already occurred with the imposition of pure functional programming, which denies each component function the right to practice impurities behind an abstraction barrier while merely presenting a pure appearance to its clients.

The introduction of monads also created the distracting illusion that the clients were the ones responsible for violating the abstraction barrier.  On the contrary, the clients are merely where the symptoms of the violation appear.  The question should not be why the client function cares whether f is internally impure (it doesn't care; its involvement was forced), but rather, who is it who does care, and why?

Bigness

Monads come from a relatively modern branch of mathematics (it dates only from the 1940s) called category theory.

A category is a well-behaved family of morphisms between objects of some uniform kind.  The category provides one operation on morphisms:  composition, which is defined only when one morphism ends on the same object where the next morphism starts.  (Technically, a category is its composition operation, in that two different categories may have the same objects and the same morphisms, and still be different if their composition operation is different.)
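For reference, the standard requirements on that composition operation (a textbook summary, not the post's wording; the usual definition also asks for an identity morphism on each object):

    f : A → B,  g : B → C   ⟹   g∘f : A → C      (composition, defined only when ends meet)
    h∘(g∘f) = (h∘g)∘f                             (associativity)
    id_B ∘ f  =  f  =  f ∘ id_A                   (identity laws)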

The canonical example is category Set, which is the family of mathematical functions between sets — with the usual notion of function composition.  That's all possible functions, between all possible sets.  This is typical of the scale at which category theory is brought to bear on computation theory:  a category represents the universe of all possible computations of interest.  The categories involved are then things like all computable pure functions, or all computable functions with a certain kind of side-effect — it should be clearly understood that these categories are big.  Staggeringly, cosmologically, big.

Besides well-behaved families of morphisms between objects of a uniform kind, there are also well-behaved families of morphisms from objects of one uniform kind to objects of another uniform kind.  These families of heterogeneous morphisms are called adjunctions.  An adjunction includes, within its structure, a category of homogeneous morphisms within each of the two kinds of objects — called the domain category (from which the adjunction's heterogeneous morphisms go) and the codomain category (to which they go).  The adjunction also projects objects and morphisms of each category onto the other, projects each of its own heterogeneous morphisms as a homogeneous morphism in each of the categories, and requires various relations to hold in each category between the various projections.

The whole massive adjunction structure can be viewed as a morphism from the domain category to the codomain category — and adjunctions viewed this way are, in fact, composable in a (what else?) very well-behaved way, so that one has a category Adj whose objects are categories and whose morphisms are adjunctions.  If the categories we're interested in are whole universes of computation, and the adjunctions are massive structures relating pairs of universes, the adjunctive category Adj is mind-numbingly vast.  (In its rigorous mathematical treatment, Adj is a large category, which means it's too big to be contained in large categories, which can themselves only contain "small" categories — an arrangement that prevents large categories from containing themselves and thereby avoids Russell's paradox.)

A monad is the result of projecting all the parts of an adjunction onto its domain category — in effect, it is the "shadow" that the adjunction casts in the domain.  This allows the entire relation between the two categories to be viewed within the universe of the domain; and in the categorical view of computation, it allows various forms of impure computation to be viewed within the universe of pure computation.  This was (to my knowledge) the earliest use of monads in relation to computation:  a device for viewing impure computation within the world of pure computation.  A significant limitation in this manner of viewing impure computations is that, although adjunctions are composable, monads in general are not.  Here the "shadow" metaphor works tolerably well:  two unconnected things may appear from their shadows to be connected.  Adjunctions are only composable if the domain of one is the codomain of the other — which is almost certainly not true here, because all our monads have the same domain category (pure computation), while the shadows cast in pure computation all appear to coincide since the distinct codomains have all been collapsed into the domain.
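In the usual notation (my summary of the textbook construction, nothing specific to this post): given an adjunction F ⊣ G with F : C → D, G : D → C, unit η : Id_C ⇒ G∘F, and counit ε : F∘G ⇒ Id_D, the shadow cast on the domain C is the monad

    T = G∘F : C → C,      η : Id_C ⇒ T,      μ = GεF : T² ⇒ T

satisfying  μ∘Tη = μ∘ηT = id_T  and  μ∘Tμ = μ∘μT.  The codomain D survives only inside the composites G∘F and GεF, which is one precise sense in which the monad has "forgotten" which universe it represented.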

Who is viewing these various forms of computation, through the shadows they cast on the world of pure computation?  Evidently, the programmer — in their role as Creator.  A God's eye view.  Viewing the totality of p through the universe of pure computation is the point of the exercise; the need for all clients of f to accommodate themselves to f's internal use of I/O is an artifact of the programmer's choice of view.

Rethinking the paradigm

So, here are the points we have to work with.
  • A monad represents the physical laws of a computational universe within which functions exist.
  • The monad itself exists within the pure computational universe, rather than within the universe whose laws it represents.  This is why monads are generally uncomposable:  they have forgotten which actual universes they represent, and composition of adjunctions wants that forgotten knowledge.
  • Function signatures reflect these computational laws, but serve two different purposes.  From the client's eye view, a function signature is an interface abstraction; while from the God's eye view (in the pure computational universe), a function signature is the laws under which the function and everything it uses must operate.
To uncouple a function's interface from its laws of computation, we need a function signature that does not take a God's eye view.  The most "purity" one can then ascribe to a function f, looking at the definition of f without the definitions of other functions it calls, is that f uses other functions as if they were pure, and doesn't itself introduce any impurities. From a God's eye view, computational laws then have to pass from function to function, either
  • by synthesis — when function f calls function g, g returns its computational laws along with its result value, and f works out how to knit them all into a coherent behavior, and returns its own knit-together computational laws along with its own result value — or
  • by inheritance — when function f calls function g, f passes in its computational laws along with its parameters to g, and g works out both how to knit them into its own computational laws internally, and how to present itself to f in terms f can understand.
For purposes of abstraction, inheritance ought to be the way to go, because the point of abstraction is to reduce the conceptual burden on the caller, rather than increase it.  Not surprisingly, inheritance also appears to be the more daunting strategy to figure out.

Wednesday, November 30, 2011

Rhyming in US politics

Just a small structural insight into US politics (as advertised).
Republicans in Congress have allowed their agenda to be set by President Obama.

Republicans in Congress are being obstructionist; this shouldn't be a controversial statement, since historically they haven't made a secret of it (though, in something rather like a Catch-22, the insight I'm heading for makes it natural to expect disagreement on this along party lines).  But what one ought to be asking is why.

It's simple, really.  When an administration comes in, the opposition usually aligns itself squarely against the central priority of the new administration.  Although this may be a disagreement that predates the new administration, a sadder scenario —for all parties, and for the electorate— is that the opposition may be in disarray and simply not have any better focus than, well, opposing.  On secondary issues there may be all kinds of cooperation, but by default, not on that central priority.

And here we have a president who was really pretty explicit, before he was elected, that his central priority is cooperation.  It's the message that got him national attention in the first place:  just because we have disagreements doesn't mean we can't cooperate.

Now, consider how one would go about opposing that priority, and compare it to the current situation.

And, to see the other side of the coin, consider how one would go about pursuing that priority — in the face of opposition to it.

Monday, November 14, 2011

Where do types come from?

[On the phone] There's a man here with some sort of a parasite on his arm, assimilating his flesh at a frightening speed.  I may have to get ahead of it and amputate.  No... I don't know what it is or where it came from.
— Dr. Hallen, The Blob
I took some tolerably advanced math courses in college and graduate school.  My graduate research group of choice was the Theory Umbrella Group (THUG), joint between the math and computer science departments.  But one thing I never encountered in any of those courses, nor that-I-recall even in the THUG talks, was a type.  Sets aplenty, but not types.  Types seem to arise from specifically studying computer science, mathematics proper having no native interest in them.  There are the "types" in Russell and Whitehead's Principia Mathematica, but those don't seem to me to have anything really to do with types as experienced in programming.

Yet, over in the computer science department, we're awash in types.  They're certainly used for reasoning about programs (both practically and theoretically) — but at some point our reasoning may become more about the types themselves than about the programs they apply to.  Type systems can be strikingly reminiscent of bureaucratic red tape when one is getting tangled up in them.  So, if they aren't a natively mathematical concept, why are they involved in our reasoning at all?  Are they natural to what we're reasoning about (programs), or an unfortunate historical artifact?  From the other side, is reasoning in mathematics simpler because it doesn't use types, or does it not need to use types because what it's reasoning about is simpler?

Representation format

Looking back at the early history of programming, types evidently arose from the need to keep track of what format was being used by a given block of binary data.  If a storage cell was assigned a value using a floating-point numerical representation, and you're trying to treat it as a series of ASCII characters, that's probably because you've lost track of what you meant to be doing.  So we associate format information with each such cell.  Note that we are not, at this point, dealing directly with the abstract entities of mathematics, but with sequences of storage bits, typically fixed-width sequences at that.  Nor does the type even tell us about a sort of mathematical entity that is being stored, because within the worldview presented by our programming language, we aren't storing a mathematical entity, we're representing a data value.  Data values are more abstract than bit sequences, but a lot less abstract than the creatures we'd meet in the math department.  The essential difference, I'd suggest, is that unlike their mathematical cousins, data values carry about with them some of their own representation format, in this case bit-level representation format.

A typical further development in typing is user-defined (which is to say, programmer-defined) types.  Each such type is still stored in a sequence of storage bits, and still tells us how the storage is being used to represent a data value, rather than store a mathematical entity.  There is a significant difference from the earlier form of typing, in that the language will (almost certainly) support a practically infinite number of possible user-defined types, so that the types themselves have somewhat the character of mathematical abstract entities, rather than data values (let alone bit sequences).  If, in fact, mathematics gets much of its character by dealing with its abstract entities unfettered by representational issues (mathematics would deal with representation itself as just another abstract domain), a computer scientist who wants that character will prefer to reason as much as possible about types rather than about data values or storage cells.

Another possible development in typing, orthogonal to user-defined types, is representation-independence, so that the values constrained by types are understood as mathematical entities rather than data values.  The classic example is type bignum, whose values are conceptually mathematical integers.  Emphasis on runtime efficiency tends to heighten awareness of representational issues, so one expects an inverse relation between that emphasis, and likelihood of representation-independent types.  It's not a coincidence that bignums flourish in Lisp.  Note also that a key twist in the statement of the expression problem is the phrase "without recompiling existing code".

Complicated type systems as crutches

Once we have types, since we're accustomed to thinking about programs, we tend to want to endow our type systems with other properties we know from our programming models.  Parametric types.  Dependent types.  Ultimately, first-class types.

I've felt the lure of first-class types myself, because they abandon the pretense that complicated type systems aren't treating types computationally.  There's an incomplete language design in my files wherein a type is an object with two methods, one for determining membership and one for determining sub/supertyping.  That way leads to unbounded complications — the same train of thought has led me more recently to consider tampering with incompleteness of the continuum (cf. Section 8.4.2 of my dissertation; yet another potential blog topic [later did blog on this, here]).  As soon as I envisioned that type system, I could see it was opening the door to a vast world of bizarre tricks that I absolutely didn't want.  I really wanted my types to behave as mathematical sets, with stable membership and transitive subtyping — and if that's what you want, you probably shouldn't try to get there by first giving the methods Turing power and then backing off from it.
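A bare-bones Haskell sketch of the shape of that abandoned design (my reconstruction, purely for illustration; Value, TypeObj, and the field names are invented):

-- A "type" is an object bundling two methods: one deciding membership,
-- one deciding whether this type is a subtype of another.
data Value = VInt Integer | VStr String

data TypeObj = TypeObj
  { isMember  :: Value -> Bool      -- does the value belong to this type?
  , subtypeOf :: TypeObj -> Bool    -- is this type included in the other?
  }

intType :: TypeObj
intType = TypeObj
  { isMember  = \v -> case v of { VInt _ -> True; _ -> False }
  , subtypeOf = \_ -> False         -- no principled general answer; that's the trouble
  }

Once membership and subtyping are arbitrary computations, nothing guarantees stable membership or transitive subtyping, which is exactly the door to bizarre tricks described above.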

But, from the above, I submit that these complicated type systems are incited, to begin with, when we start down the slippery slope by
  • tangling with data values —halfway between the abstract and concrete worlds— instead of abstract mathematical entities, and
  • placing undue emphasis on types, rather than the things they describe.  This we did in the first place, remember, because types were more nearly mathematical; the irony of that is fairly intense.
In contrast to the muddle of complicated typing in computer science, folks over in the math department deal mostly with sets, a lightweight concept that fades comfortably into the background and only has to be attended to carefully under fairly extreme circumstances.  Indeed, contrasting types and sets, a major difference between them is that types have object identity — which is itself a borderline representational concept (able to come down on either side of the line), and jibes with experience that sophisticated types become data structures in their own right.  Yes, there are such things as types that don't have object identity; but somehow it seems we've already crossed the Rubicon on that one, and can no longer escape from the idea even in languages that don't endorse it.

Where next?

What we need, it seems, is the lightweight character of mathematical reasoning.  There's more to it than mathematical "purity"; Haskell is fairly pure, but to be honest I find it appallingly heavy.  I find no sense of working with simple primitives — it feels to me more like working on a scaffold over an abyss.  In mathematics, there may be several different views of things any one of which could be used as a foundation from which to build the others.  That's essentially perfect abstraction, in that from any one of these levels, you not only get to ignore what's under the hood, but you can't even tell whether there is anything under the hood.  Going from one level to the next leaves no residue of unhidden details:  you could build B from A, C from B, and A from C, and you've really gotten back to A, not some flawed approximation of it that's either more complicated than the original, more brittle than the original, or both.

Making that happen in a language design should involve some subtle shifts in the way data is conceptualized.  That isn't a digression in a discussion of types, because the way we conceptualize data has deep, not to say insidious, effects on the nature of typing.  As for the types themselves, I suggest we abandon the whole notion of types in favor of a lightweight mathematical notion of sets — and avoid using the word "type" as it naturally drags us back toward the conceptual morass of type theory that we need to escape.

Tuesday, November 8, 2011

Allowing and disallowing

Here are two problems in programming language design that are often treated as if they had to be traded off against each other.  I've found it enormously productive to assume that high-level tradeoffs are accidental rather than essential; that is, to assume that if only we find the right vantage to view the problems, we'll see how to have our cake and eat it too.  A good first step toward finding a fresh vantage on a problem is to eliminate unnecessary details and assumptions from the statement of the problem.  So here are spare, general statements of these two problems.
  • Allow maximally versatile ways of doing things, with maximal facility.
  • Disallow undesirable behavior.
I've been accused of promoting unmanageable chaos because my publicly visible work (on Kernel and fexprs) focuses on the first problem with some degree of merry disregard for the second.  So here I'll explain some of my thoughts on the second problem and its relationship to the first.

How difficult are these problems?  One can only guess how long it will actually take to tame a major problem; there's always the chance somebody could find a simple solution tomorrow, or next week.  But based on their history, I'd guess these problems have a half-life of at least half a century.

Why

To clarify my view of these problems, including what I mean by them, it may help to explain why I consider them important.

Allowing is important because exciting, new, and in any and all senses profitable innovations predictably involve doing things that hadn't been predicted.  Software technology needs to grow exponentially, which is a long-term game; in the long term, a programming language either helps programmers imagine and implement unanticipated approaches, or the language will be left in the dust by better languages.  This is a sibling to the long-term importance of basic research.  It's also a cousin to the economic phenomenon of the Long Tail, in which there's substantial total demand for all individually unpopular items in a given category — so that while it would be unprofitable for a traditional store to keep those items in stock, a business can reap profits by offering the whole range of unpopular items if it can avoid incurring overhead per item.

Disallowing is important because, bluntly, we want our programs to work right.  A couple of distinctions immediately arise.
  • Whose version of "right" are we pursuing?  There's "right" as understood by the programmer, and "right" as understood by others.  A dramatic divergence occurs in the case of a malicious programmer.  Of course, protecting against programmer malfeasance is especially challenging to reconcile with the allowing side of the equation.
  • Some things we are directly motivated to disallow, others indirectly.  Direct motivation means that thing would in itself do something we don't want done.  Indirect motivation means that thing would make it harder to prove the program doesn't do something we don't want done.

How

If allowing were a matter of computational freedom, the solution would be to program in machine code.  It's not.  In practice, a tool isn't versatile or facile if it cannot be used at scale.  What we can imagine doing, and what we can then work out how to implement, depends on the worldview provided by the programming language, within which we work, so allowing depends on this worldview.  Nor is the worldview merely a matter of crunching data — it also determines our ability to imagine and implement abstractions within the language — modulating the local worldview, within some broader metaphysics.  Hence my interest in abstractive power (on which I should blog eventually [note: eventually I did]).

How ought we to go about disallowing?  Here are some dimensions of variation between strategies — keeping in mind, we are trying to sort out possible strategies, rather than existing strategies (so not to fall into ruts of traditional thinking).
  • One can approach disallowance either by choosing the contours of the worldview within which the programmer works, or by imposing restrictions on the programmer's freedom to operate within the worldview.  The key difference is that if the programmer thinks within the worldview (which should come naturally with a well-crafted worldview), restriction-based disallowance is directly visible, while contour-based disallowance is not.  To directly see contour-based disallowance, you have to step outside the worldview.

    To reuse an example I've suggested elsewhere:  If a Turing Machine is disallowed from writing on a blank cell on the tape, that's a restriction (which, in this case, reduces the model's computational power to that of a linear bounded automaton).  If a Turing Machine's read/write head can move only horizontally, not vertically, that's a contour of the worldview.

  • Enforcement can be hard vs soft.  Hard enforcement means programs are rejected if they do not conform.  Soft enforcement is anything else.  One soft contour approach is the principle I've blogged about under the slogan dangerous things should be difficult to do by accident.  Soft restriction might, for example, take the form of a warning, or a property that could be tested for (either by the programmer or by the program).

  • Timing can be eager vs lazy.  Traditional static typing is hard and eager; traditional dynamic typing is hard and lazy.  Note, eager–lazy is a spectrum rather than a binary choice.  Off hand, I don't see how contour-based disallowance could be lazy (i.e., I'd think laziness would always be directly visible within the worldview); but I wouldn't care to dismiss the possibility.
All of which is pretty straightforward.  There's another dimension I'm less sure how to describe.  I'll call it depth.  Shallow disallowance is based on simple, locally testable criteria.  A flat type system, with a small fixed set of data types that are mutually exclusive, is very shallow.  Deep disallowance is based on more sophisticated criteria that engage context.  A polymorphic function type has a bit of depth to it; a proof system that supports sophisticated propositions about code behavior is pretty deep.

Shallow vs deep tends to play off simplicity against precision.  Shallow disallowance strategies are simple, therefore easily understood, which makes them more likely to be used correctly and —relatively— less likely to interfere with programmers' ability to imagine new techniques (versatility/facility of allowance).  However, shallow disallowance is a blunt instrument, that cannot take out a narrow or delicately structured case of bad behavior without removing everything around it.  So some designers turn to very deep strategies —fully articulated theorem-proving, in fact— but thereby introduce conceptual complexity, and the conceptual inflexibility that tends to come with it.

Recalling my earlier remark about tradeoffs, the tradeoffs we expect to be accidental are high-level.  Low-level tradeoffs are apt to be essential.  If you're calculating reaction mass of a rocket, you'd best accept the tradeoff dictated by F=ma.  On the other hand, if you step back and ask what high-level task you want to perform, you may find it can be done without a rocket.  With disallowance depth, deep implies complex, and shallow implies some lack of versatility; there's no getting around those.  But does complex disallowance imply brittleness?  Does it preclude conceptual clarity?

One other factor that's at play here is level of descriptive detail.  If the programming language doesn't specify something, there's no question of whether to disallow some values of it.  If you just say "sort this list", instead of specifying an algorithm for doing so, there's no question —within the language— of whether the algorithm was specified correctly.  On the other hand, at some point someone specified how to sort a list, using some language or other; whatever level of detail a language starts at, you'll want to move up to a higher level later, and not keep respecifying lower-level activities.  That's abstraction again.  Not caring what sort algorithm is used may entail significantly more complexity, under the hood, than requiring a fixed algorithm — and again, we're always going to be passing from one such level to another, and having to decide which details we can hide and how to hide them.  How all that interacts with disallowance depth may be critical:  can we hide complex disallowance beneath abstraction barriers, as we do other forms of complexity?

Merry disregard

You may notice I've had far more to say about how to disallow, than about how to allow.  Allowing is so much more difficult, it's hard to know what to say about it.  Once you've chosen a worldview, you have a framework within which to ask how to exclude what you don't want; but finding new worldviews is, rather by definition, an unstructured activity.

Moreover, thrashing about with specific disallowance tactics may tend to lock you in to worldviews suited to those tactics, when what's needed for truly versatile allowing may be something else entirely.  So I reckon that allowing is logically prior to disallowing.  And my publicly visible work does, indeed, focus on allowing with a certain merry disregard for the complementary problem of disallowing.  Disallowing is never too far from my thoughts; but I don't expect to be able to tackle it properly till I know what sort of allowing worldview it should apply to.

Saturday, June 4, 2011

Primacy of syntax

One of the most influential papers in the history of computer science is Christopher Strachey's 1967 "Fundamental Concepts in Programming" — lecture notes for the International Summer School on Computer Programming, Copenhagen.  Christopher Strachey basically created the academic subject of programming languages, and in those lecture notes you will find many core concepts laid out.  First-class objects.  Left-values and right-values.  Polymorphism.  And so on.

Although Strachey did write a finished version of the lecture notes for publication in the proceedings of the summer school, those proceedings never materialized — so copies of the lecture notes were passed from hand to hand.  (The modern name for such a phenomenon is "going viral", but it's surely been happening since before writing was invented.)  Finally, on the twenty-fifth anniversary of Strachey's untimely death, the finished paper was published.

By that time, though, I'd spent over a decade studying my cherished copy of the raw lecture notes.  The notes consist largely of abbreviated, even cryptic, non-sentences, but their pithy phrases are often more expressive than the polished prose of the published paper — and sometimes the ideas candidly expressed didn't even make it into the published paper.  A particular item from the lecture notes, that has stuck with me from the first moment I read it (about a quarter century ago, by now) is
Basic irrelevance of syntax and primacy of semantics.
This seems to me to neatly capture an attitude toward syntax that became firmly rooted in the late 1960s, and has scarcely been loosened since.  My own passion, though, within the academic subject Strachey created, is abstraction; and from my first explorations of abstraction in the late 1980s, it has seemed to me that abstraction defies the separation between syntax and semantics.  Sometime during the 1988/89 academic year, I opined to a professor that the artificial distinction between syntax and semantics had been retarding the development of abstraction technology for over twenty years — and the professor I said it to laughed, and after a moment distractedly remarked "that's funny" as they headed off to their next appointment.  Nonplussed by that response (I was still rather in awe of professors, in those days), to myself alone I thought: but I wasn't joking.

Abstraction

A source code element modifies the language in which it occurs.  Starting, say, with standard Java, one introduces a new class, and ends up with a different programming language — almost exactly like standard Java, but not quite because it now has this additional class in it.  That is abstraction, building a new language on top of an old one.
Abstraction:  Transformation of one programming language into another by means of facilities available in the former language.
What makes it abstraction, rather than just an arbitrary PL transformation, is that it uses the facilities of the pre-existing programming language.  (For those who prefer to trace their definitions back a few centuries, the key requirement that the new language be latent in the old —that it be drawn out of the old, Latin abstrahere— appears in John Locke's account of abstraction in his An Essay Concerning Human Understanding; the relevant passage is the epigraph at the top of Chapter 1 of the Wizard Book — and is also the epigraph at the top of Chapter 1 of my dissertation.)

An abstractively powerful programming language is, by my reckoning, a language from which one can abstract to a wide variety of other languages.  The precise formalization of that is a subtle thing; and worthy of a blog entry of its own, especially since my formal treatment of it, WPI-CS-TR-08-01, gets rather dense in some parts.  For the current entry, though, we don't need those details; the key point here is that the result of the abstraction is another language.  This is in contrast to the denotational approach to programming languages (which Strachey helped create, BTW, and which is visible in nascent form in the 1967 lecture notes):  denotationally, the programming language is essentially a function which, when applied to a source-code term, produces a semantic value of another sort entirely.
[Note:  I've since written an entry on abstractive power.]
The idea that semantic results are languages is a very powerful one.  The results of computations (and any other embellishment one wants) can be modeled by introducing additional sorts of terms that may occur in languages; the significance of each language is then fully represented by the possible sequences of terms that can follow from it.  But then, one doesn't really need the explicit semantics at all.  Only the sequences of terms matter, and one can define a programming language by a set of sequences of terms.  At that point, one could say that all semantics is being represented as syntax (which is a truism about semantics, anyway), or one could just as well say that semantics has vanished entirely to be replaced with pure syntax.
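One minimal way to make that precise (my phrasing, not necessarily the tech report's):

    a language is some set  L ⊆ T*  of finite sequences of terms over a term set T;
    the significance of a sequence σ already performed is the residual  { τ | στ ∈ L },
    i.e., the possible sequences of terms that can still follow.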

Lisp and syntax

Lisp has been described as a language with no syntax.  There is a sense in which that's true:  if by "syntax" one means "syntax for representing programs rather than data".  In primordial S-expression Lisp, the only syntax exclusively represents data.  (The way that happened —to remind— was that McCarthy had originally envisioned a second kind of syntax, M-expressions, representing programs, but he'd also described an algorithm for encoding M-expressions as S-expressions.  He expected to have years in which to polish details of the language since writing a compiler was then understood to be such a colossal undertaking, but in the meantime they were hand-coding specific Lisp functions — and then S.R. Russell hand-coded eval, and abruptly they had a working interpreter for S-expression Lisp.)

I believe, by the way, this is how Lisp should be taught to novices:  Teach them S-expression syntax first, set it firmly in their minds that such expressions are data, and only after that begin to teach them about evaluation.  Mike Gennert and I tried this approach with a class back in the spring semester of 1999/2000.  Over the duration of the course, we led the students through writing a Scheme interpreter in Java, starting with a level-0 "REPL" loop that was missing the "E" — it would read in an S-expression, and just write it out without evaluating it.  By the end of the term we'd added in proper tail recursion.  The experiment as a whole wasn't as successful as we'd hoped, because at the moment we tried it, the department's curriculum was in a state of flux, and many of the students didn't already know Java; but we didn't have the sorts of problems I've seen, or heard others describe, due to novice students failing to think in terms of evaluating S-expressions.

The connection to abstraction is immediate and compelling.  Abstraction is all about specifying how syntax will be used for subsequent program code, and the design of Lisp is focused on virtuoso manipulation of the very type of data structure (S-expressions) that is effectively the syntax of program code.  The more power Lisp gives the programmer to control how syntax will be interpreted, the more abstractive power accrues.  Since fexprs greatly expand the programmer's direct control over the interpretation-time behavior of the evaluator (short of the "throw everything out and start over" tactic of running a meta-circular evaluator — a tactic that lacks stability), fexprs should massively increase the (already formidable) abstractive power of Lisp.  That's why the subtitle of my dissertation is $vau: the ultimate abstraction.

Note: Polymorphism

Strachey's 1967 lecture notes are most known —and cited— for coining the term polymorphism.  Therein, he divided polymorphism into two forms:  parametric polymorphism, and ad hoc polymorphism.

The parametric/ad hoc distinction struck me as a logical bipartition of polymorphism; that is, every possible form of polymorphism would necessarily be either parametric or ad hoc.  Evidently Cardelli and Wegner did not interpret Strachey this way; their taxonomy placed "inclusion polymorphism" outside of both parametric and ad hoc.

It also strikes me that the name ad hoc reflects real disapproval.  This goes back to the "basic irrelevance of syntax" remark.  At heart, parametric polymorphism is semantic, ad hoc is syntactic; one might be tempted to call them "semantic polymorphism" and "syntactic polymorphism", or even "good polymorphism" and "bad polymorphism".  There is a close connection between my perception that parametric/ad hoc reflects semantic/syntactic, and my perception that parametric/ad hoc was meant to be exhaustive.

Although I expect parametric polymorphism should have greater abstractive power than ad hoc polymorphism, I don't like biasing terminology.  I'd like to see any formal results stand on their own merits.  So, years ago, I looked for an alternative to "ad hoc".  The best I came up with at the time:  selection polymorphism.

Tuesday, May 17, 2011

Dangerous things should be difficult to do by accident

Although this is a key principle for design in general —scarcely behind supporting what the system is being designed to do— I'm mainly interested in it here as a principle for designing programming languages.

Bicycles

That said, the metaphor I use to ground the principle is from mechanical engineering.

A once-popular bicycle design was the "penny-farthing", with a great big front wheel that the rider essentially sat on top of, and a small rear wheel.  (Why "penny farthing"?  The British penny was a large coin, and the farthing a small one.)  The pedals were directly on the front wheel, and the handlebars were directly over it, turning on a vertical axis.  What's wrong with that?  Obvious problems are that the rider is too far up for their feet to reach the ground, so it's not easy to stop safely; there's a long way to fall; the rider is so far forward that it's easy to fall forward over the front; and it's easy to get one's feet, or clothing, caught in the spokes of the front wheel, especially when turning (as this causes the spokes to move in relation to the rider).  All of which obvious problems are eliminated by the later "safety bicycle" design, which has two smaller wheels with the rider sitting between them, feet well away from the parts that turn when steering, and low enough that the rider can simply plant their feet on the ground when stopped.

The safety bike design also uses a chain to multiply the turning of the pedals into a higher speed than can be achieved with the penny-farthing (and multiplying speed was the reason the front wheel of the penny-farthing was made so big in the first place).

But another thing I find especially interesting about the safety bicycle design is another innovation:  its steering axis —the axis along which the handle bars rotate the front wheel— is well off the vertical.  This angle off the vertical (called the caster) means that the force of gravity, pulling the rider down toward the ground, tends to pull the front wheel toward the straight-forward position.  In fact, the further the handle bars are turned to either side, the more gravity pushes them back out of the turn.  That's inherently stable.  (The fact that, riding at speed, both wheels act as gyroscopes doesn't hurt either.)

So the significant caster of the safety bicycle actively helps the rider to limit turns to intentional turns.  That's the kind of designed-in safety factor a programming language should aspire to (and it's a tough standard to live up to).

Programming languages

Douglas McIlroy, at the 1969 Extensible Languages Symposium, described opposing philosophies of programming language design as "anarchist" and "fascist".  The principle of accident avoidance illuminates both philosophies, and the relation between them.  (McIlroy was not, BTW, playing favorites to either philosophy, so if you think this terminology is harder on one side or the other that may tell you something about your politics. :-).
C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off.
— Bjarne Stroustrup (attributed)
The fascist approach to accident prevention is to simply prohibit dangerous behaviors — which prevents accidents caused by the prohibited behaviors, at the cost of, at least,  (a) forcing the programmer to work around the prohibition and  (b) prohibiting whatever gainful employment, if any, might otherwise be derived by exploiting the prohibited behavior.  (There's a case claimed for the fascist approach based on ability to prove things about programs, but that's a subject for a different post; here I'm considering what may be done deliberately versus what may be done by accident, while that other post would concern what can be done versus what can be proven about it.)  Drawbacks to this arrangement accrue from both (a) and (b), as workarounds in (a) are something else to get wrong and, moreover, when the programmer wants to overcome limitations of (b), this tends to involve subverting the fascist restrictions, rather than working with them, producing an inherently unstable situation (in contrast to the ideal of stability we saw with the significant caster of the safety bicycle).

The anarchist philosophy makes reliance on this design principle more obvious.  You've got more opportunities to do dangerous things, so if there's something in the language design that causes you to do those things when you don't mean to —or that pushes you to mean to do them routinely, multiplying opportunities to do them wrong— that's going to be a highly visible problem with the language.

Most Lisps are pretty anarchic, and certainly my Kernel programming language is, supporting fexprs as it does.  Dangerous things are allowed on general principle; and the whole language is, among other things, an exercise in strongly principled language design, so some explicit principle was clearly needed to keep the anarchy sane.  Dangerous things should be difficult to do by accident was crafted for that sanity.

Which brings me back to something I said I'd post separately (in my earlier post about fexprs).

Hygiene in Kernel

Kernel is statically scoped, which is key to making fexprs manageable.

The static environment of a combiner is the environment of symbol-value bindings in effect at the point (in source code, and in time) where the combiner is defined.  The dynamic environment of a call to a combiner is the environment of symbol-value bindings in effect at the point (in source code and in time) from which the combiner is called.  The local environment of a combiner call is the environment of symbol-value bindings that are used, for that call, to interpret the body of the combiner.  The local environment has some call-specific bindings, of the combiner's formal parameters to arguments (or operands) passed to the call; but then, for other symbols, the local environment refers to a parent environment.  In a statically scoped combiner, the parent is the static environment; in a dynamically scoped combiner, the parent is the dynamic environment.

When a programmer writes a symbol into source code, their understanding of what that symbol refers to tends to be based on what else is declared in the same region of source code.  For the actual meaning of the symbol to coincide with the programmer's expectation, most programming languages use statically scoped applicatives (applicatives meaning, as explained in my earlier post, that the operands are evaluated in the dynamic environment, and the resulting arguments are passed to the underlying combiner).  This behavior —operands interpreted in the dynamic environment, combiner body interpreted in a local child of the static environment— is commonly called good hygiene.

Early Lisp was dynamically scoped.  (How and why that happened, and then for a couple of decades got worse instead of better, are explored in Section 3.3 of my dissertation.)  Under dynamic scope, when one writes a combiner, one doesn't generally know anything about what it will do when called:  none of the non-local symbols have any fixed meaning that can be determined when-and-where the combiner is defined.  Sometimes one actually wants this sort of dynamic behavior; but good hygiene is the "straight forward" behavior that anarchic Kernel gravitates toward as a stable state.

So, what does Kernel do to allow hygiene violations while maintaining a steady stabilizing gravity toward good hygiene?

The primitive constructor of compound operatives, $vau, is statically scoped; in addition to a formal parameter tree that matches the operands of the call, there is an extra environment parameter, a symbol that is bound in the local environment to the dynamic environment of the call.  That makes it possible to violate hygiene — although it is actually commonly used to maintain hygiene, as will be shown.  If you don't want to use the dynamic environment, you can use a special Kernel value, #ignore, in place of the environment parameter to prevent any local binding to the dynamic environment, so it's then impossible to accidentally invoke the dynamic environment.

The usual constructor for compound applicatives, $lambda, can be defined as follows; it simply transforms a combination  ($lambda formals . body)  into  (wrap ($vau formals #ignore . body)).
($define! $lambda
   ($vau (formals . body) env
      (wrap (eval (list* $vau formals #ignore body)
                  env))))

The very existence of $lambda is the first and simplest way Kernel pushes the programmer toward good hygiene:  because $lambda is easier to use than $vau —and because they can't readily be mistaken for each other— the programmer will naturally use $lambda for most purposes, turning out hygienic combiners as a matter of course.  Constructing an operative takes more work.  Constructing an applicative that doesn't ignore its dynamic environment is even more laborious, requiring a composition of $vau with wrap, as in the standard derivation of get-current-environment:
($define! get-current-environment
   (wrap ($vau () e e)))
So good hygiene just takes less work than bad hygiene.

Once the programmer has decided to use $vau, gravitating toward good hygiene gets subtler.  The above derivation of $lambda exemplifies the main tactics.  In order to evaluate the operands in any environment at all, you typically use eval — and eval requires an explicit second argument specifying the environment in which to do the evaluation.  And the most immediately available environment that can be specified is the one for which a local binding has been explicitly provided:  the dynamic environment of the call.

A nuance here is that the expression to be evaluated will typically be cobbled together within the combiner, using some parts of the operand tree together with other elements introduced from elsewhere.  This is the case for our derivation of $lambda, where the target expression has four elements — two parts of the operand tree, formals and body; literal constant #ignore; and $vau.  For these cases, a key stabilizing factor is that the construction is conducted using applicative combinations, in which the parts are specified using symbols.  Since the constructing combinations are applicative, those symbols are evaluated in the local environment during construction — so by the time the constructed expression is evaluated, all dependencies on local or static bindings have already been disposed of.  The additional elements typically introduced this way are, in fact, combiners (and occasional constants), such as in this case $vau; and these elements are atomic, evaluating to themselves, so they behave hygienically when the constructed expression is later evaluated (as I noted in my earlier post under the subheading "fluently doing nothing").
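
To see the payoff, here's a small check, assuming a standard Kernel environment and the derivation of $lambda above:  even if the caller locally shadows the symbol $vau, the derived $lambda still works, because the expression it constructs contains the $vau combiner itself rather than the symbol.
($let (($vau 42))          ; shadow the symbol $vau in the caller's environment
   (($lambda (x) x) 5))    ; still evaluates to 5:  the symbol $vau is resolved
                           ;   in $lambda's static environment, not here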

Wednesday, April 6, 2011

Fexpr

Fexpr is a noun.  It's pronounced FEKSper.  A fexpr is a procedure that acts on the syntax of its operands, rather than on the values determined by that syntax.

This is only in the world of Lisp programming languages — not that only Lisps happen to have fexprs, but that having fexprs (and having the related characteristics that make them a really interesting feature) seemingly causes a programming language to be, at some deep level, a dialect of Lisp.  That's a clue to something:  although this acting-on-syntax business sounds superficial, it's a gateway to the deepest nature of the Lisp programming-language model.  Fexprs are at the heart of the rhyming scheme of Lisp.

Data as programs

When Lisp reads a syntax (i.e., source code) expression, it immediately represents the expression as a data structure; or at least, in theory it does.  For fexprs to even make sense, this would have to be true:  a procedure in a program acts on data, so if you're passing the operand syntax expressions to a procedure, those expressions have to be data.  The Lisp evaluator then interprets the syntax expression in data form, and that's the whole of Lisp:  read an expression and evaluate it, read another and evaluate it, and so on.

But Lisp was designed, from the start, specifically for manipulating an especially simple and general kind of data structure, essentially trees (though they can also be viewed as nested lists, hence the name of the language, short for LISt Processing).  And syntax expressions are represented as these same trees that are already Lisp's native data structure.  And, the Lisp evaluator algorithm isn't limited to data that started life as a representation of syntax:  any data value can, in principle, be evaluated.  Which means that a fexpr doesn't have to act on syntax.

The theory of fexprs

One reason it matters that fexprs can act on non-syntax is a notorious theoretical result about fexprs.  Proving programs correct usually makes heavy use of determining whether any two source expressions are interchangeable.  When two source expressions may be operands to a fexpr, though, they won't be interchangeable in general unless they're syntactically identical.  So with fexprs in the language, no two syntactically distinct source expressions are ever universally interchangeable.  This was famously observed by Mitch Wand in a journal article back in 1998, The Theory of Fexprs is Trivial.

But while a fexpr can analyze any syntactic operand down to the operand's component atoms, computed operands are a different matter.  It's almost incidental that some computed data structures are encapsulated, and so can't be fully analyzed by fexprs.  The more important point is that even if the structure resulting from computation can be fully analyzed, the process by which it was computed is not subject to analysis.  If a fexpr is given an operand 42, the fexpr can't tell how that operand was arrived at; it might have been specified in source code, or computed by multiplying 6 times 7, or computed in any of infinitely many other possible ways.

So, suppose one sets up a computational calculus, something like lambda-calculus, for describing computation in a Lisp with fexprs.  Source expressions are terms in the calculus, and no two of them are contextually equivalent (i.e., interchangeable as subterms of all larger terms).  But —unless the calculus is constructed pathologically— there are still very many terms in the calculus, representing intermediate states of subcomputations, that are contextually equivalent.

I've developed a calculus like that, by the way.  It's called vau-calculus.

Deep fexprs

We're about to need much better terminology.  The word fexpr is a legacy from the earliest days of Lisp, and procedure is used in the Lisp world with several different meanings.  Here's a more systematic terminology, which I expanded from Scheme for use with the Kernel programming language.
A list to be evaluated is a combination; its first element is the operator, and the rest of its elements are operands.  The action designated by the operator is a combiner.  A combiner that acts directly on its operands is an operative.  (Legacy terms: an operative that is a data value is a fexpr; an operative that is not a data value is a special form.)  A combiner that isn't operative is applicative; in that case, the operands are all evaluated, the results of these evaluations are called arguments, and the action is performed on the arguments instead of on the operands.
It might seem that applicative combinations would be more common, and far more varied, than operative combinations.  Explicitly visible operatives in a Lisp program are largely limited to a small set, used to define symbols (in Kernel, mainly $define!  and $let), construct applicatives ($lambda), and do logical branching ($if  and $cond) — about half a dozen operatives, used over and over again.  The riotous variety of programmer-defined combiners are almost all applicative.
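
To make the distinction concrete, here's a minimal sketch, assuming a standard Kernel environment (launch-missiles is deliberately left undefined):
(+ 2 3)                             ; applicative combination:  operands 2 and 3 are
                                    ;   evaluated to arguments, and + acts on those
($if #t (+ 2 3) (launch-missiles))  ; operative combination:  $if acts on its operands
                                    ;   directly, evaluating only the one it selects,
                                    ;   so the undefined third operand is harmless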

But look closely at the above definition of applicative:  it implies that every applicative has an operative hiding inside it.  Once an argument list has been computed, it's just another list of data values — and those values are then acted on directly with no further processing, which is what one does when calling an operative!  Applicative +, which evaluates its operands to arguments and then adds the arguments, has an underlying operative that just adds its operands; and so on.

Vau-calculus

In a computational calculus for fexprs, it's a big advantage to represent each applicative explicitly as a wrapper (to indicate the operands are to be evaluated) around another underlying combiner.  That way, the calculus can formally reason about argument evaluation separately from reasoning about the underlying actions.  Vau-calculus works that way.  The whole calculus turns out to have three parts.  There's one part that only represents tree/list structures, and no computation takes place purely within that part.  There's one part that only deals with computations via fexprs.  And then, linking those two, there's the machinery of evaluation, which is where the wrapper-to-induce-operand-evaluation comes in.

Fascinatingly, of these three parts of vau-calculus, the one that deals only with computations involving fexprs is (give or take) lambda-calculus.  One could reasonably claim —without contradicting Mitch Wand's perfectly valid result, but certainly contrasting with it— that the theory of fexprs is lambda-calculus.

(Vau-calculus seems a likely topic for a future blog entry here.  Meanwhile, if you're really feeling ambitious, the place to look is my dissertation.)
[Note:  I've since blogged on vau-calculus here.]
Kernel

What works for a computational calculus also works for a Lisp language:  represent each applicative as a wrapper around an underlying combiner.  The Kernel programming language does this.  An applicative unwrap  takes an applicative argument and returns the underlying combiner of that argument; and an applicative wrap  takes any combiner at all as an argument, and returns an applicative whose underlying combiner is that argument.
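
A quick illustration, assuming a standard Kernel environment:
(list (+ 1 1) (+ 2 2))             ; evaluates to (2 4):  list is applicative, so its
                                   ;   operands are first evaluated to arguments
((unwrap list) (+ 1 1) (+ 2 2))    ; evaluates to ((+ 1 1) (+ 2 2)):  the underlying
                                   ;   operative acts on the operands themselves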

This makes Kernel a powerful tool for programmers to fluently manipulate the operand-evaluation process, just as the analogous device in vau-calculus allows reasoning about operand-evaluation separately from reasoning about the underlying lambda-calculus computations.

Kernel (evaluator)

Here's the central logic of the Kernel evaluator (coded in Kernel, then put in words).
($define! eval
   ($lambda (expr env)
      ($cond ((symbol? expr)  (lookup expr env))
             ((pair? expr)
                (combine (eval (car expr) env)
                         (cdr expr)
                         env))
             (#t  expr))))

($define! combine
   ($lambda (combiner operands env)
      ($if (operative? combiner)
           (operate combiner operands env)
           (combine (unwrap combiner)
                    (map-eval operands env)
                    env))))
To evaluate an expression in an environment:  If it's a symbol, look it up in the environment.  If it's a pair (which is the more general case of a list), evaluate the operator in the environment, and combine  the resulting combiner with the operands in the environment. If it's neither a symbol nor a pair, it evaluates to itself.

To combine a combiner with an operands-object:  If the combiner is operative, cause it to act on the operands-object (and give it the environment, too, since some operatives need that).  If the combiner is applicative, evaluate all the operands in the environment, and recursively call combine  with the underlying combiner of the applicative, and the list of arguments.
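
As a sketch of how the two routines interact, here's roughly what happens when (+ 2 3) is evaluated in a standard environment (lookup, operate, and map-eval are the same undefined helpers used in the code above):
; eval (+ 2 3) env
;   (+ 2 3) is a pair, so evaluate its operator:
;     eval + env  =>  the applicative +                   (symbol lookup)
;   then combine the result with the operands:
;     combine <applicative +> (2 3) env
;       + is not operative, so evaluate the operands:
;         map-eval (2 3) env  =>  (2 3)                   (numbers evaluate to themselves)
;       and recur with the underlying combiner:
;         combine <underlying operative of +> (2 3) env
;           now the combiner is operative:
;             operate <operative> (2 3) env  =>  5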

Kernel (fluently doing nothing)

When evaluating syntax read directly from a source file, the default case of evaluation —the final, evaluate-to-itself case— is why a literal constant, such as an integer, evaluates to itself.  What makes that case worth singling out, though, is that when evaluating computed expressions, it helps keep environments from bleeding into each other (in Lisp terminology, it helps avoid accidental bad hygiene).  Here's a basic example.

Lisp apply  overrides the usual rule for calling an applicative, by allowing a single arbitrary computation-result to be used in place of the usual list of arguments.  The first argument to apply  is the applicative, and its second argument is the value to be used instead of a list of arguments.  In Kernel, and then in words:
($define! apply
   ($lambda (appv args)
      (eval (cons (unwrap appv) args)
            (make-environment))))
To apply an applicative to an args-object, construct a combination whose operator is the underlying combiner of the applicative, and whose operands-object is the args-object; and then evaluate the constructed combination in a freshly created empty environment.  When the constructed combination is evaluated, its operator evaluates to itself because it's a combiner.  This defaulting operator evaluation doesn't need anything from the environment where the arguments to apply  were evaluated, so the constructed combination can be evaluated in an empty environment — and the environment of the call to apply  doesn't bleed into the call to the constructed combination.

In a standard Kernel environment, (apply list 2) evaluates to 2.
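
A couple of more conventional uses of the same apply, still assuming a standard Kernel environment:
(apply list (list 1 2 3))    ; evaluates to (1 2 3)
(apply + (list 1 2 3))       ; evaluates to 6:  the underlying operative of +
                             ;   adds the elements of the args-object directly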

A more impressive illustration is the way $lambda  can be defined hygienically in Kernel using more primitive elements of the language.  I should make that a separate post, though.  The earlier parts of this post deliberately didn't assume Lisp-specific knowledge at all, and in the later parts I've tried to ease into Lisp somewhat gently — but $lambda  gets into a whole nother level of Lisp sophistication (which is what makes it a worthwhile example), so it just feels logically separate.
[Note: I did later post on Kernel hygiene and $lambda, here.]

Wednesday, March 30, 2011

Memetic organisms

When Richard Dawkins coined the word meme (back in 1976, in The Selfish Gene — a must read), I suggest he made one understandable mistake, an oversight that, as far as I can tell, has lingered ever since.  It might even explain why memetics hasn't become a viable field of scientific research.

A meme is roughly an idea that makes copies of itself, which compete with copies of other memes for available resources (basically, human hosts).  When a class of things self-copy and compete, they evolve; Dawkins used the general term replicators for any such things.  Genes are replicators, and he used memes as a second example of replicators, showing that the concept has some generality to it.  All good so far, but he also wrote that this second kind of replicator was "still in its infancy, still drifting clumsily about in its primeval soup".

Um, no.  It's very easy to think that, because the world that memes inhabit —the ideosphere— isn't directly visible to our senses.  Ask yourself, if the memetic equivalent of a tyrannosaurus were (metaphorically) standing right next to you, how would you know?

Memes are, I suggest, nowhere near the primeval-soup stage.  For thousands of years, memetic organisms have roamed the earth, with reproduction (as opposed to replication), death of individual organisms, inactive memes, and something like differentiated organs.  You are surrounded, at this moment, by memetic organisms.

What's an organism?

Some large groups of memes get copied together; the name memeplex  has been suggested for such groups.  Astrology, say, or algebra.  But a biological organism isn't just a set of genes — it is, as Dawkins put it, a vehicle for genes.  The replicators have evolved the trick of building these vehicles for themselves.  Here are some things to expect of any kind of organism.

  • Each organism carries a reasonably stable set of replicators, some of which influence the organism's fitness.
  • Organisms reproduce.  A child is created by a process involving one or more parents, from which the child inherits many (most?) of the replicators it carries.
  • Organisms die.  When they do, some of the replicators they carried may be carried on by their descendants.
  • Replicators with the potential to induce certain organism traits may be carried by organisms that don't exhibit those traits.  (Both recessive genes and junk genes spring to mind.)
So, when looking for memetic organisms, we want a class of entities that carry sets of memes; that exhibit reproduction and death, inheriting from their parent(s) and so preserving memes from deceased ancestors; and that carry along some memes across generations in some sort of "inactive" form.

Seeing a memetic organism

In 1994, there was much fanfare about the twenty-fifth anniversary of the first moonwalk.  Footage of astronauts on the moon was replayed on television.  In one of these clips, an astronaut stood on the moon and did the classic experiment of dropping a light object and a heavy object to see if the heavy object fell faster.  (A feather and a wrench, I think they were.)  It wasn't well controlled; the point was evidently public education about science, for which it was preceded by a verbal explanation of the experiment — so it taught about scientific method as well as about universal gravitation.  Great stuff.

What left my jaw hanging was that the explanation started with (iirc) "Aristotle said".  Nobody was supposed to believe Aristotle's theory about falling objects, but this guy on the moon was deliberately teaching the general public about what they weren't supposed to believe.  Inherited memes systematically preserved in a sort of "inactive" form.

Thomas Kuhn described some aspects of this species of memetic organisms in 1962, in The Structure of Scientific Revolutions (another must read).  Notably, he described its reproductive process, which is what he called a scientific revolution.  Here's a rough portrait of a paradigm scientific field as a memetic organism.

The meme set carried by the organism includes a mass of theories, some of which contradict each other.  At the center of this mass is a nucleus of theories that are supposed to be believed (part of what Kuhn called a paradigm).  Surrounding the nucleus are theories that are meant to be contrasted with the paradigm and rejected, together with memes about how to conduct the contrast; one might call this surrounding material the co-nucleus.  While the organism thrives, the contrast with the co-nucleus strengthens belief in the nucleus, thus recruiting and retaining members of the organism's scientific community.  When the organism falters (the scientific community loses faith in the paradigm), eventually a new paradigm emerges, forming the nucleus of a new organism, while the new co-nucleus may contain both some nuclear and some co-nuclear material from the parent(s).

During reproduction, fragments may be drawn for the new nucleus (as "inspiration") from pretty much anywhere, even from non-sciences.  It seems that a thriving science may deliberately surround itself with a sort of third ring of memes, outside the co-nucleus and perhaps somewhat loosely coupled with the science itself (symbiotic?), that provide raw material for new co-nuclear or nuclear formations.  This third ring may include alternative, pseudo-, and fringe science; and science fiction, which can provide a venue for scientists within the community to explore new ideas without the ridicule or ostracism that would result if they prematurely proposed the same ideas in a scientific forum.

[I am, btw, not only let down but also rather fascinated, to find my memory of the moonwalk gravity demo does not match the footage I've found on YouTube [link]; besides being a hammer rather than a wrench, this footage doesn't have the explanation, nor the name Aristotle, that waxes so prominent in my recollection.  Either what I'm remembering is dominated by the epiphany I had while watching it rather than what was shown, or, not impossibly, there could be other footage floating around, either from a separate incident or (less expensively) with some sort of dubbed-over narration.]
Second example

Religion seems to be a second species (or genus, or some taxon anyway) of memetic organisms.  I may do even more poorly here, as I'm practically unread on comparative religion, but I'll take a stab at it anyway; hopefully, it will suffice to make the taxon plausible, even if my specific suggestions don't hold up at all.

The best fit for an organism seems to be below the scale usually called a sect, though conceivably somewhat above the scale of a congregation.  Part of the carried memetic material is a large mass I'll call a religious tradition, which may be written or oral.  The tradition is augmented by further memes, which I'll call an interpretation, determining what different parts of the tradition are supposed to mean.  The tradition and interpretation should be able to recruit and retain followers.  When a changing societal environment makes those memes less effective, it becomes increasingly likely that the community will either splinter, or adjust its carried meme set, creating a new organism with perhaps some deletions or even additions to the tradition, but especially, changes to the interpretation that make it work better in the societal environment.

What makes for a successful religious organism?  A successful scientific organism features highly persuasive contrast between nucleus and co-nucleus, and there is presumably some of that in the religious case too:  practices of other religions preserved as persuasive examples of what not to do; likely the scientific species is partly descended from the religious.  But there is also an interesting implication of the suggested religious model:  that over many generations, a religious tradition will evolve to be amenable to a very wide range of interpretations, as this will allow the tradition to facilitate successful reproduction in a wide variety of societal environments.  A successful tradition would therefore be an ambiguous one.

Can memetics become a normal science?

There seem to be two problems with memetic research that have held it back.

One is that efforts in memetics have been dominated for decades by attempts to define what a meme is.  The definition of gene was arrived at after extensive study of biological organisms; so presumably, one should expect extensive study of memetic organisms to be a prerequisite for arriving at a really good definition of meme.  Identifying the organisms is a start.

The other has to do with what Kuhn called normal scientific research.  This is the sort of research that takes place within a paradigm scientific field (a thriving scientific organism, that is).  The paradigm usefully constrains the sorts of questions scientists are to ask and the sorts of answers they are to give (a nuance I didn't even try to capture in my rough portrait of scientific organisms, above).  Kuhn describes such research as "puzzle solving", and its narrow focus is its strength, allowing a very great deal of focused work to be done so that, eventually, flaws lurking in the paradigm become impossible to ignore and a reproductive event is triggered — a scientific revolution, shifting things to another paradigm better describing reality.

But memetics hasn't provided that sort of structure.

There seems to be some potential, in the ideas I've proposed here, to define how memetic organisms are to be identified and analyzed, sufficiently that researchers might proceed methodically to find and study organisms.  In other words, these suggestions might be developed into a functioning paradigm that could guide normal scientific research.

Maybe.

Friday, March 25, 2011

Prosaic first post

A favorite quote of mine is "History doesn't repeat itself, but it rhymes."  Attributed to Mark Twain, though I see Wikiquote says "Twain scholars agree that it sounds like something he would say, but they have been unable to find the actual quote in his writing."  Quote attributions are like that: them as has, gets.

The thing is, it doesn't just work for history.  It works for pretty much everything — if you're familiar enough with it to recognize the rhyming scheme.  For example, I've enough of a smattering of past and present physics to recognize when science fiction has an especially good, or bad, sense of its rhyming scheme.  Vernor Vinge's A Fire Upon the Deep has the most elegantly rhymed fictional physics I've ever encountered; and I've read SFF authors with no ear for physics at all, though I'll not name names.  On the non-fiction side of the same effect, I've long sensed that Albert Einstein's dissatisfaction with quantum mechanics was, at its most primordial, dislike of its rhyme (not to in any way disparage his more specific metaphysical writings on the subject).

So I gradually accumulate evidence, fodder for my intuition, and over years and even decades my intuition slowly learns to recognize rhymes, and starts answering me back with insights — into the rhyming structure of the various subjects of study, hence the blog title.  And the insights more or less gather dust, in my files or even just in my head.  I'm starting a blog to put those insights out in the open where, with luck, maybe one way or another some will be useful to someone besides me.  (If folks find them laughable, well, laughter is good exercise, so that's useful too.)

What have I been studying, that I can blog about?  Well, there's linguistics:  programming languages (where my academic expertise lies), natural and constructed languages, and the connections among all three.  Mathematical physics.  A dash of magic.  Memetics, with both religion and science as subs under it.  Politics and economics.  And whatever else I'm forgetting (or haven't thought of yet).