Bicycles
That said, the metaphor I use to ground the principle is from mechanical engineering.
A once-popular bicycle design was the "penny-farthing", with a great big front wheel that the rider essentially sat on top of, and a small rear wheel. (Why "penny-farthing"? The British penny was a large coin, and the farthing a small one.) The pedals were directly on the front wheel, and the handlebars were directly over it, turning on a vertical axis. What's wrong with that? Obvious problems are that the rider is too far up for their feet to reach the ground, so it's not easy to stop safely; there's a long way to fall; the rider is so far forward that it's easy to pitch forward over the front; and it's easy to get one's feet, or clothing, caught in the spokes of the front wheel, especially when turning (as this causes the spokes to move in relation to the rider). All of these obvious problems are eliminated by the later "safety bicycle" design, which has two smaller wheels with the rider sitting between them, feet well away from the parts that turn when steering, and low enough that the rider can simply plant their feet on the ground when stopped.
The safety bike design also uses a chain to multiply the turning of the pedals into a higher speed than can be achieved with the penny-farthing (and multiplying speed was the reason the front wheel of the penny-farthing was made so big in the first place).
But another thing I find especially interesting about the safety bicycle design is another innovation: its steering axis —the axis along which the handlebars rotate the front wheel— is well off the vertical. This angle off the vertical (called the caster) means that the force of gravity, pulling the rider down toward the ground, tends to pull the front wheel toward the straight-forward position. In fact, the further the handlebars are turned to either side, the more gravity pushes them back out of the turn. That's inherently stable. (The fact that, riding at speed, both wheels act as gyroscopes doesn't hurt either.)
So the significant caster of the safety bicycle actively helps the rider to limit turns to intentional turns. That's the kind of designed-in safety factor a programming language should aspire to (and it's a tough standard to live up to).
Programming languages
Douglas McIlroy, at the 1969 Extensible Languages Symposium, described opposing philosophies of programming language design as "anarchist" and "fascist". The principle of accident avoidance illuminates both philosophies, and the relation between them. (McIlroy was not, BTW, playing favorites with either philosophy, so if you think this terminology is harder on one side or the other, that may tell you something about your politics. :-).
"C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off."
— Bjarne Stroustrup (attributed)

The fascist approach to accident prevention is to simply prohibit dangerous behaviors — which prevents accidents caused by the prohibited behaviors, at the cost of, at least, (a) forcing the programmer to work around the prohibition and (b) prohibiting whatever gainful employment, if any, might otherwise be derived by exploiting the prohibited behavior. (There's a case claimed for the fascist approach based on ability to prove things about programs, but that's a subject for a different post; here I'm considering what may be done deliberately versus what may be done by accident, while that other post would concern what can be done versus what can be proven about it.) Drawbacks to this arrangement accrue from both (a) and (b): workarounds in (a) are something else to get wrong, and, moreover, when the programmer wants to overcome the limitations of (b), this tends to involve subverting the fascist restrictions rather than working with them, producing an inherently unstable situation (in contrast to the ideal of stability we saw with the significant caster of the safety bicycle).
The anarchist philosophy makes reliance on this design principle more obvious. You've got more opportunities to do dangerous things, so if there's something in the language design that causes you to do those things when you don't mean to —or that pushes you to mean to do them routinely, multiplying opportunities to do them wrong— that's going to be a highly visible problem with the language.
Most Lisps are pretty anarchic, and certainly my Kernel programming language is, supporting fexprs as it does. Dangerous things are allowed on general principle; and the whole language is, among other things, an exercise in strongly principled language design, so some explicit principle was clearly needed to keep the anarchy sane. Dangerous things should be difficult to do by accident was crafted for that sanity.
Which brings me back to something I said I'd post separately (in my earlier post about fexprs).
Hygiene in Kernel
Kernel is statically scoped, which is key to making fexprs manageable.
The static environment of a combiner is the environment of symbol-value bindings in effect at the point (in source code, and in time) where the combiner is defined. The dynamic environment of a call to a combiner is the environment of symbol-value bindings in effect at the point (in source code and in time) from which the combiner is called. The local environment of a combiner call is the environment of symbol-value bindings that are used, for that call, to interpret the body of the combiner. The local environment has some call-specific bindings, of the combiner's formal parameters to arguments (or operands) passed to the call; but then, for other symbols, the local environment refers to a parent environment. In a statically scoped combiner, the parent is the static environment; in a dynamically scoped combiner, the parent is the dynamic environment.
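A small sketch may help pin these down (my own illustration, using hypothetical names f and g; the bindings and result assume standard Kernel $define!, $lambda, and +):

```scheme
($define! x 1)                ; x bound where f is defined:
                              ; part of f's static environment

($define! f
  ($lambda (y)                ; each call binds y in a fresh local environment
    (+ x y)))                 ; x is found via the local environment's parent,
                              ; which is the static environment

($define! g
  ($lambda ()
    ($define! x 100)          ; this x lives in g's local environment, hence in
    (f 2)))                   ; the *dynamic* environment of this call to f

(g)                           ; → 3 under static scope (not 102)
```

Under dynamic scope, the parent of f's local environment would instead be the environment of the call in g, and (g) would yield 102.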
When a programmer writes a symbol into source code, their understanding of what that symbol refers to tends to be based on what else is declared in the same region of source code. For the actual meaning of the symbol to coincide with the programmer's expectation, most programming languages use statically scoped applicatives (applicatives meaning, as explained in my earlier post, that the operands are evaluated in the dynamic environment, and the resulting arguments are passed to the underlying combiner). This behavior —operands interpreted in the dynamic environment, combiner body interpreted in a local child of the static environment— is commonly called good hygiene.
Early Lisp was dynamically scoped. (How and why that happened, and then for a couple of decades got worse instead of better, are explored in Section 3.3 of my dissertation.) Under dynamic scope, when one writes a combiner, one doesn't generally know anything about what it will do when called: none of the non-local symbols have any fixed meaning that can be determined when-and-where the combiner is defined. Sometimes one actually wants this sort of dynamic behavior; but good hygiene is the "straight forward" behavior that anarchic Kernel gravitates toward as a stable state.
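The contrast can be made concrete in Kernel itself, since a fexpr can deliberately look symbols up at the call site. A sketch (the $quote derivation is the standard one, included because Kernel deliberately has no built-in quotation; $dynamic-x is my hypothetical name):

```scheme
($define! $quote ($vau (x) #ignore x))  ; standard derivation of quotation

($define! $dynamic-x                    ; looks x up at the call site,
  ($vau () env                          ; i.e. dynamic scope on purpose
    (eval ($quote x) env)))

($define! x 1)
($define! test
  ($lambda ()
    ($define! x 2)                      ; the caller's own x
    ($dynamic-x)))

(test)                                  ; → 2: the caller's binding, not 1
```

Every non-local symbol in a dynamically scoped combiner behaves like this x: its meaning is fixed only at call time.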
So, what does Kernel do to allow hygiene violations while maintaining a steady stabilizing gravity toward good hygiene?
The primitive constructor of compound operatives, $vau, is statically scoped; in addition to a formal parameter tree that matches the operands of the call, there is an extra environment parameter, a symbol that is bound in the local environment to the dynamic environment of the call. That makes it possible to violate hygiene — although it is actually commonly used to maintain hygiene, as will be shown. If you don't want to use the dynamic environment, you can use a special Kernel value, #ignore, in place of the environment parameter to prevent any local binding to the dynamic environment, so it's then impossible to accidentally invoke the dynamic environment.

The usual constructor for compound applicatives, $lambda, can be defined as follows; it simply transforms a combination ($lambda formals . body) into (wrap ($vau formals #ignore . body)).

($define! $lambda
  ($vau (formals . body) env
    (wrap (eval (list* $vau formals #ignore body)
                env))))
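To see the stated transformation in action, here is a small sketch of my own (not from the Kernel Report):

```scheme
; The combination
(($lambda (a b) (* a b)) 3 4)                  ; → 12
; behaves, under the derivation above, as
((wrap ($vau (a b) #ignore (* a b))) 3 4)      ; → 12: wrap makes the call
                                               ; evaluate its operands first,
                                               ; and #ignore discards the
                                               ; dynamic environment entirely
```

Because #ignore creates no binding at all, the body of such a combiner has no way to reach the caller's environment, even by accident.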
The very existence of $lambda is the first and simplest way Kernel pushes the programmer toward good hygiene: because $lambda is easier to use than $vau —and because they can't readily be mistaken for each other— the programmer will naturally use $lambda for most purposes, turning out hygienic combiners as a matter of course. Constructing an operative takes more work. Constructing an applicative that doesn't ignore its dynamic environment is even more laborious, requiring a composition of $vau with wrap, as in the standard derivation of get-current-environment:

($define! get-current-environment
  (wrap ($vau () e e)))
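A small usage sketch (the standard $quote derivation is included only to make the demo self-contained):

```scheme
($define! $quote ($vau (x) #ignore x))    ; standard derivation, for the demo

($define! x 5)
($define! here (get-current-environment)) ; reify the current environment
(eval ($quote x) here)                    ; → 5: the symbol x, looked up in
                                          ; the captured environment
```

The trick is that wrap makes the call applicative, so the underlying operative's environment parameter e is bound to the caller's environment, which the body then simply returns as a first-class value.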
So good hygiene just takes less work than bad hygiene.

Once the programmer has decided to use $vau, gravitating toward good hygiene gets subtler. The above derivation of $lambda exemplifies the main tactics. In order to evaluate the operands in any environment at all, you typically use eval — and eval requires an explicit second argument specifying the environment in which to do the evaluation. And the most immediately available environment that can be specified is the one for which a local binding has been explicitly provided: the dynamic environment of the call.

A nuance here is that the expression to be evaluated will typically be cobbled together within the combiner, using some parts of the operand tree together with other elements introduced from elsewhere. This is the case for our derivation of $lambda, where the target expression has four elements — two parts of the operand tree, formals and body; the literal constant #ignore; and $vau. For these cases, a key stabilizing factor is that the construction is conducted using applicative combinations, in which the parts are specified using symbols. Since the constructing combinations are applicative, those symbols are evaluated in the local environment during construction — so by the time the constructed expression is evaluated, all dependencies on local or static bindings have already been disposed of. The additional elements typically introduced this way are, in fact, combiners (and occasional constants), such as, in this case, $vau; and these elements are atomic, evaluating to themselves, so they behave hygienically when the constructed expression is later evaluated (as I noted in my earlier post under the subheading "fluently doing nothing").
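These tactics come together in, for example, a derivation of a $when-style operative, in the spirit of the $lambda derivation above (my own sketch, not taken from the Kernel Report):

```scheme
($define! $when
  ($vau (test . body) env
    (eval (list $if test                 ; $if and $sequence are evaluated
                (list* $sequence body)   ; now, applicatively, in the local
                #inert)                  ; environment: they resolve to the
          env)))                         ; combiners themselves, hygienically.
                                         ; test and body are inserted as data,
                                         ; to be evaluated later in env.

($define! x 3)
($when (<? x 5) (* x x))                 ; → 9
($when (<? 5 x) (* x x))                 ; → #inert, since the test is false
```

However the caller may have rebound $if or $sequence, the constructed expression already contains the intended combiners; only test and body are interpreted in the dynamic environment, which is exactly what a caller of $when expects.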