Tuesday, November 8, 2011

Allowing and disallowing

Here are two problems in programming language design that are often treated as if they had to be traded off against each other.  I've found it enormously productive to assume that high-level tradeoffs are accidental rather than essential; that is, to assume that if only we find the right vantage to view the problems, we'll see how to have our cake and eat it too.  A good first step toward finding a fresh vantage on a problem is to eliminate unnecessary details and assumptions from the statement of the problem.  So here are spare, general statements of these two problems.
  • Allow maximally versatile ways of doing things, with maximal facility.
  • Disallow undesirable behavior.
I've been accused of promoting unmanageable chaos because my publicly visible work (on Kernel and fexprs) focuses on the first problem with some degree of merry disregard for the second.  So here I'll explain some of my thoughts on the second problem and its relationship to the first.

How difficult are these problems?  One can only guess how long it will actually take to tame a major problem; there's always the chance somebody could find a simple solution tomorrow, or next week.  But based on their history, I'd guess these problems have a half-life of at least half a century.

Why

To clarify my view of these problems, including what I mean by them, it may help to explain why I consider them important.

Allowing is important because exciting, new, and in any and all senses profitable innovations predictably involve doing things that hadn't been predicted.  Software technology needs to grow exponentially, which is a long-term game; in the long term, a programming language either helps programmers imagine and implement unanticipated approaches, or the language will be left in the dust by better languages.  This is a sibling to the long-term importance of basic research.  It's also a cousin to the economic phenomenon of the Long Tail, in which there's substantial total demand for all individually unpopular items in a given category — so that while it would be unprofitable for a traditional store to keep those items in stock, a business can reap profits by offering the whole range of unpopular items if it can avoid incurring overhead per item.

Disallowing is important because, bluntly, we want our programs to work right.  A couple of distinctions immediately arise.
  • Whose version of "right" are we pursuing?  There's "right" as understood by the programmer, and "right" as understood by others.  A dramatic divergence occurs in the case of a malicious programmer.  Of course, protecting against programmer malfeasance is especially challenging to reconcile with the allowing side of the equation.
  • Some things we are directly motivated to disallow, others indirectly.  Direct motivation means the thing would in itself do something we don't want done.  Indirect motivation means the thing would make it harder to prove that the program doesn't do something we don't want done.

How

If allowing were a matter of computational freedom, the solution would be to program in machine code.  It's not.  In practice, a tool isn't versatile or facile if it cannot be used at scale.  What we can imagine doing, and what we can then work out how to implement, depends on the worldview the programming language provides and within which we work; so allowing depends on this worldview.  Nor is the worldview merely a matter of crunching data — it also determines our ability to imagine and implement abstractions within the language, which modulate the local worldview within some broader metaphysics.  Hence my interest in abstractive power (on which I should blog eventually [note: eventually I did]).

How ought we to go about disallowing?  Here are some dimensions of variation between strategies — keeping in mind that we are trying to sort out possible strategies, rather than existing ones (so as not to fall into ruts of traditional thinking).
  • One can approach disallowance either by choosing the contours of the worldview within which the programmer works, or by imposing restrictions on the programmer's freedom to operate within the worldview.  The key difference is that if the programmer thinks within the worldview (which should come naturally with a well-crafted worldview), restriction-based disallowance is directly visible, while contour-based disallowance is not.  To directly see contour-based disallowance, you have to step outside the worldview.

    To reuse an example I've suggested elsewhere:  If a Turing Machine is disallowed from writing on a blank cell on the tape, that's a restriction (which, in this case, reduces the model's computational power to that of a linear bounded automaton).  If a Turing Machine's read/write head can move only horizontally, not vertically, that's a contour of the worldview.  (A minimal sketch of this distinction in code follows the list.)

  • Enforcement can be hard vs soft.  Hard enforcement means programs are rejected if they do not conform.  Soft enforcement is anything else.  One soft contour approach is the principle I've blogged about under the slogan "dangerous things should be difficult to do by accident".  Soft restriction might, for example, take the form of a warning, or a property that could be tested for (either by the programmer or by the program).

  • Timing can be eager vs lazy.  Traditional static typing is hard and eager; traditional dynamic typing is hard and lazy.  Note, eager–lazy is a spectrum rather than a binary choice.  Off hand, I don't see how contour-based disallowance could be lazy (i.e., I'd think laziness would always be directly visible within the worldview); but I wouldn't care to dismiss the possibility.
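
To make the restriction/contour distinction concrete, here is a minimal sketch in Python; the classes are my own, purely illustrative, and not meant as a serious Turing Machine implementation — the point is only where the disallowance lives.

    class RestrictedTape:
        """Restriction-based disallowance: the worldview still expresses writing
        on a blank cell, but the operation is rejected when attempted, so the
        disallowance is directly visible from inside."""
        BLANK = None

        def __init__(self, symbols):
            self.cells = dict(enumerate(symbols))

        def write(self, pos, symbol):
            if self.cells.get(pos, self.BLANK) is self.BLANK:
                # hard enforcement; issuing only a warning here instead would make it soft
                raise ValueError("writing on a blank cell is disallowed")
            self.cells[pos] = symbol

    class Head:
        """Contour-based disallowance: vertical movement simply isn't part of the
        interface.  Working inside this worldview there is nothing to bump into;
        the absence only shows up when you step outside and compare."""
        def __init__(self):
            self.pos = 0

        def move_left(self):
            self.pos -= 1

        def move_right(self):
            self.pos += 1
        # no move_up or move_down: that is the contour, not a check
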
All of which is pretty straightforward.  There's another dimension I'm less sure how to describe.  I'll call it depth.  Shallow disallowance is based on simple, locally testable criteria.  A flat type system, with a small fixed set of data types that are mutually exclusive, is very shallow.  Deep disallowance is based on more sophisticated criteria that engage context.  A polymorphic function type has a bit of depth to it; a proof system that supports sophisticated propositions about code behavior is pretty deep.
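
A rough illustration of the two ends, using hypothetical checks of my own (in Python):

    def shallow_check(value):
        # Shallow: a simple, locally testable criterion, in the spirit of a flat
        # type system with a small fixed set of mutually exclusive data types.
        return isinstance(value, (int, float, str))

    def deep_check(candidate_sort, samples):
        # Deeper: a criterion that engages context, namely a behavioral property
        # of a whole function, checked here against sample inputs.  A genuine
        # proof system would establish the property for all inputs, not a sample.
        return all(candidate_sort(list(xs)) == sorted(xs) for xs in samples)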

Shallow vs deep tends to play off simplicity against precision.  Shallow disallowance strategies are simple, therefore easily understood, which makes them more likely to be used correctly and —relatively— less likely to interfere with programmers' ability to imagine new techniques (versatility/facility of allowance).  However, shallow disallowance is a blunt instrument that cannot take out a narrow or delicately structured case of bad behavior without removing everything around it.  So some designers turn to very deep strategies —fully articulated theorem-proving, in fact— but thereby introduce conceptual complexity, and the conceptual inflexibility that tends to come with it.

Recalling my earlier remark about tradeoffs, the tradeoffs we expect to be accidental are high-level.  Low-level tradeoffs are apt to be essential.  If you're calculating the reaction mass of a rocket, you'd best accept the tradeoff dictated by F=ma.  On the other hand, if you step back and ask what high-level task you want to perform, you may find it can be done without a rocket.  With disallowance depth, deep implies complex, and shallow implies some lack of versatility; there's no getting around those.  But does complex disallowance imply brittleness?  Does it preclude conceptual clarity?

One other factor that's at play here is level of descriptive detail.  If the programming language doesn't specify something, there's no question of whether to disallow some values of it.  If you just say "sort this list", instead of specifying an algorithm for doing so, there's no question —within the language— of whether the algorithm was specified correctly.  On the other hand, at some point someone specified how to sort a list, using some language or other; whatever level of detail a language starts at, you'll want to move up to a higher level later, and not keep respecifying lower-level activities.  That's abstraction again.  Not caring what sort algorithm is used may entail significantly more complexity, under the hood, than requiring a fixed algorithm — and again, we're always going to be passing from one such level to another, and having to decide which details we can hide and how to hide them.  How all that interacts with disallowance depth may be critical:  can we hide complex disallowance beneath abstraction barriers, as we do other forms of complexity?
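
As a toy illustration of that last question (hypothetical code, not a proposal): callers of the sort below just say "sort this list"; both the choice of algorithm and the check that disallows a wrong result live under the abstraction barrier.

    from collections import Counter

    def _insertion_sort(xs):
        # one particular algorithm; nothing above the barrier depends on which
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] <= x:
                i += 1
            out.insert(i, x)
        return out

    def sort(xs):
        """The abstraction barrier: "sort this list", algorithm unspecified."""
        ys = _insertion_sort(xs)
        # hidden disallowance: reject any result that isn't an ordered permutation of the input
        assert all(a <= b for a, b in zip(ys, ys[1:])) and Counter(ys) == Counter(xs)
        return ys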

Merry disregard

You may notice I've had far more to say about how to disallow, than about how to allow.  Allowing is so much more difficult, it's hard to know what to say about it.  Once you've chosen a worldview, you have a framework within which to ask how to exclude what you don't want; but finding new worldviews is, rather by definition, an unstructured activity.

Moreover, thrashing about with specific disallowance tactics may tend to lock you in to worldviews suited to those tactics, when what's needed for truly versatile allowing may be something else entirely.  So I reckon that allowing is logically prior to disallowing.  And my publicly visible work does, indeed, focus on allowing with a certain merry disregard for the complementary problem of disallowing.  Disallowing is never too far from my thoughts; but I don't expect to be able to tackle it properly till I know what sort of allowing worldview it should apply to.

8 comments:

  1. The way I've said it is that expressiveness ("allow") is only saying the absolute minimum to do what you want. While safety is doing only what you want and nothing else. So there's a surprising amount of overlap between them.

  2. I would add, expressiveness also includes empowering you to say things you wouldn't otherwise have thought to say at all.  That's where the problems are asymmetric.

  3. I am afraid I think there *is* an essential trade-off here, in a very general way.

    'Undesirable behaviour' has, in general, no sharp, technical definition, and so it cannot, in general, be disallowed by technical means or constraints. One can devise ways to prevent (e.g.) null-derefs, or eliminate even the possibility of their expression in the language. But one cannot automatically check if what you code will produce the effect you intend.

    *That* is the basic essential general problem of programming. It is one of 'clarity', rather than allowance/disallowance quite -- because only you can decide what to allow, and that is sort-of easy, it is not technical, you just decide -- but you can only decide if you can see what you are doing -- hence the importance of clarity.

    This clarity is a psychological matter (not of type-systems, or any kind of automated checking). We might develop technical models of it, but it is ultimately about what is clear to *human* thinking. (Perhaps a limitation on interrelatedness, or distance of effects, would be a factor . . .)

    From that there are two conclusions: 1, it seems there must be an essential unavoidable trade-off, because if power means a kind of density of instruction (as Paul Graham sort-of suggests), the more of it the harder it is to see and understand clearly. 2, we just have to wait and see if the $vau/fexpr approach is good to program with, but it seems to have a 'localised' character (doesn't it?) and that is probably a good sign.

    There is still some matter of *consistency* rather than *correctness* -- i.e. of technical definition and conformance to it that *can* be checked/disallowed typewise/automatically -- but it seems maybe the lesser problem overall . . . although maybe still significant . . .

    (I wrote more pondering on this at: http://www.hxa.name/notes/note-hxa7241-20111113T1241Z.html)

  4. That's not what I mean by "allowing".  What I'm talking about is not about clarity (which is related to soft disallowing), and in general it's not at all easy to decide what to allow — because the things you aren't allowing (as opposed to things you are merely disallowing) are things you haven't even imagined doing.  What's difficult is thinking thoughts that have never been thought before, and that's what I want to allow.

    The tradeoff you're describing seems to me to take place entirely within a single programming paradigm already chosen (so the worst of the allowing phase of design has already passed), and doesn't engage this primordial sense of "allowing".  This is related to why I didn't use the word "expressiveness" (and I guess I should really have pointed this out in my reply to the previous comment) — as that word is commonly used, it's largely about how things can be said, which distracts from the primordial question of what can be conceived at all.

  5. OK, I think what you mean by 'allowing' seems like Wittgenstein's 'showing' rather than 'saying'.

    That could be translated like this: a programming language is like placing a grid over your glasses. That has two obvious features: it captures everything in the world, which is like being Turing Complete, and it is easy to use, which is like having clarity. But there is another sort-of hidden factor: the shape of the grid. It could be rectangular, or hexagonal, or something else -- all would be functionally the same, both Turing Complete and clear, but each would have a different quality. It does not limit what is done, but *forms* it or biases it in some ways.

    Then, designing a programming language is like designing the codec for JPEG (another example of such a 'grid'). Overall, there is no perfect encoding, each has different strengths -- there is always a trade-off. But human perception and what we make images of are singularly important, so optimising for those biases effectively gives the best results.

    So we need some *measure* of the form/bias of languages (a grid for the grid!), otherwise design will not be very possible . . .

  6. The grid metaphor is interesting (puts me in mind of the Principia Discordia, too :-).

    The (modest?) example I suppose I had in mind was my own dissertation, "Fexprs as the basis of Lisp function application". The central idea is that every ordinary Lisp function has a fexpr inside it, with a wrapper around it to induce argument evaluation.  When you learn to look at Lisp programs that way (or so I found), you start seeing possible ways of doing things that really wouldn't have occurred to you before — and you start considering whether to do some of them, and whether to disallow some of them.
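
    A toy of that idea, in Python rather than Kernel (the names and the mini-evaluator are mine, purely illustrative):

        def evaluate(expr, env):
            # minimal evaluator for the sketch: strings act as symbols and look
            # up their values; everything else evaluates to itself
            return env[expr] if isinstance(expr, str) else expr

        def wrap(operative):
            # Wrapping an operative yields an applicative: the wrapper evaluates
            # each operand in the caller's environment, then hands the resulting
            # arguments to the operative inside.
            def applicative(operands, env):
                return operative([evaluate(o, env) for o in operands], env)
            return applicative

        def quote_first(operands, env):
            # an operative (fexpr-like): it sees its operands unevaluated
            return operands[0]

        get_first = wrap(quote_first)

        env = {"x": 42}
        quote_first(["x"], env)   # => "x"  (the expression itself)
        get_first(["x"], env)     # => 42   (the wrapper induced argument evaluation)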

  7. I like the characterization of `contour` based disallowance. And I think `shallow` ultimately corresponds to `modular`.

    I do not like your characterization of `allowing`, as though diverse worldviews is the opposite of preventing undesirable behavior. (That seems like opposite sides to entirely different coins.)

    Maybe you should break down `allowing` along multiple dimensions, too:
    * power (what can and cannot be done; authority)
    * versatility (use in a broad variety of contexts and domains)
    * flexibility (adapt existing software to work on a new problem; opposite rigidity)
    * extensibility (add new behaviors to existing systems without modifying their code)
    * diversity (one obvious way to do it vs. many ways to do it)

    As before, many of these may be what you call `eager or lazy` and `hard or soft`. Eager flexibility, for example, means we can adapt code statically but might need to reconfigure and rebuild when the environment changes.

    I think you value `diversity` far more than I would. Supporting multiple world-views isn't nearly so essential, IMO, as supporting multiple problem domains.

  8. You say "Software technology needs to grow exponentially" but I think it only needs to grow with the number of `relationships` - i.e. roughly O(N^2) distinct services for N communicating agents, and typically much less than that.

    Really, we should be aiming in the opposite direction of exponential growth - support the many contexts we have today with orders of magnitude less code than we use today.
