- Allow maximally versatile ways of doing things, with maximal facility.
- Disallow undesirable behavior.
How difficult are these problems? One can only guess how long it will actually take to tame a major problem; there's always the chance somebody could find a simple solution tomorrow, or next week. But based on their history, I'd guess these problems have a half-life of at least half a century.
To clarify my view of these problems, including what I mean by them, it may help to explain why I consider them important.
Allowing is important because exciting, new, and in any and all senses profitable innovations predictably involve doing things that hadn't been predicted. Software technology needs to grow exponentially, which is a long-term game; in the long term, a programming language either helps programmers imagine and implement unanticipated approaches, or the language will be left in the dust by better languages. This is a sibling to the long-term importance of basic research. It's also a cousin to the economic phenomenon of the Long Tail, in which there's substantial total demand for all individually unpopular items in a given category — so that while it would be unprofitable for a traditional store to keep those items in stock, a business can reap profits by offering the whole range of unpopular items if it can avoid incurring overhead per item.
Disallowing is important because, bluntly, we want our programs to work right. A couple of distinctions immediately arise.
- Whose version of "right" are we pursuing? There's "right" as understood by the programmer, and "right" as understood by others. A dramatic divergence occurs in the case of a malicious programmer. Of course, protecting against programmer malfeasance is especially challenging to reconcile with the allowing side of the equation.
- Some things we are directly motivated to disallow, others indirectly. Direct motivation means the thing would itself do something we don't want done; indirect motivation means the thing would make it harder to prove the program doesn't do something we don't want done.
If allowing were a matter of computational freedom, the solution would be to program in machine code. It's not. In practice, a tool isn't versatile or facile if it cannot be used at scale. What we can imagine doing, and what we can then work out how to implement, depends on the worldview the programming language provides, within which we work; so allowing depends on this worldview. Nor is the worldview merely a matter of crunching data — it also determines our ability to imagine and implement abstractions within the language, modulating the local worldview within some broader metaphysics. Hence my interest in abstractive power (on which I should blog eventually).
How ought we to go about disallowing? Here are some dimensions of variation between strategies — keeping in mind that we are trying to sort out possible strategies, rather than existing ones (so as not to fall into ruts of traditional thinking).
- One can approach disallowance either by choosing the contours of the worldview within which the programmer works, or by imposing restrictions on the programmer's freedom to operate within the worldview. The key difference is that if the programmer thinks within the worldview (which should come naturally with a well-crafted worldview), restriction-based disallowance is directly visible, while contour-based disallowance is not. To directly see contour-based disallowance, you have to step outside the worldview.
To reuse an example I've suggested elsewhere: If a Turing Machine is disallowed from writing on a blank cell on the tape, that's a restriction (which, in this case, reduces the model's computational power to that of a linear bounded automaton). If a Turing Machine's read/write head can move only horizontally, not vertically, that's a contour of the worldview. (Both styles are sketched in code following this list.)
- Enforcement can be hard vs soft. Hard enforcement means programs are rejected if they do not conform. Soft enforcement is anything else. One soft contour approach is the principle I've blogged about under the slogan dangerous things should be difficult to do by accident. Soft restriction might, for example, take the form of a warning, or a property that could be tested for (either by the programmer or by the program).
- Timing can be eager vs lazy. Traditional static typing is hard and eager; traditional dynamic typing is hard and lazy. Note, eager–lazy is a spectrum rather than a binary choice. Offhand, I don't see how contour-based disallowance could be lazy (i.e., I'd think lazy disallowance would always be directly visible within the worldview); but I wouldn't care to dismiss the possibility. (The second sketch following this list illustrates both the hard/soft and eager/lazy axes.)
- Shallow vs deep tends to play off simplicity against precision. Shallow disallowance strategies are simple, therefore easily understood, which makes them more likely to be used correctly and —relatively— less likely to interfere with programmers' ability to imagine new techniques (versatility/facility of allowance). However, shallow disallowance is a blunt instrument that cannot take out a narrow or delicately structured case of bad behavior without removing everything around it. So some designers turn to very deep strategies —fully articulated theorem-proving, in fact— but thereby introduce conceptual complexity, and the conceptual inflexibility that tends to come with it.
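To make the restriction/contour distinction concrete, here is a minimal sketch in Python of the two styles, using the Turing-machine example above. All the names here (Tape, step_restricted, step_contoured) are mine, purely for illustration; this is a toy step function, not a full machine.

```python
BLANK = ' '

class Tape:
    """A sparse tape: unwritten cells read as blank."""
    def __init__(self, cells):
        self.cells = dict(enumerate(cells))

    def read(self, pos):
        return self.cells.get(pos, BLANK)

    def write(self, pos, symbol):
        self.cells[pos] = symbol

def step_restricted(tape, pos, symbol, move):
    # Restriction: writing is part of the worldview's vocabulary, but a
    # rule visibly forbids it on blank cells (the linear-bounded case).
    # From inside the worldview, the prohibition is plainly in view.
    if tape.read(pos) == BLANK:
        raise RuntimeError("restricted: writing on a blank cell is disallowed")
    tape.write(pos, symbol)
    return pos + (1 if move == 'R' else -1)

def step_contoured(tape, pos, symbol, move):
    # Contour: vertical movement simply isn't in the vocabulary; the only
    # moves the worldview offers are 'L' and 'R'. Nothing here reads as a
    # prohibition; you'd have to step outside the model to notice the
    # absence.
    moves = {'L': -1, 'R': +1}
    tape.write(pos, symbol)
    return pos + moves[move]
```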
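And here, in the same toy spirit, is a sketch of the hard/soft and eager/lazy axes, using a made-up declared-type check on function arguments; the decorator names and the check itself are illustrative, not any real library's API.

```python
import warnings

def _arg_types(fn):
    # Declared types of the positional arguments, in order.
    return [t for name, t in fn.__annotations__.items() if name != "return"]

def eager_hard(fn):
    # Eager + hard: inspect the definition up front, rejecting the program
    # outright if an annotation isn't a usable type (a loose stand-in for
    # traditional static typing).
    for t in _arg_types(fn):
        if not isinstance(t, type):
            raise TypeError(f"{fn.__name__}: unusable annotation {t!r}")
    return fn

def lazy_hard(fn):
    # Lazy + hard: accept the definition, but reject any individual call
    # whose arguments don't conform (traditional dynamic typing).
    def wrapper(*args):
        for arg, t in zip(args, _arg_types(fn)):
            if not isinstance(arg, t):
                raise TypeError(f"{fn.__name__}: expected {t.__name__}")
        return fn(*args)
    return wrapper

def lazy_soft(fn):
    # Lazy + soft: the same check, but a warning rather than rejection;
    # the program runs on, and the property remains testable after the fact.
    def wrapper(*args):
        for arg, t in zip(args, _arg_types(fn)):
            if not isinstance(arg, t):
                warnings.warn(f"{fn.__name__}: dubious {type(arg).__name__}")
        return fn(*args)
    return wrapper

@lazy_soft
def double(x: int) -> int:
    return x * 2

double(3.5)  # warns, yet still yields 7.0
```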
Recalling my earlier remark about tradeoffs, the tradeoffs we expect to be accidental are high-level. Low-level tradeoffs are apt to be essential. If you're calculating the reaction mass of a rocket, you'd best accept the tradeoff dictated by F=ma. On the other hand, if you step back and ask what high-level task you want to perform, you may find it can be done without a rocket. With disallowance depth, deep implies complex, and shallow implies some lack of versatility; there's no getting around those. But does complex disallowance imply brittleness? Does it preclude conceptual clarity?
One other factor that's at play here is level of descriptive detail. If the programming language doesn't specify something, there's no question of whether to disallow some values of it. If you just say "sort this list", instead of specifying an algorithm for doing so, there's no question —within the language— of whether the algorithm was specified correctly. On the other hand, at some point someone specified how to sort a list, using some language or other; whatever level of detail a language starts at, you'll want to move up to a higher level later, and not keep respecifying lower-level activities. That's abstraction again. Not caring what sort algorithm is used may entail significantly more complexity, under the hood, than requiring a fixed algorithm — and again, we're always going to be passing from one such level to another, and having to decide which details we can hide and how to hide them. How all that interacts with disallowance depth may be critical: can we hide complex disallowance beneath abstraction barriers, as we do other forms of complexity?
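As a small illustration of that last question, here is a sketch (in Python; the names are mine) of tucking a disallowance check behind an abstraction barrier, so the client just says "sort this list" and never sees the machinery that vets the result.

```python
def _is_sorted(xs):
    # The hidden check: a shallow runtime stand-in for whatever deeper
    # verification (up to full theorem-proving) the implementation uses.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def sort(xs):
    # Public face of the barrier: no algorithm is specified here, so
    # "was the algorithm specified correctly?" isn't even a question the
    # client can ask within this interface.
    result = sorted(xs)        # today's choice of algorithm, free to change
    assert _is_sorted(result)  # disallowance, hidden behind the barrier
    return result

print(sort([3, 1, 2]))  # [1, 2, 3]; the caller never sees the check
```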
You may notice I've had far more to say about how to disallow, than about how to allow. Allowing is so much more difficult, it's hard to know what to say about it. Once you've chosen a worldview, you have a framework within which to ask how to exclude what you don't want; but finding new worldviews is, rather by definition, an unstructured activity.
Moreover, thrashing about with specific disallowance tactics may tend to lock you in to worldviews suited to those tactics, when what's needed for truly versatile allowing may be something else entirely. So I reckon that allowing is logically prior to disallowing. And my publicly visible work does, indeed, focus on allowing with a certain merry disregard for the complementary problem of disallowing. Disallowing is never too far from my thoughts; but I don't expect to be able to tackle it properly till I know what sort of allowing worldview it should apply to.