APL is like a diamond. It has a beautiful crystal structure; all of its parts are related in a uniform and elegant way. But if you try to extend this structure in any way — even by adding another diamond — you get an ugly kludge. LISP, on the other hand, is like a ball of mud. You can add any amount of mud to it [...] and it still looks like a ball of mud!
— R1RS (1978) page 29, paraphrasing remarks attributed to Joel Moses. (The merits of the attribution are extensively discussed in the bibliography of the HOPL II paper on Lisp by Steele and Gabriel (SIGPLAN Notices 28 no. 3 (March 1993), p. 268), but not in either freely-available on-line version of that paper.)
Modern programming languages try to support extension through regimentation, a trend popularized with the structured languages of the 1970s; but Lisp gets its noted extensibility through lack of regimentation. It's not quite that simple, of course, and the difference is why the mud metaphor works so well. Wikis are even muddier than Lisp, which I believe is one of the keys to their power. The Wikimedia Foundation has been trying lately to make wikis cleaner (with, if I may say so, predictably crippling results); but if one tries instead to introduce a bit of interaction into wiki markup (changing the texture of the mud, as it were), crowd-sourcing a wiki starts to look like a sort of crowd-sourced programming — with some weird differences from traditional programming. In this post I mean to explore wiki-based programming and try to get some sense of how deep the rabbit hole goes.
This is part of a bigger picture. I'm exploring how human minds and computers (more generally, non-sapiences) compare, contrast, and interact. Computers, while themselves not capable of being smart or stupid, can make us smarter or stupider (post). Broadly: They make us smarter when they cater to our convenience, giving full scope to our strengths while augmenting our weaknesses; as I've noted before (e.g. yonder), our strengths are closely related to language, which I reckon is probably why textual programming languages are still popular despite decades of efforts toward visual programming. They make us stupider when we cater to their convenience, which typically causes us to judge ourselves by what they are good at and find ourselves wanting. For the current post, I'll "only" tackle wiki-based programming.
I'll wend my way back to the muddiness/Lisp theme further down; first I need to set the stage with a discussion of the nature of wikis.
Contents
Wiki markup
Wiki philosophy
Nimble math
Brittle programming
Mud
Down the rabbit-hole
Wiki perspective
Technically, a wiki is a collection of pages written in wiki markup, a simple markup language designed for specifying multi-page hypertext documents, characterized by very low-overhead notations, making it exceptionally easy both to read and to write. This, however, is deceptive, because how easy wiki markup is isn't just a function of the ergonomics of its rules of composition, but also of how it's learned in a public wiki community — an instance of the general principle that the technical and social aspects of wikis aren't separable. Here's what I mean.
In wiki markup (I describe wikimedia wiki markup here), to link to another page on the wiki locally called "foo", you'd write [[foo]]; to link to it from some other text "text" instead of from its own name, [[foo|text]]. Specify a paragraph break by a blank line; italics ''baz'', boldface '''bar'''. Specify section heading "quux" by ==quux== on its own line, or ===quux=== for a subsection, ====quux==== for a subsubsection, etc.; a bulleted list by putting each item on its own line starting with *. That's most of wiki markup right there. These rules can be so simple because specifying a hypertext document doesn't usually require specifying a lot of complicated details (unlike, say, the TeX markup language where the point of the exercise is to specify finicky details of typesetting).
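To get a feel for how little overhead is involved, here is a sketch of what the source of a small wiki page might look like (the page and its content are invented for illustration):

    The '''mud wasp''' is an insect found in ''most'' temperate regions.
    It builds nests out of mud; see [[nest]] and [[wasp]].

    ==Nest building==
    * gathers mud near [[water|ponds and streams]]
    * shapes the mud into cells
    * seals each cell once it is provisioned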
But this core of wiki markup is easier than the preceding paragraph makes it sound. Why? Because as a wiki user — at least, on a big pre-existing wiki such as Wikipedia, using a traditional raw-markup-editing wiki interface — you probably don't learn the markup primarily by reading a description like that, though you might happen across one. Rather, you start by making small edits to pages that have already been written by others and, in doing so, you happen to see the markup for other things near the thing you're editing. Here it's key that the markup is so easy to read that this incidental exposure to examples of the markup supports useful learning by osmosis. (It should be clear, from this, that bad long-term consequences would accrue from imposing a traditional WYSIWYG editor on wiki users, because a traditional WYSIWYG editor systematically prevents learning-by-osmosis during editing — because preventing the user from seeing how things are done under-the-hood is the purpose of WYSIWYG.)
Inevitably, there will be occasions when more complicated specifications are needed, and so the rules of wiki markup do extend beyond the core I've described. For example, embedding a picture on the page is done with a more elaborate variant of the link notation. As long as these occasional complications stay below some practical threshold of visual difficulty, though (and as long as they remain sufficiently rare that newcomers can look at the markup and sort out what's going on), the learning-by-osmosis effect continues to apply to them. You may not have, say, tinkered with an image on a page before, but perhaps you've seen examples of that markup around, and even if they weren't completely self-explanatory you probably got some general sense of them, so when the time comes to advance to that level of tinkering yourself, you can figure out most or all of it without too much of the terrible fate of reading a help page. Indeed, you may do it by simply copying an example from elsewhere and making changes based on common sense. Is that cargo-cult programming? Or, cargo-cult markup? Maybe, but the markup for images isn't actually all that complicated, so there probably isn't an awful lot of extra baggage you could end up carrying around — and it's in the nature of the wiki philosophy that each page is perpetually a work in progress, so if you can do well enough as a first approximation, you or others may come along later and improve it. And you may learn still more by watching how others improve what you did.
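For instance, embedding an image on a wikimedia wiki typically looks something like this (the file name and caption are placeholders):

    [[File:Example.jpg|thumb|right|200px|A short caption]]

which is recognizably the link notation with a few extra options separated by pipes.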
Btw, my description of help pages as a terrible fate isn't altogether sarcastic. Wikis do make considerable use of help pages, but for a simple reason unrelated to the relative effectiveness of help pages. The wiki community are the ones who possess know-how to perform expert tasks on the wiki, so the way to capture that knowledge is to crowd-source the capture to the wiki community; and naturally the wiki community captures it by doing the one thing wikis are good for: building hypertext documents. However, frankly, informational documentation is not a great way to pass on basic procedural knowledge; the strengths of informational documentation lie elsewhere. Information consumers and information providers alike have jokes about how badly documentation works for casual purposes — from the consumer's perspective, "when all else fails, read the instructions"; from the producer's, "RTFM".
Sometimes things get complicated enough that one needs to extend the markup. For those cases, there's a notation for "template calls", which is to say, macro calls; I'll have more to say about that later.
Wiki philosophy

Here's a short-list of key wiki philosophical principles. They more-or-less define, or at least greatly constrain, what it is to be a wiki. I've chosen them with an evident bias toward relevance for the current discussion, and without attempting a comprehensive view of wiki philosophy — although I suspect most principles that haven't made my list are not universal even to the wikimedian sisterhood (which encompasses considerable variation from the familiar Wikipedian pattern).
- Each page is perpetually a work in progress; at any given moment it may contain some errors, and may acquire new ones, which might be fixed later. Some pages may have some sort of "completed" state after which changes to them are limited, such as archives of completed discussions; but even archives are merely instances of general wiki pages and, on a sufficiently long time scale, may occasionally be edited for curational purposes.
- Pages are by and for the People. This has (at least that I want to emphasize) two parts to it: anyone can contribute to the pages, and the whole point of the pages is to empower the people who use them.
- Page specification is universally grounded in learn-by-osmosis wiki markup. I suspect this is often overlooked in discussions of wiki philosophy because the subject is viewed from the inside, where the larger sweep of history may be invisible. Frankly, looking back over the past five decades or so of the personal-computing revolution from a wide perspective, I find it glaringly obvious that this is the technical-side sine qua non of the success of wikis.
There is also a meta-principle at work here, deep in the roots of all these principles: a wiki is a human self-organizing system. The principles I've named provide the means for the system to self-organize; cripple them, and the system's dynamic equation is crippled. But this also means that we cannot expect to guide wikis through a conventional top-down approach (which is, btw, another reason why help pages don't work well on a wiki). Only structural rules that guide the self-organization can shape the wiki, and complicated structural rules will predictably bog down the system and create a mess; so core simplicity is the only way to make the wiki concept work.
The underlying wiki software has some particular design goals driven by the philosophical principles.
- Graceful degradation. This follows from pages being works in progress; the software platform has to take whatever is thrown at it and make the best use of what's there. This is a point where it matters that the actual markup notations are few and simple: hopefully, most errors in a wiki page will be semantic and won't interfere with the platform's ability to render the result as it was intended to appear. Layout errors should tend to damp out rather than cascade, and it's always easier to fix a layout problem if it results in some sensible rendering in which both the problem and its cause are obvious.
- Robustness against content errors. Complementary to graceful degradation: while graceful degradation maximizes positive use of the content, robustness minimizes negative consequences. The robustness design goal is driven both by pages being works in progress and by anyone being able to contribute, in that the system needs to be robust both against consequences of things done by mistake and against consequences of things done maliciously.
- Radical flexibility. Vast, sweeping flexibility; capacity to express anything users can imagine, and scope for their imaginations. This follows from the human touch, the by and for the People nature of wikis; the point of the entire enterprise is to empower the users. To provide inherent flexibility and inherent robustness at the deep level where learning-by-osmosis operates and structural rules guide a self-organizing system is quite an exercise in integrated design. One is reminded of the design principle advocated (though not entirely followed for some years now) by the Scheme reports, to design "not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary"; except, the principle is made even more subtle because placing numeric bounds on operations, in a prosaic technical sense, is often a desirable measure for robustness against content errors, so one has to find structural ways to enable power and flexibility in the system as a whole that coexist in harmony with selective robustness-driven numeric bounds.
- Authorization/quality control. This follows from the combination of anyone being able to contribute with the need for robustness (against both malice and accident). A wiki community must be able to choose users who are especially trusted by the community; if it can't do that, it's not a community. Leveraging off that, some changes to the wiki can be restricted to trusted users, and some changes once made may have some sort of lesser status until they've been approved by a trusted user. These two techniques can blur into each other, as a less privileged user can request a change requiring privileges and, depending on how the interface works, the request process might look very similar to making a change subject to later approval.
As software platform design goals for a wiki, imho there's nothing particularly remarkable, let alone controversial, about these.
Nimble math

Shifting gears, consider abstraction in mathematics. In my post some time back on types, I noted
In mathematics, there may be several different views of things any one of which could be used as a foundation from which to build the others. That's essentially perfect abstraction, in that from any one of these levels, you not only get to ignore what's under the hood, but you can't even tell whether there is anything under the hood. Going from one level to the next leaves no residue of unhidden details: you could build B from A, C from B, and A from C, and you've really gotten back to A, not some flawed approximation of it[.]

Suppose we have a mathematical structure of some sort; for example, matroids (I had the fun some years back of taking a class on matroids from a professor who was coauthoring a book on the subject). There are a passel of different ways to define matroids, all equivalent but not obviously so (the Wikipedia article uses the lol-worthily fancy term cryptomorphic). Certain theorems about matroids may be easier to prove using one or another definition, but once such a theorem has been proven, it doesn't matter which definition we used when we proved it; the theorem is simply true for matroids regardless of which equivalent definition of matroid is used. Having proven it we completely forget the lower-level stuff we used in the proof, a characteristic of mathematics related to discharging the premise in conditional proof (which, interestingly, seems somehow related to the emergence of antinomies in mathematics, as I remarked in an earlier post).
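For concreteness, here is one standard way to define them, the independent-set axioms; the basis, circuit, and rank-function definitions carve out exactly the same class of structures, though proving that takes real work.

    A matroid is a pair $(E, \mathcal{I})$, where $E$ is a finite set and
    $\mathcal{I}$ is a family of subsets of $E$ (the "independent sets") such that
    \begin{align*}
      &\text{(I1)}\quad \emptyset \in \mathcal{I};\\
      &\text{(I2)}\quad A \subseteq B \text{ and } B \in \mathcal{I} \;\Rightarrow\; A \in \mathcal{I};\\
      &\text{(I3)}\quad A, B \in \mathcal{I} \text{ and } |A| < |B| \;\Rightarrow\;
        \exists\, x \in B \setminus A \text{ with } A \cup \{x\} \in \mathcal{I}.
    \end{align*}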
Generality in mathematics is achieved by removing things; it's very easy to do, just drop some of the assumptions about the structures you're studying. Some of the theorems that used to hold are no longer valid, but those drop out painlessly. There may be a great deal of work involved in figuring which of the lost theorems may be re-proven in the broader setting, with or without weakening their conclusions, but that's just icing on the cake; nothing about the generalization can diminish the pre-existing math, and new results may apply to special cases as facilely as matroid theorems would leap from one definition to another. Specialization works just as neatly, offering new opportunities to prove stronger results from stronger assumptions without diminishing the more general results that apply.
Brittle programming

There seems to be a natural human impulse to seek out general patterns, and then want to explain them to others. This impulse fits in very well with Terrence Deacon's vision of the human mind in The Symbolic Species (which I touched on tangentially in my post on natural intelligence) as synthesizing high-level symbolic ideas for the purpose of supporting a social contract (hence the biological importance of sharing the idea once acquired). This is a familiar driving force for generalization in mathematics; but I see it also as an important driving force for generalization in programming. We've got a little program to do some task, and we see how to generalize it to do more (cf. this xkcd); and then we try to explain the generalization. If we were explaining it to another human being, we'd probably start to explain our new idea in general terms, "hand-waving". As programmers, though, our primary dialog isn't with another human being, but with a computer (cf. Philip Guo's essay on User Culture Versus Programmer Culture). And computers don't actually understand anything; we have to spell everything out with absolute precision — which is, it seems to me, what makes computers so valuable to us, but also trips us up because in communicating with them we feel compelled to treat them as if they were thinking in the sense we do.
So instead of a nimble general idea that synergizes with any special cases it's compatible with, the general and specific cases mutually enhancing each other's power, we create a rigid framework for specifying exactly how to do a somewhat wider range of tasks, imposing limitations on all the special cases to which it applies. The more generalization we do, the harder it is to cope with the particular cases to which the general system is supposed to apply.
It's understandable that programmers who are also familiar with the expansive flexibility of mathematics might seek to create a programming system in which one only expresses pure mathematics, hoping thereby to escape the restrictions of concrete implementation. Unfortunately, so I take from the above, the effort is misguided because the extreme rigidity of programming doesn't come from the character of what we're saying — it comes from the character of what we're saying it to. If we want to escape the problem of abstraction, we need a strategy to cope with the differential between computation and thought.
Mud

Lisp [...] made me aware that software could be close to executable mathematics.
— ACM Fellow Profile of L. Peter Deutsch.
Part of the beauty of the ball-of-mud quote is that there really does seem to be a causal connection between Lisp's use of generic, unencapsulated, largely un-type-checked data structures and its extensibility. The converse implication is that we should expect specialization, encapsulation, and aggressive type-checking of data structures — all strategies extensively pursued in various combinations in modern language paradigms — each to retard language extension.
Each of these three characteristics of Lisp serves to remove some kind of automated restriction on program form, leaving the programmer more responsible for judging what is appropriate — and thereby shifting things away from close conversation with the machine, enabling (though perhaps not actively encouraging) more human engagement. The downside of this is likely obvious to programming veterans: it leaves things more open to human fallibility. Lisp has a reputation as a language for good programmers. (As a remark attributed to Dino Dai Zovi puts it, "We all know that Lisp is the best language around, but in the hands of most it becomes like that scene in Fantasia when Mickey Mouse gets the wand.")
Even automatic garbage collection and bignums can be understood as reducing the depth of the programmer's conversation with the computer.
I am not, just atm, taking sides on whether the ability to define specialized data types makes a language more or less "general" than doing everything with a single amorphous structure (such as S-expressions). If you choose to think of a record type as just one example of the arbitrary logical structures representable using S-expressions, then a minimal Lisp, by not supporting record types, is declining to impose language-level restrictions on the representing S-expressions. If, on the other hand, you choose to think of a cons cell as just one example of a record type (with fields car and cdr), then "limiting" the programmer to a single record type minimizes the complexity of the programmer's forced interaction with the language-level type restrictions. Either way, the minimal-Lisp approach downplays the programmer's conversation with the computer, leaving the door open for more human engagement.
Wikis, of course, are pretty much all about human engagement. While wiki markup minimizes close conversation with the computer, the inseparable wiki social context does actively encourage human engagement.
Down the rabbit-hole

At the top of the post I referred to what happens if one introduces a bit of interaction into wiki markup. But, as with other features of wiki markup, this what-if cannot be separated from the reason one would want to do it. And as always with wikis, the motive is social. The point of the exercise — and this is being tried — is to enhance the wiki community's ability to capture their collective expertise at performing wiki tasks, by working with and enhancing the existing strengths of wikis.
The key enabling insight for this enhancement was that by adding a quite small set of interaction primitives to wiki markup (in the form, rather unavoidably, of a small set of templates), one can transform the whole character of wiki pages from passive to active. Most of this is just two primitives: one for putting text input boxes on a page, and another for putting buttons on the page that, when clicked, take the data in the various input boxes from the page and send it somewhere — to be transformed, used to initialize text boxes on another page, used to customize the appearance of another page, or, ultimately, used to devise an edit to another page. Suddenly it's possible for a set of wiki pages to be, in effect, an interactive wizard for performing some task, limited only by what the wiki community can collectively devise.
Notice the key wiki philosophical principles still in place: each page perpetually a work in progress, by and for the people, with specification universally grounded in learn-by-osmosis wiki markup. It seems reasonable to suppose, therefore, that the design principles for the underlying wiki software, following from those wiki philosophical principles, ought also to remain in place. (In the event, the linked "dialog" facility is, in fact, designed with graceful degradation, robustness, and flexibility in mind and makes tactical use of authorization.)
Consider, though, the nature of the new wiki content enabled by this small addition to the primitive vocabulary of wiki markup. Pages with places to enter data and buttons that then take the entered data and, well, do stuff with it. At the upper end, as mentioned, a collection of these would add up to a crowd-sourced software wizard. The point of the exercise is to bring the wiki workflow to bear on growing these things, to capture the expertise uniquely held by the wiki community; so the wiki philosophical principles and design principles still apply, but now what they're applying to is really starting to look like programming; and whereas all those principles were pretty tame when we were still thinking of the output of the process as hypertext, applying them to a programming task can produce some startling clashes with the way we're accustomed to think about programming — as well as casting the wiki principles themselves in a different light than in non-interactive wikis.
Wikis have, as noted, a very mellow attitude toward errors on a wiki page. Programmers as a rule do not. The earliest generation of programmers seriously expected to write programs that didn't have bugs in them (and I wouldn't bet against them); modern programmers have become far more fatalistic about the occurrences of bugs — but one attitude that has, afaics, never changed is the perception that bugs are something to be stamped out ASAP once discovered. Even if some sort of fault-tolerance is built into a computer system, the gut instinct is still that a bug is an inherent evil, rather than a relatively less desirable state. It's a digital rather than analog, discrete rather than continuous, view. Do wikis want to eliminate errors? Sure; but it doesn't carry the programmer's sense of urgency, of need to exterminate the bug with prejudice. Some part of this is, of course, because a software bug is behavioral, and thus can actually do things so its consequences can spread in a rapid, aggressive way that passive data does not... although errors in Wikipedia are notorious for the way they can spread about the infosphere — at a speed more characteristic of human gossip than computer processing.
The interactive-wiki experiment was expected from the outset to be, in its long-term direction, unplannable. Each individual technical step would be calculated and taken without probing too strenuously what step would follow it. This would be a substantially organic process; imagine a bird adding one twig at a time to a nest, or a painter carefully thinking out each brush stroke... or a wiki page evolving through many small edits.
In other words, the interactivity device —meant to empower a wiki community to grow its own interactive facilities in much the same way the wiki community grows the wiki content— would itself be developed by a process of growth. Here it's crucial —one wants to say, vital— that the expertise to be captured is uniquely held by the wiki community. This is also why a centralized organization, such as the Wikimedia Foundation, can't possibly provide interactive tools to facilitate the sorts of wiki tasks we're talking about facilitating: the centralized organization doesn't know what needs doing or how to do it, and it would be utterly unworkable to have people petitioning a central authority to please provide the tools to aid these things. The central authority would be clueless about what to do at every single decision point along the way, from the largest to the smallest decision. The indirection would be prohibitively clumsy, which is also why wiki content doesn't go through a central authority: wikis are a successful content delivery medium because when someone is, say, reading a page on Wikipedia and sees a typo, they can just fix it themselves, in what turns out to be a very straightforward markup language, rather than going through a big, high-entry-cost process of petitioning a central authority to please fix it. What it all adds up to, on the bottom line, is the basic principle that wikis are for the People, and that necessarily applies just as much to knowledge of how to do things on the wiki as it does to the provided content.
Hence the on-wiki tasks need to be semi-automated, not automated as such: they're subject to concerns similar to drive-by-wire or fly-by-wire, in which Bad Things Happen if the interface is designed in a way that cuts the human operator out of the process. Not all the problems of fly/drive-by-wire apply (I've previously ranted, here and there, on misdesign of fly-by-wire systems); but the human operator needs not only to be directly told some things, but also to be given peripheral information. Sapient control of the task is essential, and in the case of wikis is an important part of the purpose of the whole exercise; and peripheral information is a necessary enabler for sapient control: the stuff the human operator catches in the corner of their eye allows them to recognize when things are going wrong (sometimes through subliminal clues that low-bandwidth interfaces would have excluded, sometimes through contextual understanding that automation lacks); and peripheral information also enables the training-by-osmosis that makes the user more able to do things and recognize problems subliminally and have contextual understanding.
These peculiarities are at the heart of the contrast with conventional programming. Most forms of programming clearly couldn't tolerate the mellow wiki attitude toward bugs; but we're not proposing to do most forms of programming. An on-wiki interactive assistant isn't supposed to go off and do things on its own; that sort of rogue software agent, which is the essence of the conventional programmer's zero-tolerance policy toward bugs, would be full automation, not semi-automation. Here the human operator is supposed to be kept in the loop and well-informed about what is done and why and whatever peripheral factors might motivate occasional exceptions, and be readily empowered to deviate from usual behavior. And when the human operator thinks the assistant could be improved, they should be empowered to change or extend it.
At this point, though, the rabbit hole rather abruptly dives very deep indeed. In the practical experiment, as soon as the interaction primitives were on-line and efforts began to apply them to real tasks, it became evident that using them was fiendishly tricky. It was really hard to keep track of all the details involved, in practice. Some of that, of course, has to be caused by the difficulty of overlaying these interactions on top of a wiki platform that doesn't natively support them very well (a politically necessary compromise, to operate within the social framework of the wikimedian sisterhood); but a lot of it appears to be due to the inherent volatility wiki pages take on when they become interactive. This problem, though, suggests its own solution. We've set out to grow interactive assistants to facilitate on-wiki tasks. The growing of interactive assistants is itself an on-wiki task. What we need is a meta-assistant, a semi-automated assistant to help users with the creation and maintenance of semi-automated assistants.
It might seem as if we're getting separated from the primitive level of wiki markup, but in fact the existence of that base level is a necessary stabilizing factor. Without a simple set of general primitives underneath, defining an organic realm of possibilities for sapient minds to explore, any elaborate high-level interface naturally devolves into a limited range of possibilities anticipated by the designer of the high-level interface (not unlike the difference between a multiple-choice quiz and an essay question; free-form text is where sapient minds can run rings around non-sapient artifacts).
It's not at all easy to design a meta-assistant to aid managing high-level design of assistants while at the same time not stifling the sapient user's flexibility to explore previously unimagined corners of the design space supported by the general primitives. Moreover, while pointedly not limiting the user, the meta-assistant can't help guiding the user, and while this guidance should be generally minimized and smoothed out (lest the nominal flexibility become a sudden dropping off into unsupported low-level coding when one steps off the beaten path), the guidance also has to be carefully chosen to favor properties of assistants that promote success of the overall interactive enterprise; as a short list,
- Avoid favoring negative features such as inflexibility of the resulting assistants.
- Preserve the reasoning behind exceptions, so that users aren't driven to relitigate exceptional decisions over and over until someone makes the officially "unexceptional" choice.
- Show the user, unobnoxiously, what low-level manipulations are performed on their behalf, in some way that nurtures learning-by-osmosis. (On much the same tack, assistants should be designed —somehow or other— to coordinate with graceful degradation when the interactivity facilities aren't available.)
- The shape of the assistants has to coordinate well with the strategy that the meta-assistant uses to aid the user in coping with failures of behavior during assisted operations — as the meta-assistant aids in both detecting operational problems and in adjusting assistants accordingly.
- The entire system has to be coordinated to allow recovery of earlier states of assistants when customizations to an assistant don't work out as intended — which becomes extraordinarily fraught if one contemplates customizations to the meta-assistant (at which point, one might consider drawing inspiration from the Lispish notion of a reflective tower; I'm deeply disappointed, btw, to find nothing on the wikimedia sisterhood about reflective towers; cf. Wand and Friedman's classic paper [pdf]).
Wiki perspective
As noted earlier, the practical experiment is politically (thus, socially) constrained to operate within the available wikimedia platform, using strictly non-core facilities —mostly, JavaScript— to simulate interactive primitives. Despite some consequent awkward rough spots at the interface between interactivity and core platform, a lightweight prototype seemed appropriate to start with because the whole nature of the concept appeared to call for an agile approach — again, growing the facility. However, as a baseline of practical interactive-wiki experience has gradually accumulated, it is by now possible to begin envisioning what an interactive-core wiki platform ought to look like.
Beyond the basic interactive functionality itself, there appear to be two fundamental changes wanted to the texture of the platform interface.
The basic interaction functionality is mostly about moving information around. Canonically, interactive information originates from input fields on the wiki page, each specified in the wiki markup by a template call (sometimes with a default value specified within the call); and interactive information is caused to move by clicking a button, any number of which may occur on the wiki page, each also specified in the wiki markup by a template call. Each input field has a logical name within the page, and each button names the input fields whose data it is sending, along with the "action" to which the information is to be sent. There are specialized actions for actually modifying the wiki —creating or editing a page, or more exotic things like renaming, protecting, or deleting a page— but most buttons just send the information to another page, where two possible things can happen to it. Entirely within the framework of the interactive facility, incoming information to a page can be used to initialize an input field of the receiving page; but, more awkwardly (because it involves the interface between the lightweight interactive extension and the core non-interactive platform), incoming information to a page can be fed into the template facility of the platform. This wants a bit of explanation about wiki templates.
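As a purely illustrative sketch (the template names and parameters here are hypothetical; the templates in the actual experiment are organized somewhat differently), a page that collects a topic from the user and sends it off might contain markup along these lines:

    <!-- hypothetical dialog templates, for illustration only -->
    Enter a topic: {{dialog/input|name=topic|default=mud}}

    {{dialog/button|label=Draft a report|fields=topic|action=view|target=Report drafting}}

The input field has a logical name, topic, within the page; the button names the fields whose data it sends and the action (here, sending the data along to another page for display) to which the data goes.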
Wiki pages are (for most practical purposes) in a single global namespace, all pages fully visible to each other. A page name can be optionally separated into "fields" using slashes, a la UNIX file names, which is useful in keeping track of intended relationships between pages, but with little-to-no use of relative names: page names are almost always fully qualified. A basic "template call" is delimited on the calling page by double braces ("{{}}") around the name of the callee; the contents of the callee are substituted (in wiki parlance, transcluded) into the calling page at the point of call. To nip potential resource drains (accidental or malicious) in the bud, recursion is simply disallowed, and a fairly small numerical bound is placed on the depth of nested calls. Optionally, a template can be parameterized, with arguments passed through the call, using a pipe character ("|") to separate the arguments from the template name and from each other; arguments may be explicitly named, or unnamed in which case the first unnamed parameter gets the name "1", the second "2", and so on. On the template page, a parameter name delimited by triple curly braces ("{{{}}}") is replaced by the argument with that name when the template is transcluded.
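For example (the page names and text are invented), a template page Welcome note might contain

    Hello {{{1}}}, and welcome to {{{project}}}!

and a calling page containing {{Welcome note|Pat|project=Wikibooks}} would transclude it as "Hello Pat, and welcome to Wikibooks!", with the unnamed argument Pat bound to the parameter named 1.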
In conventional programming-language terms, wiki template parameters are neither statically nor dynamically scoped; rather, all parameter names are strictly local to the particular template page on which they occur. This is very much in keeping with the principle of minimizing conversation with the machine; the meaning-as-template of a page is unaffected by the content of any other page whatever. Overall, the wiki platform has a subtle flavor of dynamic scoping about it, but not from parameter names, nor from how pages name each other; rather, from how a page names itself. The wiki platform provides a "magic word" —called like a template but performing a system service rather than transcluding another page— for extracting the name of the current page, {{PAGENAME}} (this'll do for a simple explanation). But here's the kicker: wiki markup {{PAGENAME}} doesn't generate the name of the page on which the markup occurs, but rather, the name of the page currently being displayed. Thus, when you're writing a template, {{PAGENAME}} doesn't give you the name of the template you're writing, but the name of whatever page ultimately calls it (which might not even be the name of the page that directly calls the one you're writing). This works out well in practice, perhaps because it focuses you on the act of display, which is after all the proper technical goal of wiki markup. (There was, so I understand, a request long ago from the wikimedia user community for a magic word naming the page on which the markup occurs, but the platform developers declined the request; apparently, though we've already named here a couple of pretty good design reasons not to have such a feature —minimal conversation, and display focus— the developers invoked some sort of efficiency concern.)
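A tiny illustration (page names invented): suppose a template, hypothetically called Signpost, contains

    This page is called {{PAGENAME}}.

When a page called Mud transcludes it, that line renders as "This page is called Mud.", not "This page is called Signpost."; the magic word reports the page being displayed, not the page on which the markup was written.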
Getting incoming parameters from the dialog system to the template system can be done, tediously under-the-hood, by substituting data into the page — when the page is being viewed by means of an "action" (the view action) rather than directly through the wiki platform. This shunting of information back-and-forth between dialog and template is awkward and falls apart on some corner cases. Presumably, if the interactive facility were integrated into the platform, there ought to be just one kind of parameter, and just one kind of page-display so that the same mechanism handles all parameters in all cases. This, however, implies the first of the two texture changes indicated to the platform. The wikimedia API supports a request to the server to typeset a wiki text into html, which is used by the view action when shunting information from dialog parameters to template parameters; but, crucially, the API only supports this typesetting on an unadorned wiki text — that is, a wiki text without parameters.
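The request in question is, presumably, the API's parse action; very roughly (exact parameters vary by platform version, and the wiki text would be URL-encoded in a real request), a call has this shape:

    api.php?action=parse&format=json&prop=text&contentmodel=wikitext&text=...some complete wiki text...

Note that what gets handed to the server is a single self-contained wiki text; there is no slot in the request for supplying arguments alongside it, which is precisely the limitation at issue here.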
To properly integrate interactivity into the platform, both the fundamental unit of modular wiki information, and the fundamental unit of wiki activity, have to change, from unadorned to parameterized. The basic act of the platform is then not displaying a page, but displaying a page given a set of arguments. This should apply, properly, to both the API and the user interface. Even without interactivity, it's already awkward to debug a template because one really wants to watch what happens as the template receives arguments, and follow the arguments as they flow downward and results flow back upward through nested template calls; which would require an interface oriented toward the basic unit of a wiki text plus arguments, rather than just a wiki text. Eventually, a debugging interface of this sort may be constructed on top of the dialog facility for wikimedia; in a properly integrated system, it would be basic platform functionality.
The second texture change I'm going to recommend to the platform is less obvious, but I've come to see it as a natural extension of shifting from machine conversation to human.
Wiki template calls are a simple process (recall the absence of complicated scoping rules) well grounded in wiki markup; but even with those advantages, as nested call depth increases, transclusion starts to edge over into computational territory. The Wikimedia Foundation has apparently developed a certain corporate dislike for templates, citing caching difficulties (which they've elected not to try to mitigate as much as they might) and computational expense, and ultimately turning to an overtly computational (and, by my reckoning, philosophically anti-wiki) use of Lua to implement templates under-the-hood.
As a source of inspiration, though, consider transitive closures. Wikibooks (another wikimedia sister of Wikinews and Wikipedia) uses one of these, where the entire collection of about 3000 books is hierarchically arranged in about 400 subjects, and a given book when filed in a subject has to be automatically put in all the ancestors of that subject. Doing this with templates and without recursion, it can still be managed by a chain of templates calling each other in sequence (with some provision that if the chain isn't long enough, human operator intervention can be requested to lengthen it); but then, there's also the fixed bound on nesting depth. In practice the fixed bound only supports a tower of six or seven nested subjects. This could, of course, be technically solved the Foundation's way, by replacing the internals of the template with a Lua module which would be free to use either recursion or iteration, at the cost of abandoning some central principles of wiki philosophy; but there's another way. If we had, for each subject, a pre-computed list of all its ancestors, we wouldn't need recursion or iteration to simply file the book in all of them. So, for each subject keep a list of its ancestors tucked away in an annex to the subject page; and let the subject page check its list of ancestors, to make sure it's the same as what you get by merging its parents with its parents' lists of ancestors (which the parents, of course, are responsible for checking); and if the check fails, request human operator intervention to fix it — preferably, with a semi-automated assistant to help. If someone changes the subject hierarchy, recomputing various ancestor lists then needs human beings to sign off on the changes (which is perhaps just as well, since changing the subject hierarchy is a rather significant act that ought to draw some attention).
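To make the shape of this concrete (all names here are invented, and the actual Wikibooks machinery differs in its details), a subject page might keep its pre-computed ancestor list in an annex page, say Subject:Zoology/ancestors, containing something like

    <!-- hypothetical annex: pre-computed ancestors of Zoology -->
    * [[Subject:Biology]]
    * [[Subject:Natural sciences]]
    * [[Subject:Science]]

Filing a book under Zoology then just draws on that one flat list, touching every ancestor with no recursion or iteration at all; the only recursive-looking job left, checking that the list still agrees with the merge of the parents' lists, is the part handed off to human operators with the help of a semi-automated assistant.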
This dovetails nicely into a likely strategy for avoiding bots —fully automated maintenance-task agents, which mismatch the by and for the People wiki principle— by instead providing for each task a semi-automated assistant and a device for requesting operator intervention. So, what if we staggered all template processing into these sorts of semi-automated steps? This might also dovetail nicely with parameterizing the fundamental unit of modular wiki information, as a deferred template call is just this sort of parameterized modular unit.
There are some interesting puzzles to work out around the edges of this vision. Of particular interest to me is the template call mechanism. On the whole it works remarkably well for its native purpose; one might wonder, though, whether it's possible (and whether it's desirable, which does not go without saying) to handle non-template primitives more cleanly, and whether there is anything meaningful to be applied here from the macro/fexpr distinction. The practical experiment derives an important degree of agility from the fact that new "actions" can be written on the fly in JavaScript (not routinely, of course, but without the prohibitive political and bureaucratic hurdles of altering the wikimedia platform). Clearly there must be a line drawn somewhere between the province of wiki markup presented to the wiki community, and other languages used to implement the platform; but the philosophy I've described calls for expanding the wiki side of that boundary as far as possible, and if one is rethinking the basic structure of the wiki platform, that might be a good moment to pause and consider what might be done on that front.
As a whole, though, this prospect for an inherently interactive wiki platform seems to me remarkably coherent, sufficiently that I'm very tempted to thrash out the details and make it happen.