Confluence
Big update of many threads coming together: my upcoming Information Architecture Conference talk, a new chapter of The Nature of Software, and what's been eating my brain for the last several weeks.
Stone Tools
Around the turn of this year, I started thinking a lot more seriously about positioning myself as a “hypermedia-first” practitioner, to the extent that I craft my work product principally on the Web, and then project it into more conventional documents. My own client work has more or less fully migrated over to this mode of operation, but I felt the pinch even more acutely last month when helping a client prepare a grant proposal. My role in this engagement was mainly advisory—not for the grant process itself; that was somebody else’s job—along with picking up some of the overflow in preparing materials.
The submission requirements were decidedly paper-adjacent. Many of the materials had hard word and page limits, and crossing them, we feared, would result in summary disqualification. The experience was like playing a game of Operation—don’t trip the buzzer by letting the Microsoft Word layout algorithm kick the last line of text onto a new page, and over the limit.
To complicate matters, the grant consultant worked by shipping Word documents back and forth via e-mail, with Track Changes enabled. My client was also sharing files with Dropbox, drawing up the budget in Excel, storing bibliographic references in Zotero, and gluing it all together by hand. It’s 2023 and the amount of manual labour still involved in getting a few kilobytes of data from one spot to another is truly astonishing.
This is not to chide anybody for their methods, or suggest anybody change how they work. In fact, I am a staunch believer that you should be able to work any way you want to, and it’s the computers that should adapt to you. Rather, what was striking was the disconnectedness within and between the pieces of work product. And this is not to say I haven’t worked on projects before that used the exact same materials and methods, but it is perhaps the first such project since I really started to squint at the situation.
One major contributor to the contrast, and what really caused me to notice it, was that this client has for several months been using my decade-old prototype structured argumentation tool to construct rationale for various endeavours, gather evidence for his positions, and reason out an appropriate course of action. A garden-variety business document, by and large, is just a projection of this kind of network structure, set in whatever sequence is rhetorically most effective.
When we’re relegated to working with documents—“leaf nodes” in process modelling parlance—the tooling emphasizes the sequence of arguments rather than their content, and when conventional WYSIWYG interfaces like word processors are involved, it’s the typesetting (such as it is) that’s front and centre. Any deeper structure has to be maintained in your head.
This can actually be read as a justification for the grant application consultant to expect to work directly with the documents that will ultimately be shipped, since they are looking at the collected material as a unitary piece of rhetoric, and making sure all the boxes are ticked. That is, by the time they get their hands on it, the substantive content of the application is assumed to be in order. They don’t need to know or care how the sausage is made. What’s unfortunate about this arrangement, though, is that we have to put a considerable amount of manual effort into accommodating them.
While we don’t have to touch the argument that people shouldn’t be expected to know how to use esoteric tools, we can remark on how strange it is that these tools are esoteric in the first place. Contrast a product like Microsoft Word with something like InDesign. The latter, meant for professional book and magazine layouts—versus mere “office productivity”—has had, since its inception, a sophisticated style editor. It also has, in addition to a much more expansive WYSIWYG interface than any word processor dares provide, a no-nonsense text view where you can just write, and when you’re done, typeset your content in one shot. Why this mode of operation was made available for typesetting professionals but not for the rest of us is a bit of a mystery, since (in my experience) it’s easier to implement than ad-hoc formatting. WYSIWYG user interfaces are extraordinarily expensive to develop, and deriving rules after the fact from the structures they create is much harder than starting with rules and adding exceptions to them.
An apt characterization of working hypermedia-first is that it is squarely “post-file”, to the extent that files (qua files) are no longer the authoritative source of information. Rather, files (and file-like objects) are merely an interface into and out of some unspecified (unless you’re the one doing the specifying) networked database. Google Docs functions like this to some extent, but its content model is remarkably simplistic. A Google Doc, while affording easy sharing, collaboration, and change management, has even less structure than Word: no document semantics past basic typographical formatting.
Google Docs also breaks the rule of not requiring people to change how they work—unless Google Docs is already where they do their work.
I would be very interested in doing some experiments that treat files (like Word documents) not as files but as messages—protocol data units. Consider the Word document that comes in as an e-mail attachment from what some may consider a technological hold-out. What if the changes it represents were automatically ingested into a hypermedia network? What about a special pseudo-file system on your side, that generates the flattened representation on the fly, ready to attach to outgoing messages?
I am keenly interested in what I will call “subversive shimming”—little patches of infrastructure software that span the gaps between proprietary platforms. A FUSE interface to Dropbox is one such example, of which there appear to be plenty of implementations. FUSE itself is but one of a few possible ways (WebDAV being another) to ease the transition to a post-file ecosystem, by creating a filesystem-like interface on top of whatever cockamamie system you want.
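To make that pseudo-file-system idea a little more concrete, here is a minimal, read-only sketch using Python and the fusepy bindings. Everything in it is hypothetical: the flatten() function stands in for whatever projects the hypermedia network into a conventional document, and the file name and mount point are made up for illustration.

    # A toy read-only filesystem exposing a single, freshly "flattened" document.
    # Assumes the fusepy package (pip install fusepy) and a Unix-like OS.
    import errno
    import stat
    import sys
    import time
    from fuse import FUSE, FuseOSError, Operations

    def flatten():
        # Stand-in for projecting the hypermedia network into a flat document.
        return b"(flattened representation of the network goes here)\n"

    class Flattener(Operations):
        DOC = '/proposal.txt'  # hypothetical file name

        def getattr(self, path, fh=None):
            now = time.time()
            if path == '/':
                return dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2,
                            st_ctime=now, st_mtime=now, st_atime=now)
            if path == self.DOC:
                return dict(st_mode=stat.S_IFREG | 0o444, st_nlink=1,
                            st_size=len(flatten()),
                            st_ctime=now, st_mtime=now, st_atime=now)
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return ['.', '..', self.DOC.lstrip('/')]

        def read(self, path, size, offset, fh):
            # Regenerate the document on every read, so it is always current.
            return flatten()[offset:offset + size]

    if __name__ == '__main__':
        # Usage: python flattener.py /some/mount/point
        FUSE(Flattener(), sys.argv[1], nothreads=True, foreground=True)

Mount something like that next to your mail client and the “file” you attach is generated the moment you reach for it.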
This intersects with my upcoming conference talk
I will be speaking at the Information Architecture Conference in New Orleans which runs from the 28th of March to April Fool’s—my slot being that afternoon. I have so far resisted including anything too jokey in the deck, but in the two weeks between now and my conference, I may change my mind.
This will be my third time speaking at IAC—if you count the one I did at its predecessor, the IA Summit (the last of them, if I recall correctly), as well as my prerecorded talk in the middle of lockdown 2020.
This year I will be expounding upon The Specificity Gradient, a conceptual framework I came up with some time ago (I want to say 2009 or 2010) but didn’t formally articulate until a decade later, when I randomly went to the blackboard one day and dashed off a video about it. I’ve written about it in this newsletter as well.
The idea behind the Specificity Gradient—patterned after the Pace Layers framework popularized✱ by Stewart Brand—is that different categories of concerns within a domain of inquiry change at different rates. For Brand, it was ultimately a model for civilizational change. My focus, as usual, is the process of creating software.
✱ The original conceptual framework was of course called Shearing Layers, and it was created by the architect Frank Duffy to contemplate the different temporal aspects of a building. Brand talked Duffy’s model up in his 1990s book and subsequent documentary, How Buildings Learn, before coming up with his own formulation.
My contribution with the Specificity Gradient is to emphasize that the decreasing durability (and increasing perishability) of concerns goes hand in hand with increasing detail. The category of artifact in software development with the most detail (and thus the most perishable) is the code itself. Despite this, code is treated as sacrosanct. The argument underpinning The Specificity Gradient is that in the race to get to running code (and therefore a product we can sell), we skip over representations of processes and conceptual structures that are coarser-grained, in spite of these representations being much more durable than any piece of code is apt to be. The programming process itself is ironically frustrated by skipping these intermediate steps, because everything has to be worked out at the highest possible level of detail. The knowledge gained along the way is encoded in the program itself, legible only to programmers—the smallest subset of stakeholders—if it is even meaningfully captured at all.
What this has to do with documents can be explained with a food metaphor: In your kitchen, you are likely to have a pantry, a freezer, and a fridge. In the pantry there may be things like dried grains, beans, and pasta. These will stay usable for years. Same goes for food in cans and jars, though they go bad quickly once you open them. Food in the freezer might stay good for about six months—assuming no power outages—before getting freezer burn. On and on we can rank ingredients, right down to the leafy vegetables, or the raw meat and fish, that may last a few days in the fridge and will go bad in hours, if not minutes, otherwise.
Preparing a document is like preparing a meal: we assemble all the ingredients—the substantive content—process them somehow, and then try to plate them in an appetizing way. A document is like a meal in another way too, in that a prepared meal is (not always, but often) even more perishable than its most perishable individual ingredient. With some notable exceptions (Indian cuisine really has this one on lock, as do soups and stews generally), you wouldn’t want to consume it the next day, let alone the next week.
I don’t know what your leftover situation is, but in my experience most of the time the big selling point is they can be nuked quickly. Anyway, my copy of The Professional Chef insists that any hot food not served right away be immediately chilled in an ice bath before being put in the fridge. (I will confess that I generally let things cool on their own, which would probably flunk me for the titular role.)
Oh, and if it isn’t clear, source code absolutely qualifies as “a document”.
With the Specificity Gradient I’m ultimately asking: what if we organized our collective knowledge the way we organize the ingredients in our kitchens? With respect to documents, we can take advantage of something you can’t do with a meal: reach into it and freshen up an ingredient or two. Or rather, it’s the other way around: it’s the document that reaches out and gets the freshest content.
This is something you unfortunately can’t pull off in a document-centric regime. Conventional documents have to be demoted in favour of structured hypermedia. The good news is that professional tooling seems to be moving online, which is the right general direction. The meh news is that most of these companies don’t seem to have this fine-grained interoperability on their radar just yet.
The real goal, though, is that the durable material sticks around in such a way that it actually gets reused, and companies start to accumulate it as an asset. As I wrote almost a decade ago (and subsequently articulated in that video), the Specificity Gradient goes like this, with the arrows representing hyperlinks between the strata:
Business goals → user goals,
user goals → user tasks,
user tasks → system tasks,
system tasks → system behaviours,
system behaviours → executable code.
As I likewise mention elsewhere, strategic business objectives are pretty stable. Products—unless you’re Google—tend to stick around for decades. This means artifacts like personas are going to be equally durable. When you get down past user goals to user tasks and start talking about specific ways of doing things, that’s when things start to speed up. We went from three decades of desktop-only computing to iPhone, tablet, watch, voice assistant, and VR goggles in the 16 years that followed. That said, an abstract process to achieve a user’s goal only needs to be differentiated for each of the channels through which it is carried out (to the extent that a given channel is even appropriate at all), up to and including sharing a chunk of the same code.
And all this is just UX stuff. Contributions from information architecture and content strategy, usually of a more declarative modality, likewise lie at different points on the gradient. A taxonomy or audience model, for instance, will far outlast any individual navigation bar or content audit.
My final remark on this topic is that at least as important as the durability—and therefore reusability—of any of these assets is the level of detail. Specifically, don’t trouble important decision-makers with too much of it. This, to the best of my knowledge, is something a practitioner has to learn how to intuit. What I’m suggesting is that with a sufficiently addressable knowledge infrastructure, you could compute it. Details below a particular executive’s threshold of interest could likewise be summed up to indicate how much work there is to do for any given concern.
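Here is a toy sketch of what “computing it” could look like, assuming every node in the knowledge base is tagged with its stratum on the gradient, and links point from coarser concerns to the finer ones that realize them. All of the node names, strata assignments, and the threshold value are invented for illustration.

    # Strata from the gradient, coarsest (most durable) to finest (most perishable).
    STRATA = ['business goal', 'user goal', 'user task',
              'system task', 'system behaviour', 'code']

    # A hypothetical fragment of the knowledge graph: each node carries its
    # stratum and links "downhill" to the finer-grained nodes that realize it.
    NODES = {
        'increase-retention': dict(stratum='business goal', links=['renew-easily']),
        'renew-easily':       dict(stratum='user goal',     links=['renew-subscription']),
        'renew-subscription': dict(stratum='user task',     links=['charge-card', 'send-receipt']),
        'charge-card':        dict(stratum='system task',   links=['retry-on-decline']),
        'send-receipt':       dict(stratum='system task',   links=[]),
        'retry-on-decline':   dict(stratum='system behaviour', links=[]),
    }

    def descendants(name):
        """Count a node plus everything finer-grained hanging off of it."""
        return 1 + sum(descendants(child) for child in NODES[name]['links'])

    def report(root, threshold):
        """Show nodes at or above the audience's threshold; roll up the rest."""
        cutoff = STRATA.index(threshold)
        def walk(name, depth=0):
            node = NODES[name]
            if STRATA.index(node['stratum']) <= cutoff:
                print('  ' * depth + f"{name} ({node['stratum']})")
                for child in node['links']:
                    walk(child, depth + 1)
            else:
                print('  ' * depth + f"...{descendants(name)} finer-grained item(s)")
        walk(root)

    # An executive who only cares down to the level of user tasks:
    report('increase-retention', threshold='user task')

The same traversal, pointed the other way, is what would let a document reach out and pull in the freshest version of whatever it references.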
So, for those of you who aren’t going to be in New Orleans that weekend—which I assume is most of you—that is what I’ll be talking about.
Chapter 6 of The Nature of Software, Good Shape, is away
This is the first newsletter on my main channel that I’ve sent since shipping the latest chapter of The Nature of Software, my limited-run, subscriber-exclusive, serialized essay, connecting the architect Christopher Alexander’s 2500-page magnum opus, The Nature of Order, to the craft of software development. The latest instalment, Good Shape, is chapter 6 of a 15-part main body of what will eventually be an 18-or-so chapter book.
If you’ve heard of Christopher Alexander, then you’ve probably heard of A Pattern Language and the Design Patterns movement in the software development community that was inspired by it. The original pattern language—to dry it out completely—comprised 253 recipes, each for solving a particular architectural problem in a given context. Alexander and his colleagues completed that body of work in the 1970s. The Nature of Order, by contrast, represents Alexander’s later work and thinking: a model consisting of 15 structure-preserving transformations, intended to be carried out recursively in a loop he called “The Fundamental Differentiating Process”. It is a much simpler and more elegant approach than patterns—he even said as much himself—which is why I’m trying to port it to software development.
The other bit of news about this is that, to be consistent with my commitment to “hypermedia-first”, I set up a mirror of the archive at the.natureof.software. At the moment it’s the same as what’s on Buttondown (the newsletter provider) save for a little nicer typesetting. The plan, though, is to create a place for subscribers to annotate and discuss the work, as well as provide structured data that can be reused in other applications.
The side effect is eating my own dog food vis-à-vis the Specificity Gradient.
In contrast to this newsletter, The Nature of Software is exclusive to paid subscribers. Subscriptions go for USD $7 per month, or a discounted and much more easily expense-able $70 per year, which includes access to the.natureof.software. The introductory chapter is also free to read. Join the club and I’ll see you there.
From breadboard to prototyping platform
If I’m going to make an online community, I’m going to need something to power it. Starting sometime in 2018, I began putting together a “content management meta-system”—really just a haphazard implementation of what I would characterize as “good ideas about content management”. At the top of the list was a durable addressing mechanism, with the ultimate goal of eliminating the 404. It’s a pretty simple principle: all you have to do is give everything a stable identifier, and then overlay the human-friendly addresses on top. Then you just have to remember, every time you change one, to propagate that change to all the places you reference it. Technically it really is pretty simple; the conundrum is the design problem of precisely how you do that.
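A minimal sketch of that principle, with every identifier and path invented for illustration: each resource gets a permanent identifier, the human-friendly addresses are an overlay that is free to change, and any retired address keeps resolving to wherever the resource lives now.

    import uuid

    canonical = {}  # permanent UUID      -> resource (here just a title)
    overlay   = {}  # human-friendly path -> UUID (the current address)
    retired   = {}  # former path         -> UUID (every address the path ever had)

    def publish(title, path):
        uid = uuid.uuid4()
        canonical[uid] = title
        overlay[path] = uid
        return uid

    def move(old_path, new_path):
        uid = overlay.pop(old_path)
        retired[old_path] = uid   # the old address keeps working...
        overlay[new_path] = uid   # ...by pointing at the same identifier
        return uid

    def resolve(path):
        if path in overlay:
            return 200, canonical[overlay[path]]
        if path in retired:
            # Find wherever the resource lives now and redirect, never 404.
            current = next(p for p, u in overlay.items() if u == retired[path])
            return 301, current
        return 404, None

    publish('Specificity Gradient talk', '/talks/iac-2023')
    move('/talks/iac-2023', '/talks/specificity-gradient')
    print(resolve('/talks/iac-2023'))  # (301, '/talks/specificity-gradient')

The bookkeeping is the easy part; deciding where those identifiers live, who mints them, and how the changes propagate is the design problem.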
I use what I call the Swiss Army Knife to generate my (static) website, which has maintained every single URL it has ever exposed since I set up that incarnation of it in 2008.
I did this by hand for ten years before creating the Swiss Army Knife; needless to say the task is now much easier.
Durable hyperlinks are mere table stakes though. Also of great importance is the ability to represent structured data in a number of useful ways, along with the ability to apply a vocabulary of transformation functions—basic operations like cropping and resizing images, for example. Lastly, I’ll need something to get data into the website, using the website.
I actually have all the pieces—including that last one (it was actually one of the first). It’s just a huge mess in there.
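For what it’s worth, the “vocabulary of transformation functions” can be as humble as a registry of named operations that get applied to a resource on its way out the door. Here is a toy sketch using the Pillow imaging library; the transform names and the chaining are made up for illustration.

    # A toy registry of named transforms, applied to an image on the way out.
    # Assumes the Pillow imaging library (pip install Pillow).
    from PIL import Image

    TRANSFORMS = {}

    def transform(name):
        def register(fn):
            TRANSFORMS[name] = fn
            return fn
        return register

    @transform('crop-square')
    def crop_square(img):
        side = min(img.size)
        return img.crop((0, 0, side, side))

    @transform('thumbnail-200')
    def thumbnail_200(img):
        out = img.copy()
        out.thumbnail((200, 200))
        return out

    def apply_chain(img, names):
        """Apply a sequence of named transforms, e.g. parsed off a request URL."""
        for name in names:
            img = TRANSFORMS[name](img)
        return img

    # Hypothetical address: /images/boat.jpg;crop-square;thumbnail-200
    # thumb = apply_chain(Image.open('boat.jpg'), ['crop-square', 'thumbnail-200'])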
This bill of changes entails graduating the Swiss Army Knife from something I alone run in a development console to an actual open-source product, with a user interface, tests, and documentation. The goal is to make it something that will be run live on a Web server; not quite a framework or CMS, but what I’ll call for now a “prototyping platform”.
The road (or more like bushwhack) ahead for the Swiss Army Knife—which I will probably rename when the inspiration to do so finally hits—involves disentangling five years of hairball that couples it to its role of convenience as a static website generator, so it can re-emerge as a dynamic prototyping platform. This I will be carrying out on stream when I can:
on Twitch: twitch.tv/methodandstructure
and YouTube: youtube.com/@methodandstructure
I stream on both platforms at once, since I hacked together a sneaky stream multiplexer. So take your pick. I’m streaming pretty randomly at the moment but I’m trying it out here and there before settling on a regular time slot. Thursday evenings (EDT) maybe? If you’re interested, leave your preference in a comment.