Conference
Preliminary report from IA Conference 2023, Chapter 7 of The Nature of Software coming soon, and Some Personal News™.
This is my third attempt in as many weeks to get this newsletter out the door. I tried (and apparently failed again) to write something short. I’ll reiterate that I don’t love e-mail newsletters as a format: I think they’re clunky and retrograde, with poor typesetting opportunities, and it’s way too easy to write way too much into what is ultimately a disposable, kinda degenerate medium. That said, I recognize that this is the way we do things now.
What I want to do is focus on writing hypertext, and reorient this newsletter—eventually—into more of a brief cover sheet for what’s going on, e.g. at my actual website.
This is analogous to what’s going on with my newsletter-to-book project, The Nature of Software, except there the newsletter format is much more clement to entire chapters going out in one chunk, because they’re emphatically not (at least inherently) hypertext. More on this in a moment.
It also doesn’t hurt that I sat on this newsletter because I have some exciting news to share. But first:
IA Conference 2023
The weekend that ended March and started April, I attended (for the purpose of speaking at) my first in-person conference since before the pandemic. It was also my first time leaving Canada—and one of only a handful of trips on an airplane—in the last three years.
The 2023 Information Architecture Conference (née Summit) was held in New Orleans, a place I hadn’t been since the last time the IA Summit was also there, eleven years prior.
Weirdly, the time before that—my first—was in the spring of 2001, so maybe I have an 11-year New Orleans cycle I didn’t know about.
The conference itself, if we include its spiritual predecessor, has been going since the turn of the millennium. It is the premier conference for information architecture—whatever that reduces to nowadays. Indeed, on the docket was a panel discussion about a perennial consternation: whether information architecture is worthy of a job title, or is merely a job to be done.
I missed it, because I was busy overhauling my own talk. My own position is that information architecture, to the extent that it is something solid enough to point at, has always been concerned with the structuring and organizing of information itself, as a substance in its own right. Information architecture, as I wrote many years ago, is about helping people understand their situations, and find the things they’re searching for.
If you want a structuralist definition, it’s semiotics plus topology. Historically, it’s a concept that transcends all media and technologies, a fusion of library science and actual-building architecture. Information architecture predated computer people, but was co-opted by them and then subsequently narrowed to refer to mundanities such as website navigation, or the hierarchies of e-commerce product categories.
The theme this year was change and resilience, and two topics that featured prominently in the program were artificial intelligence and structured content. The latter is something some of us have been banging on about for ages, while the former is of course the hot topic du jour. That said, the hot topic was astutely identified (in particular by Carrie Hane in her keynote) as an occasion to direct attention back to our perennial concern: structured data/content, after all, is what artificial intelligence produces (or is at least capable of producing), and what it consumes as training data.
My own talk
My own presentation leaned heavily toward the structured content side. As I mentioned last time, it was about a conceptual framework I developed many years ago for bucketing quaternary-sector work product in terms of its durability, or lack thereof. The thesis of what I have been calling the specificity gradient is that the most detailed work is also the most perishable, and the medium that is the most inherently detailed—and thus perishable—is code.
Lamentably, I was scheduled at the same time as an AI talk, so only eleven people attended. The bright side is that there was ample Q&A, which is always a sign that it was the right eleven people. I also had the disadvantage of being assigned the runt room on a different floor from the rest of the conference, and it’s a known phenomenon that putting a resource on another floor is like putting it an extra hundred metres away. Since my talk was in the afternoon on the last day of the conference, a chunk of my prospective audience had also already left for the airport. A number of people approached me at one point or another and said they wished they had attended my talk, including the AI guy I had been scheduled against. The organizers said it’ll be a few weeks to process the recordings; I’ll be sure to let you know when mine is out.
If you’re dying for video, you can watch the thumbnail version if you haven’t seen it already, or you can read the original write-up. If you’re really jonesing, you can read the script I read off my phone.
The Rest of the Conference
To tell you the truth it was a bit of a blur, and I missed some talks I really wanted to see, for one reason or another. Not only did I have a presentation to overhaul (thanks to Adam Polansky, and especially his daughter Nelle, for some invaluable insight) and a notorious weakness for the hallway track, but I was also not sleeping well the whole time. I’ll post some highlights when the organizers release the videos and I’ve had a chance to review them.
The Nature of Software Chapter 7: Local Symmetries
This/next week I’m writing chapter 7 of my newsletter-to-book, The Nature of Software. At this point I’m just shy of halfway through what I had originally scoped for this project: an intro chapter, 15 main chapters, and at least one chapter synthesizing the rest.
If you just got here and don’t know what I’m talking about, The Nature of Software is a book-length, serialized essay I’m writing as an attempt to reconcile the four-volume, 2500-page magnum opus of the recently deceased architect Christopher Alexander with the craft of software development. The Nature of Order defines a building process in terms of recursive, structure-preserving transformations, using a simple vocabulary of fifteen “fundamental” geometric properties. If you’re familiar with A Pattern Language—and its absolutely ravenous uptake by the software development community—The Nature of Order is what comes many years after design patterns, and, despite being many more pages to read, is ultimately much simpler and more elegant.
I have characterized this project offhand this way: my service to you is I give you a tenth of the reading to do, while also making it explicitly relevant to software.
In each of the main-body chapters of The Nature of Software, I take a look at one of the geometric properties (I’m going in order), summarizing it and abstracting it to its semiotic and/or topological essence, until I can say meaningful things about it from a software perspective. Once I get through all fifteen properties, I’m going to try to distill them into an appropriate number for software development (Alexander was adamant that the number 15 was incidental), and then examine them together in the context of what he called the Fundamental Differentiating Process. The goal is to tap an extraordinarily powerful method of making, designed in every way to create something, incrementally, that is harmonious with its environment.
Alexander referred to this as “healing the earth”. We in software may understand “incremental”, but “harmonious with the environment” could use some work.
This is a subscriber-only publication, for which I am charging USD $7 per month, or a more easily expense-able discounted annual rate of $70. It’s also limited-run; the plan is still to shut it down when I get to the end. That said, as of the most recent issue (Good Shape), I have put up an alternate website for The Nature of Software. Right now it’s just a static site where subscribers can go to read the text in a nicer layout. The plan, however, is to plump it out, so that subscribers can annotate, comment on, and discuss the text, as well as make use of some structured data products in their own projects. I can’t say exactly when that’s going to happen—though it’ll definitely happen incrementally—but I can say that the.natureof.software
is going to be one of the first beneficiaries of the engine I describe below.
I should also add, as always, that the introduction is unpaywalled and free to read.
Some Personal News™
I have been selected as one of the twelve core researchers in the Ethereum Foundation’s Summer of Protocols program, to which I was (informally) invited to apply some weeks prior. This is a tremendous opportunity, and I am grateful to the organizers for letting me be a part of it.
Interestingly, SoP is not about Ethereum or even about cryptocurrency per se; rather it’s an attempt by the foundation to develop language around the societal benefits of protocols—in contrast to, say, apps or platforms. In fact, the organizers (led by Venkatesh Rao) were very clear that they want to think in terms of the broadest possible conceptualization of the term “protocol”—bigger than computer networks, bigger than “tech” itself. There are people who are going to be philosophizing about protocols, writing fiction about protocols. In fact, as I understand it, I’m going to be the only one of the twelve spending a significant amount of my time writing code.
My Project
What I’ll be working on over the summer is something you’ve all already seen, because I’ve actually been working on it for years. (I wrote about it last time, before I had even applied; I was going to work on this anyway.) The Summer of Protocols is an opportunity to take something that has been perpetually a few weeks from completion, and use the time and resources to wrap it up and put a bow on it. Moreover, it helps me frame it in a protocol-y context, which is what it always was.
I burned two weeks and two drafts already, trying to get a tight summary of the big-picture motivation for this project, before ultimately giving up (we’re workshopping these next week anyway). So you’ll have to settle for the small picture.
There is a certain category of tool—barely a “tool”, really; more like a “hypermedia environment”—that is a thin membrane around a constellation of structured data. It doesn’t do much besides display said structure, and afford adding to it, and navigating throughout. There may be, in addition, some kind of aggregated view, or other kinds of alternate representations, and ways to manipulate those. PKM tools (personal knowledge management, “tools for thought”, “second brain”, etc.) fit into this category. Add collaboration functionality (read: networked, with access control saying who can see or edit what) and some kind of event queue/task scheduling (that is, tasks for the computer to do), and you net most groupware too.
Add a way to chat, and you net most social networks.
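To make the category concrete, here is a purely illustrative sketch of that “thin membrane” over a constellation of structured data: a tiny store that does little besides add statements, display a node, and navigate one hop outward. Every name in it is hypothetical—this is not any real tool’s API, just the shape of the category.

```python
# Illustrative sketch only: a minimal "hypermedia environment" core.
# It stores subject -> predicate -> objects statements, and affords
# adding to the structure, displaying it, and navigating through it.
from collections import defaultdict

class GraphStore:
    def __init__(self):
        # subject -> predicate -> set of objects
        self._edges = defaultdict(lambda: defaultdict(set))

    def add(self, subject, predicate, obj):
        """Add one statement to the constellation."""
        self._edges[subject][predicate].add(obj)

    def neighbors(self, subject):
        """Navigate: everything reachable one hop from `subject`."""
        return {p: sorted(objs) for p, objs in self._edges[subject].items()}

    def display(self, subject):
        """Display: render a node and its outgoing links as plain text."""
        lines = [subject]
        for predicate, objs in self.neighbors(subject).items():
            for obj in objs:
                lines.append(f"  {predicate} -> {obj}")
        return "\n".join(lines)

store = GraphStore()
store.add("note:1", "title", "Meeting notes")
store.add("note:1", "links-to", "note:2")
print(store.display("note:1"))
```

Everything else in the category—aggregate views, access control, task queues, chat—would be layered over a core like this without changing its shape.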
The subset within this category that I want to focus on is niche tools for professionals. A lot of them I would characterize as “just beyond the reach of a spreadsheet”. The problem with these tools is this: if you can make one yourself, you can’t really justify the effort without diving headlong into the tool-making business. If you can’t—which is more likely—you have to wait for somebody who is in the tool-making business to come along and make “an app for that”.
The fashion right now, and for the foreseeable future, is to put these apps in the Cloud™, host the content, and charge a subscription fee for access. Such an arrangement comes with all the typical baggage:
What if there’s an outage?
What if there’s a catastrophic storage failure?
What if they go bankrupt?
What if they kill the product?
What if they kill an essential feature or capability?
What if they hike the price?
What if they change the deal in some other onerous way?
What if they leak and/or sell your confidential information?
What if they get hacked?
What if they do something particularly bad that makes you look bad by association?
What if they get acquired, and the buyer does any of this?
Most of these questions hover around “if the umbilical cord to this company is severed, or otherwise needs to be severed in a hurry, how screwed am I?” (The remaining questions have to do with unwanted third parties accessing your information.) But we could imagine a situation where the relationship is great, the terms are fair, the company is stable and secure, and yet: you still need something done that they don’t do.
SaaS companies, especially the smaller ones, earn a lot of their revenue doing custom functionality for rich customers and then turn around and sell what they just made to other customers. Like that was a surprise; you’d be stupid not to.
The question here is “what’s it gonna take to get the thing done?” We’re in the same position as before. Assume that even operating in the best of faith, what the vendor is going to charge you to prioritize your thing will be astronomical, and even then they’ll be unable to deliver in time to be useful. What’s it gonna take to get it done without them involved?
The answer to this question is contingent on the extent to which you can access your data. You’ll note it reduces to the same problem as above. Whether it’s a simple export function, or a full-fledged API, the real (first) question is always about coverage: can you get 100% of your data out? The next important question is, if you can get your data out intact (big if), can you (where “you” in this case is the actual person who is ultimately going to do something useful with the data) understand what it means?
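As a purely hypothetical sketch of the coverage question: if a vendor exposes both a listing mechanism and an export, you can at least diff the record identifiers to see whether the export is complete. The function and the data shapes here are assumptions for illustration, not any real vendor’s API.

```python
# Hypothetical sketch: does an export cover 100% of the records the
# app itself reports? `ids_from_api` and `ids_from_export` stand in
# for whatever listing/export mechanism a real vendor offers.
def coverage_report(ids_from_api, ids_from_export):
    api, export = set(ids_from_api), set(ids_from_export)
    missing = api - export   # in the app, but absent from the export
    extra = export - api     # in the export, but unknown to the app
    pct = 100.0 * len(api & export) / len(api) if api else 100.0
    return {"coverage_pct": pct,
            "missing": sorted(missing),
            "extra": sorted(extra)}

report = coverage_report(["r1", "r2", "r3"], ["r1", "r3"])
# Two of three records made it out; r2 is stuck behind the membrane.
```

Of course, this only answers the first question (coverage); the second—whether you can understand what the exported data means—is a vocabulary problem, not a counting problem.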
My Summer of Protocols project, then, is to consider how this class of tools—again, hypermedia environments—might be made in a way that centres the portability of the informational content they operate over. Not only is the current state of affairs a serious source of rent-seeking, it’s an intolerable bottleneck for ordinary people just trying to get things done. The project also reflects my own long-standing disposition: I trust an entity more when trusting it is optional. Protocols over platforms is the mantra, after all, though I’m not inherently anti-platform. I just believe companies should compete on things like user experience and customer service—and even price—not on taxing serfs and sharecroppers. I actually believe the latter is a precarious position that depends on your customer base remaining ignorant, and it makes good business sense to get out in front of that.
The plan goes roughly like this; with the exception of the first “step”, I anticipate doing everything concurrently:
Make an engine: From a purely technical perspective, I am confident (through 25 years of experience) that a select handful of carefully-considered interventions can eliminate a great deal of the overhead involved in creating these kinds of tools. The first job is therefore to bundle these interventions up into a single executable that can be used both to bootstrap the rest of the project, and as a reference implementation to be copied to other systems. I have written about this part before, and it is actually quite far along.
Rough in some tools: In developing this specific capability for over a decade now, I have wasted a great deal of that time trying to get people interested by just telling them about it, when I should have been showing them. It turns out that this business is impossible to explain to somebody who doesn’t already have some first-hand experience. I call it a zero-hand-waving policy: you can’t get any traction without a prototype, and it costs about as much to fake one up as it does to just make the real thing, so stop screwing around and just make the real thing. Again, this is already quite far along. The goal here is to have something (I’ll be focusing mainly on analysis, planning, and design tools) that people will recognize as useful to their work.
Do outreach: I am inclined to use my relationships under the user experience design umbrella (information architecture, content strategy…) to solicit interest in, and feedback on, protocol-oriented tooling, mainly because those people are primed to understand the value of the approach and are capable of articulating it to the greater public. I am also keenly interested in approaching a broader subset of both academics (too many species to list here) and professionals (particularly lawyers and architects). I am also interested in engaging public policy watchdogs and think tanks.
Write it up: In addition to the engine itself and the data vocabularies that it animates, I’m going to be treating this project like something I’d do for a client. This means I’ll also be writing:
Design rationale and implementation notes for the engine,
An implementation guide for the vocabularies,
A weekly log/journal of the SoP process, as well as
An overarching narrative arguing for the approach and envisioning its potential trajectory.
I have a million more things I can say about this, but this newsletter is late enough as it is. Part of the deal is I publish contemporaneous updates, as well as work in public whenever it’s appropriate—in all its unfiltered, cantankerous glory:
Take your pick of Twitch or YouTube; I stream on both at once.
This also means that from this May 1 to August 31, I will not be able to take any consulting work beyond what I’ve already agreed to do. If, however, you’re looking for somebody to help you with policy or protocol-related things starting in September, please give me a holler.
That’s it for now; the next post will be from the inside.