An update on what I've been up to this summer, plus an essay on the public's understanding of digital technology.
The Nature Of Software Proceeds
I am three chapters plus an introduction into The Nature of Software, and as predicted this is shaping up to be a proper book. Since I have racked up several hundred new subscribers here since I last sent one of these newsletters, I should probably explain that The Nature of Software is my attempt to reconcile the later work of the recently-deceased architect Christopher Alexander (of “pattern” fame) with the craft of software development. If you’re in the software industry, you almost certainly know about patterns, but what you may not know is that he actually renounced patterns in 1996 because he said he had something better. That better thing is a twelve-pound, four-volume, 2,500-page monster called The Nature of Order. The service I’m offering is to interpret it in a way that is useful to software folks, while shortening the reading by an order of magnitude.
This is a limited-run, subscriber-only offering, with over a hundred people already reading it. I’m asking USD $7 a month, for which I’m aiming to deliver around two chapters a month. If that’s your thing, head on over and subscribe, or read the introduction for free.
This Thing Is Finally Happening
Demo of the IBIS tool before I changed the visualization.
A reader has approached me to dust off my old IBIS tool so he can use it in a project. An issue-based information system (IBIS) is a form of structured argumentation invented in the 1960s by the design theorist Horst Rittel and others, for the purpose of tackling what he termed “wicked problems”. I became interested in IBIS indirectly, by way of an offhand comment Douglas Engelbart made at a salon event at Google in 2007. It mapped onto something I was thinking about at the time: Christopher Alexander’s doctoral thesis, Notes on the Synthesis of Form, which is likewise about the meta-problem of breaking down complex problems into a set of simpler problems you can solve.
Worth noting that Alexander and Rittel were colleagues at Berkeley, as well as core members of the Design Methods movement (though Alexander would later distance himself from it).
There are three kinds of entities in IBIS:
Issues: states of affairs in the world that we want to do something about (or otherwise immovable objects we have to steer around).
Positions: what to do about a certain issue.
Arguments: why we should (or should not) adopt a certain position.
IBIS entities are connected by a fixed vocabulary of relations like suggests, supports, and responds to, such that issues, positions, and arguments can generate new issues, positions, and arguments, and very quickly you get the characteristic hairball of a semantic network. What’s special about this kind of structure is that you can do math to it to find the natural joints at which to cut the hairball into smaller, more manageable hairballs, and with that you have the germ of an extraordinarily powerful resource planning tool.
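To make the “do math to it” remark concrete, here is a minimal sketch. The entries and relation names are hypothetical, invented for illustration, and the math shown is the simplest possible joint-finding: connected components of the undirected graph. (A real tool would use something sharper, like minimum cuts or community detection, but the principle is the same.)

```python
from collections import defaultdict

# Hypothetical IBIS entries as (source, relation, target) triples.
# All node and relation names here are illustrative only.
triples = [
    ("issue:slow-page-loads", "suggests", "position:add-a-cache"),
    ("position:add-a-cache", "supported-by", "argument:cheap-to-deploy"),
    ("position:add-a-cache", "responds-to", "issue:slow-page-loads"),
    ("issue:onboarding-confusion", "suggests", "position:rewrite-docs"),
    ("position:rewrite-docs", "opposed-by", "argument:nobody-reads-docs"),
]

def components(triples):
    """Union-find over the undirected hairball: each connected
    component is a smaller, more manageable hairball to plan around."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for s, _, t in triples:
        union(s, t)

    groups = defaultdict(set)
    for node in parent:
        groups[find(node)].add(node)
    return list(groups.values())

for group in components(triples):
    print(sorted(group))
```

Running this splits the five entries into two independent clusters, each of which can be worked on (or assigned, or scheduled) without touching the other.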
One particularly valuable role IBIS plays is in forensics, because recording why a particular action was taken at a particular instant, for future reporting to stakeholders, costs about as much effort as sending a tweet.
Rittel implemented his system on index cards, though digitizing it is far from original: there have been several versions of IBIS software, dating back to 1988. My contribution centres around an exchangeable data vocabulary, which I initially penned sometime in 2012. I wrote a prototype Web application a year later and have been sporadically tinkering with it ever since.
Something that must be understood is that the purpose of the prototype was not to implement IBIS per se, but rather to act as a laboratory for a number of techniques for writing Web apps based on linked data. I needed a completed vocabulary to design the app around, and IBIS was it. It was only once I started writing entries into it that I noticed it was useful, and I have been using it in my projects ever since—although nobody explicitly asked for it until just recently.
(This may have something to do with the increase in interest in “tools for thinking”, which this thing unambiguously is, albeit quite unlike the personal knowledge management tools out there.)
The first order of business is to patch up the prototype so it’s less annoying to use in the near term, which is coming along nicely. Then, it gets its much-deserved rewrite.
Now, there are reasons why this technique hasn’t taken the world by storm, and only some of them can be fixed by better software design. (Fully-exportable data is one of them, which is why I started with it.) IBIS is a precision instrument: the language in the entries has to be written just-so, or the rigour of the system won’t be any help. This requires a non-zero amount of training, so I am lukewarm about IBIS ever becoming a mass-market software product (though that is a problem I would like to have). So in the near term I will be offering (or rather, continuing to offer) IBIS as part of my consulting service. The role I imagine for myself with the person who asked about the tool is roughly project planning and forensic support, eventually tapering off into an ad-hoc SaaS agreement. If you have a project that could use some help planning and organizing this way, I can definitely make time for you.
Software’s Big Rhetorical Problem
I continue to eagerly monitor Bianca Wylie’s (and others’) efforts to raise awareness about the mandatory status of ArriveCAN. This is the app that you must have on your phone if you arrive into Canada from another country. It was introduced earlier in the pandemic, putatively to collect COVID vaccination status information, but the story has since changed to be one about “modernizing” border crossings: an object lesson in computer-mediated bait and switch.
The proximate problem with ArriveCAN is that the app doesn’t do anything a website couldn’t (indeed there is a website for people who for whatever reason can’t use the app), but an app on your phone, specifically, is an open door for the publisher of that app to update it later on to do whatever they want. So you not only have to trust them now, but forever, or at least as long as the app is on your phone (and potentially afterward). This is the case for any app publisher, but when that publisher is a government who makes it the law that you must run this software, I hope I don’t have to explain how much higher the stakes are.
The bog-standard rejoinder to potential surveillance is of course “I have nothing to hide, so why should I worry?” but to me the hazard isn’t Nineteen Eighty-Four so much as it is Brazil: allowing functionaries to systematize their processes means you incur the harm of getting caught in the gears—whether you deserve it or not.
Oh, as for “nothing to hide”, I like how the information security and privacy advocate Bruce Schneier responds on camera when interviewers ask him that: “Oh really? Then you won’t mind telling me your salary.”
Wylie’s lament is that we (not just Canadians, but virtually everywhere) as a public are not equipped to comprehend the consequences of this sleight of hand, and are thus incapable of organizing any meaningful response. I agree; this is a matter that keeps me up at night. There is something about computers such that invoking them in discourse, public or otherwise, switches people’s brains off, when they would otherwise be perfectly capable of understanding the underlying political dynamics. Explain a shady government or business practice as if it happened a hundred years ago and everybody gets it. Involve computers (and attendant digital information technology) and watch the IQ points evaporate from the room.
The so-called “tech” industry has done a masterful job of characterizing what it does as being arcane and mystical while simultaneously nerdy and boring. I don’t believe this situation was especially deliberately architected—it is arcane and nerdy, after all—but it’s convenient for the people at the helm.
There is also a noticeable class component to being deliberately incurious about computer-related matters. I recall a meeting a few years ago I was in with the president of a quasi-governmental institution. Our team was there on an information architecture project and this person was palpably pissed that they were speaking to us—they ranted about the fonts on the website while we were there to talk about communication strategy. This person just could not get past the notion that since our work product involves computers, we must be the help. It’s the response one would expect from a king finding himself booked into a conference with the palace janitor.
As my work moves more into the realm of thought, language, and conceptual structures, I think more and more about these situations and what to do about them. The central problem, I believe, is that rigour in thought and language is what computers are all about. Your median powerful person, by contrast, moves through the world relying mainly on carrots and sticks—for most matters, they simply don’t have to think that hard.
One place powerful people do think hard is around maintaining and increasing power, and all the attendant political dynamics around that. Read Norbert Wiener’s remarks around “the Augustinian versus Manichaean devils” in The Human Use of Human Beings for a concise framing of this dichotomy.
The (now antiquated) programming language COBOL was invented for the purpose of being easily supervised. The idea was that if a computer language is close enough to ordinary English, a manager can read, understand, and critique it. This didn’t pan out, because being a manager is fundamentally about getting other people to do things. SQL is the same story: make a language that even the suits can learn, so they can obtain their own reports. The only problem is, suits don’t learn, they delegate. Why learn how anything works? If you have the means, just pay somebody—and if they don’t give you what you want, punish them.
The problem here is that computers are all about understanding how things work. Programming a computer is literally 100% methodology: you are telling the machine, step by step, in excruciatingly precise detail, how to get something done. If your audience doesn’t care about the how—and moreover considers it beneath them—how are you supposed to do any substantive work for them?
Indeed, they are just as likely not to even want that substantive work: a friend of mine at a top-tier digital agency estimates that four out of five engagements they do reduce to some kind of vanity project for some executive to put on their résumé as they shop for their next employer. In other words, they don’t have to deal with the outcome as long as it looks successful, because they’ll be long gone before anybody notices any unintended consequences. Elected officials experience this incentive as well, which is how we get things like ArriveCAN.
This is a caricature, to be sure—the executives I work with are lovely, thoughtful people who are looking for, among other things, more bargaining power when it comes to their relationships with tech vendors, who continue to expand their influence at an alarming rate.
What makes so-called “tech” companies so stupendously wealthy is that they arbitrage at once the cupidity of decision-makers and the ignorance of the public. The product they sell—unless they’re doing hardware—is some of the least risky, highest-margin capital investment there is. And the bigger they get, the more they can sculpt the narrative around the role of the computer in society.
Once upon a time, you could buy a product, take it home, and consume it—and never once be concerned about whether it will actively betray you. What’s more, your relationship with the vendor amounted to an instantaneous financial transaction, and whether you gave them repeat business was up to you. With computers (and by computers I mean anything that runs software, which nowadays is anything that runs on electricity), this is no longer the case: we are increasingly entering into ongoing relationships with vendors who huff our data fumes for marketer cash, or repackage things that, with a little know-how and a modicum of elbow grease, you could get for free.
A family member contacted me recently; they were making a poster for an event and they wanted to put a QR code on it that linked to the registration page. They had tried one of the QR-code generator sites, and called me asking who they had to pay to get the ads removed. Perplexed, I responded that QR codes are a standard, there’s free software out the wazoo to generate them, and they shouldn’t have to pay anybody for anything, let alone “remove the ads”. Apparently there is an entire cottage industry in bamboozling people who want to put QR codes on things. I submit that if you squint hard enough, much of “tech” looks a heck of a lot like some version of that: a middleman where, if the ambient comprehension level was just a teensy bit better, none would exist.
(Then I downloaded one of said free QR-code-generating apps and made them a clean one. Also, luckily, the poster hadn’t gone to print yet.)
I suppose one could argue that all forms of professional specialization—doctors, lawyers, accountants, you name it—avail themselves of the ignorance of their patrons. With computers, though, it really is something different. It’s less of a specialization and more of a literacy. I mean, here are these things called computers, they’re everywhere, and they run this substance called software, and barring a particularly intense coronal mass ejection, none of that is going anywhere. So we have to deal with what it means to integrate the existence of software (and software companies) into our organizations, institutions, and lives.
I have written elsewhere that I’m not actually opposed to the subscription model of software per se; rather I believe ongoing relationships are the optimal economic framework for software development, because over time these vendors figure out new ways to represent their problem domain, and fix flaws in extant systems. Not only is engineering pre-software systems universally more expensive, but it is also orders of magnitude less flexible. The tradeoff is that there is an umbilical cord to the mothership, and the question becomes what are the terms of that relationship?
One might be inclined to ask: what are the harms? To be honest, I believe the Brazil-esque (or Nineteen Eighty-Four-esque) outcomes are tail risks, albeit severe ones, as tail risks tend to be. Rather, as the software design sage Alan Cooper is (wisely) fond of saying, software has behaviour, and (I add) the problem typically is that by the time you discover behaviour that you deem intolerable (or an inability to deliver behaviour that you need), you are deep into a relationship that you have zero leverage to ameliorate, and that you can’t readily get out of.
Everyday interactions with ordinary people suggest that the public still doesn’t truly understand:
Software-driven systems (these days, anything that draws a current) cannot be assumed to be working for you unless you’re the one who wrote the code—and even then that is debatable.
Using software (directly or embedded in said devices) increasingly entails an ongoing information-sharing relationship with the vendor (and often numerous affiliates), whether you are paying money for it or not.
Every tech company is a quasi-natural monopoly unless they take deliberate steps (like only dealing in open data standards) to do otherwise.
Getting out of a relationship with a software vendor in the very least means extricating your informational content, but getting your content out (to the extent that you can) does not necessarily entail that you can get it back in somewhere else.
Content and functionality not in your physical possession is liable to disappear. (And even then…)
The incentive to sell your behavioural data looms large and few entities manage to resist it.
Many companies push the ontological limits of what “your data” even means. (“We never sell your data” translates to “we paid our lawyers a lot of money to be able to say that.”)
Once your behavioural data goes out the back door, you can never get it back.
In the case of behavioural data, I suspect the “so what” attitude was shaken a bit with the various stunts that came out shortly after the overturning of Roe v. Wade in the United States, where journalists spent a few hundred bucks on location data that placed women in Planned Parenthood offices in abortion-banned states. Also newsworthy was that the brokers themselves had preemptively packaged this data in anticipation of the Supreme Court outcome. Law enforcement agencies have gotten quite accustomed to buying behavioural data, as it skirts around the need to obtain a warrant. They aren’t the only customers, either.
All of these points can be summed up by the statement that software is capable of not just letting you down, but actively betraying you in a way that no other medium can. This, on a societal scale, is new. It’s because it is intrinsically tied to a relationship whose terms your counterparty can change, unilaterally, any time they like, and your only—often costly—option is to leave.
Unless the government makes it mandatory, then you can’t leave.
If these dynamics were more commonly understood, I doubt decision-makers (elected or otherwise) would have the support to foist many of their techno-solutions onto their various constituencies, and tech companies would have more trouble convincing them. I also doubt that tech companies would have been able to get so big.
Merely telling the public about this matter is demonstrably ineffective; if it was ever going to work, it would have worked ages ago. If we’re going to educate people, the route is probably going to have to be fiction—and not some kind of cyberpunk sci-fi either, but everyday-life stories, because software is part of everyday life and has been for decades.
Software may be the first truly new medium to come along in a century, but we’ve had a lot of experience with it by now.
As for business and political leaders, I think the strategy should be to find the enlightened ones, and help them eat their competitors’ lunch.
Thanks for reading The Making of Making Sense! Subscribe for free to receive new posts and support my work.