Setting the Tone for an Anti-Platform

In which I discuss a rationale for, and introduce my work on, “permeable” information systems.

It has long been understood that insinuating oneself between people and what they want, while skimming off a piece of the action, is remarkably good business. Indeed, through a certain (cynical) lens, that’s all business ever is.

The optimal position, from the point of view of our straw businessman, is a monopoly—or at least oligopoly—where the insinuation is implicit: you have something people want, for which there is no viable substitute, so they have no choice but to deal with you.

Few have digested this lesson more thoroughly than the networked information service providers of recent decades, who, with few exceptions, nucleate out of a stable of users and/or a warehouse of data. These are natural monopolies even in their formative stages. The name of the game is:

  1. Build a platform,

  2. Use network effects to get users stuck on your platform,

  3. Sell your platform to an even bigger platform (or go public, or both).

The reason why this pattern works so well is that information is fundamentally infungible: you can’t just swap in any old piece of data, information, or content for any other✱; if you need a particular piece of information you have to go and get it. You could say that when information is involved, possession is actually ten tenths of the law.

✱ If this weren’t the case, we would never have to do things like research or experimentation (not to mention signals intelligence and boring old vanilla espionage); we could just make up whatever we wanted and it would work. I suppose if I wanted to get hair-splitty about the designation, I would say something like infungible up to isomorphism, as certain bits of information you don’t have can often be computed from other bits you do; indeed that’s what computation is. (There are nevertheless situations where the cost of computation makes it worth buying the finished product instead of computing it yourself.)

(I also hope it is clear that I am not talking about copies of the same information, which, paradoxically, are perfectly fungible, on account of being identical.)


The role of the intermediary is, nominally, to act as a trusted source, conduit, or steward of shared informational state. Being the trusted steward of shared informational state is functionally the same as owning it. Platform operators understand this in their bones, which is why they make their fiefdoms easy to join and hard to quit. And they do that by making the information you put into them hard to pry back out.

The problem with this roach-motelesque configuration, if it isn’t obvious, is that relationships—and the parties to them—change over time. What starts out as a mutually beneficial arrangement can quickly degenerate into one that is zero-, or even negative-sum. We have all endured unhealthy relationships at one point or another; we stay in them because something prevents us from leaving. Sometimes it’s a bona fide hostage situation, other times it’s just a matter of (in)convenience. A key aspect of fostering good relationships is being able to get out of the bad ones. Indeed, we could say that one measure of a good relationship is how easily you can extricate yourself from it. It implies you’re staying because you want to, not because you have to.

A business, especially of the “software-as-a-service” variety, could sell itself to your competitor. Or it could go down at a critical moment. Or it could get hacked. Or it could delete some functionality you depend on, or simply not grow✱ in the direction you need it to. Or, it could sell your data out the back door, or jump into bed with a fascist regime. Your available responses depend directly on what it costs you to sever your ties: it is an idle threat to hit them in the pocketbook if you just end up hitting yourself even harder.

✱ A common exhortation to palliate the shortcomings of a platform is to write your own functionality against its API. Alternatively, you can often pay them to customize. Platform vendors love this because they get paid a bundle to develop features they can make even more money selling to other customers. Irrespective of the extent to which these courses of action solve your problem, they have the side effect of investing you even more into the platform.

My longstanding complaint with information platforms—especially the ones where you bring the information—is that so much of each offering is just so utterly commodity, so completely quotidian: everybody has storage and hosting, search, user profiles, messaging, emoji reactions, etc. Only a tiny sliver of each company’s functionality is an idiosyncratic value-add. What makes these products viable—and contributes to so much redundant software—is the bundling of some subset thereof under a single username and password.

Ye Olde Hubbe of Git

GitHub is an excellent piñata for a number of issues I have raised so far. At its core of course is git, the VHS of version control systems—completely free and explicitly designed to run anywhere. On top of this public good is a bunch of bog-standard features like an annotating code browser, diff viewer, bug tracker, wiki, and free Web hosting.

The only thing GitHub does that is truly novel, in my view, is the pull request. This instrumented process dramatically lowers the friction, for all parties involved, of enlisting other people to help improve your software. GitHub’s role is mainly authenticating the proponent of the request, automating the parts that can be automated, and furnishing a user interface for the parts that can’t. Prior to GitHub’s intervention, both sending and accepting code patches, for those without direct access, were a colossal pain in the ass. In git’s own terminology, the plumbing for this functionality was already present, but it took for-profit GitHub to provide a vision for the porcelain.

I reiterate that pull requests are GitHub’s principal value proposition; everything else is gravy—conveniences that could be yoked together from a few scripts or other commodity products and services. (Even pull requests themselves are not unique to GitHub, or even git: both BitBucket and GitLab also have them, because no matter how they are instrumented, pull requests are a fundamental business process of collaborative software development in the era of distributed version control.)

Two years ago, GitHub was acquired by Microsoft. This was predictably intolerable to certain members of the hardcore open-source set, who promptly defected. A second exodus, including a number of GitHub’s own employees, was spurred by the revelation that GitHub was contracting with the notorious Immigration and Customs Enforcement agency of the United States Department of Homeland Security. Luckily for the boycotters, there are at least two viable alternatives, but anybody looking to contribute via pull request would have to set up an account on whichever competing platform ended up as the destination.

It remains ironic that a piece of infrastructure software like git, which was intended to be decentralized, needs a proprietary platform, or hub, to function smoothly.

The pull request is a ripe target for adapting into a non-proprietary pull request protocol that can bridge the various platforms and everything in between, even distributed version control systems themselves. (It occurred to me that it might be patented, but if it is, the patent doesn’t use the phrase “pull request”. At any rate, GitHub’s competitors also do pull requests, and failing all that, it would be eminently contestable under prior art.) I would love to be a part of creating something like that, and I have a pretty solid idea of how I would do it (to be discussed some other time).
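To make the idea a little more concrete, here is a sketch of what the core message of such a protocol might look like. To be clear, everything here—the field names, the structure, the serialization—is my own hypothetical illustration, not any existing specification:

```python
import json
from dataclasses import dataclass, asdict

# A hypothetical, platform-neutral pull request message. Every field name
# below is invented for illustration; no existing standard defines this.
@dataclass
class PullRequest:
    source_repo: str   # where the proposed changes live
    source_ref: str    # branch or commit to pull from
    target_repo: str   # repository being asked to merge
    target_ref: str    # branch the changes should land on
    proponent: str     # an identifier for the person asking
    message: str       # human-readable rationale

    def to_json(self) -> str:
        """Serialize for transport between platforms."""
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def from_json(cls, payload: str) -> "PullRequest":
        """Reconstruct the request on the receiving end."""
        return cls(**json.loads(payload))

req = PullRequest(
    source_repo="https://example.org/alice/widget.git",
    source_ref="fix-null-deref",
    target_repo="https://example.com/bob/widget.git",
    target_ref="main",
    proponent="alice@example.org",
    message="Fixes a crash when the config file is empty.",
)
print(PullRequest.from_json(req.to_json()) == req)  # round-trips cleanly
```

The salient property is that neither endpoint needs to be a platform at all: anything that can speak the message format—and authenticate the proponent—could participate.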

The point of this argument is not to discourage you from interacting with any particular entity. Indeed the entire point is that if you perceive you have a good relationship, you should by all means continue it. Social and economic interdependence are how we extend our capacity and enrich ourselves, and conversely, unchecked solipsism is a route to poverty and ruin.

What I am advocating for, rather, is proactively choosing our partners whenever we can, because our relationships enrich our partners as well as ourselves. If you’re going to enrich somebody else, it might as well be somebody you like. At any rate, you will know you have the power to choose whom you deal with when you are able to deny the oligarchs your tribute.

The ability to choose our counterparties is the ability to choose not to deal with others, or at least set the terms by which we deal with them. For networked information systems this means:

  • Getting your data off a platform,

  • Getting your data into a useful configuration, such that you could use it on a different platform or no platform at all.

This entails:

  • Designing permeable information systems—anti-platforms,

  • Imagining a business plan other than platform economics.

This is an intentional and completely artificial design constraint, not only for the purpose of material results, but also to challenge some assumptions about doing business on the internet.

Worth mentioning that the IndieWeb people have been on this program for quite some time.

How do you design an anti-platform?

It’s the data semantics, stupid. In the development of online systems, data structure definitions—whether for storage in databases or exchange across the network—tend to be fast and loose, and ultimately subordinate to whatever development goals are currently in play. Routines for moving and converting data to and fro are likewise ad-hoc, to say nothing of how things are named. If this weren’t the case, then API development would not be such a commonly distinct (and possibly herculean) phase in the growth of a platform, because you’d basically already have one.

The API angle is probably the strongest argument for a platform to reform its development process this way, but considering their incentives, still not a very strong one.

The anti-platform approach is to start with the assumption that every jot of data is going to be addressable, and work backward from this assumption into the concrete implementation. It would mean, among a number of other reforms, publishing the data specifications in both human-readable and machine-readable formats, as well as deriving any internal representations from the published spec (rather than the other way around). This means developing a host of techniques to pull it all off.

I have done quite a bit of this work over the last decade, and over the next little while I’ll be focusing on two projects in particular. If you want to read ahead, be my guest, or you can wait a couple more installments for me to package these up for you:

Your content management system’s content management system

The goal of this project, which has been slowly accumulating since about 2010, is to represent all the data (and metadata) on a website without referencing any particular CMS product. Of course once you pull the data out, there is no need to put it back on the same platform, but I’m more interested in all the useful things one can do with the content while it’s in that state.

And that’s the real goal: I’m trying to be able to analyze and manipulate websites in bulk without having to know or care or think about whatever software product(s) they’re running on top of.

Designing an app around the data rather than the data around the app

I have long lamented that the practice of project management has not evolved much in the century and change since the Gantt chart. What I mean is, sure, now we have computers make our Gantt charts, but the underpinning conceptual structure—of what a “project” even is and how you do one—hasn’t changed much since the project was to lay miles of railroad track.

What I’m trying to do here is reconceptualize the notion of a “project” beginning with the legal agreement (actually even before then), and then instrument the process with software where needed. What I absolutely cannot have is the metaproject of tinkering with the project management tooling sucking up all the time for a given project. Designing ever-mutating tooling around a (relatively) consistent data structure was key to this process, and I can’t wait to tell you about it.
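To give a flavour of what “designing the tooling around a consistent data structure” might mean—with every name here a hypothetical of mine, not the actual design—the project record starts from the governing agreement and the parties to it, and the work hangs off of that:

```python
# A hypothetical sketch of a "project" structured around its data rather
# than around any particular tool: the record begins with the legal
# agreement and its parties, and tasks hang off of deliverables.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    done: bool = False

@dataclass
class Deliverable:
    title: str
    tasks: list = field(default_factory=list)

@dataclass
class Project:
    agreement: str   # reference to the governing legal agreement
    parties: list    # who is bound by it
    deliverables: list = field(default_factory=list)

p = Project(
    agreement="statement-of-work-2020-04.pdf",
    parties=["Client Co.", "Vendor Inc."],
)
p.deliverables.append(Deliverable("Website redesign", [Task("Audit content")]))
print(len(p.deliverables))  # → 1
```

The point of a structure like this is that the tooling can mutate freely—swap the charting, the tracking, the reporting—while the data underneath stays put.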

What you notice when you prioritize data semantics is that these specifications are remarkably durable, and changes tend to be cumulative. This is because you are now in the standards business with a side hustle in applications, which, over the next few missives, I’m going to try to convince you is a much more chill—and dare I say it, more humane—place to be.