For a fistful of syllables
The Direction of Interoperability
This scene repeats on a regular basis: we'll be in a meeting about standards, competition, and interoperability, and the room is filled mostly with policy people, understood broadly. Invariably, at some point, some other lost soul who was brought to the meeting as a "technical person" will lean into my ear and ask, "what do they mean by 'horizontal' and 'vertical' interoperability?"
Indeed, many people with decades-long careers in defining or developing interoperable systems have no idea what the distinction means. It's a vision of the world that exists primarily in the minds of economists and regulators (notably thanks to the DMA) but that is largely unheard of in, well, interoperability work. A quick search of the W3C and IETF sites finds precious few hits (mostly from incidental contributors — perhaps mostly telco people?) despite archives spanning decades.
So what are they? Horizontal interoperability is interoperability between systems that play the same role (e.g. peers in a network) or at least that operate at the same level, whereas vertical interoperability applies when one entity runs "on top of" another, in ways that economists would describe as "vertically integrated" if the same firm were to operate both. I have not been able to find a canonical definition for either, and in practice the definitions used in the wild vary significantly.
So should technologists care? Should policymakers revisit their use of the notion?
One red flag is that, in tech policy, the terms are often used to lobby for tech monopolies, as in this report (to give just one example) in which the authors claim that "horizontal interoperability may reduce multihoming" and even that "as interoperability is also possible between gatekeepers, it could even strengthen their position vis-à-vis new entrants by making them more central for users." A very short investigation shows that these worrying claims are supported at best by some armchair musings from non-practitioners and at worst by methodologically jocular papers (like this one in which a lot of serious-sounding words are supported by nothing other than a superficial online survey). I wouldn't go so far as to state with confidence that the distinction was invented to confuse discussions of interoperability, but it certainly seems to be used that way more often than not.
More worryingly, however, the problem is not so much that the horizontal/vertical distinction in interoperability can never be useful; simplistic concepts can be useful. The problem is rather that its power to describe reality is so stringently and arbitrarily limited that it can only lead to a paucity of imagined solutions. We need to do better.
Protocols Are Institutional
In an institutional arrangement, there are multiple actors, and those actors can be granted specific positions. A position can be any given responsibility (chair of the working group, contributor, Lord Protector of the Realm) such that different rules apply to an actor depending on the positions they hold. The information that an actor may or may not have access to (or share), the actions they may carry out or be subjected to, and the outcomes and payoffs of any such action may all depend on that position.1 I'm describing this in abstract terms but these are things that we all take for granted without even thinking about them — contributors may make pull requests but only administrators may approve them, the chair's vote is tie-breaking, only moderators can see some messages, etc.
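To make the abstraction concrete, position-dependent rules can be sketched as a tiny rule table. All the position and action names below are illustrative, not drawn from any real system:

```python
# Toy model of position-dependent rules: each position grants a set of
# permitted actions; an actor's capabilities are the union of the
# positions they hold. All names here are illustrative.
RULES = {
    "contributor": {"open_pull_request", "comment"},
    "administrator": {"open_pull_request", "comment", "approve_pull_request"},
    "moderator": {"comment", "view_hidden_messages"},
}

def may(positions, action):
    """An actor may perform an action if any position they hold permits it."""
    return any(action in RULES[p] for p in positions)

print(may({"contributor"}, "approve_pull_request"))                   # False
print(may({"contributor", "administrator"}, "approve_pull_request"))  # True
```

The point of the sketch is only that "who may do what" is a function of position, not of the individual actor — which is exactly what makes such arrangements designable.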
Computer architectures are similar — in fact, computer architectures can be accurately described using institutional grammars.2 I will dig much deeper into this topic in another context, but you may be interested to note that the components of a rule in an institutional grammar and the components of a well-formed testable assertion in A Method for Writing Testable Conformance Requirements, a key W3C standard-writing guide, are for all practical purposes the same. This is not coincidental (even though it was not intentional).
To give a simple example, in HTTP there are two primary positions, the server and the client (a.k.a. the browser). (Technically, there are more positions but it doesn't matter for our purposes.) The server has authority to determine what content lives where and whether it can be accessed. It also has the ability to see all clients of a given service (and therefore to serve as a coordination point, for instance by collecting all the posts that different people make, for good or evil) whereas clients can only see the server and not one another. This is a simple institutional arrangement.
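That asymmetry is easy to model. Here is a toy sketch (nothing HTTP-specific about the names) in which the server position aggregates what every client does, while each client holds a reference only to the server:

```python
# Toy model of HTTP's institutional asymmetry: the server position observes
# every client interaction and can coordinate across them; the client
# position only ever sees the server. Names are illustrative.
class Server:
    def __init__(self):
        self.posts = []           # coordination point: aggregates everyone's posts
        self.seen_clients = set() # the server sees all clients...

    def handle_post(self, client_id, content):
        self.seen_clients.add(client_id)
        self.posts.append((client_id, content))

class Client:
    def __init__(self, client_id, server):
        self.client_id = client_id
        self.server = server      # ...but a client only holds the server

    def post(self, content):
        self.server.handle_post(self.client_id, content)

hub = Server()
alice, bob = Client("alice", hub), Client("bob", hub)
alice.post("hello")
bob.post("hi")
print(sorted(hub.seen_clients))   # ['alice', 'bob']
```

Nothing in the client position gives `alice` any way to observe `bob`; that visibility lives entirely with the server position — which is the institutional fact that makes servers such powerful coordination points.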
It gets much more complicated. Just as with institutions operated manually by humans, responsibilities nest and overlap, and groups can take on positions such that they participate as actors (e.g. the Board reviews the CEO's performance). There are of course trade-offs involved in creating more complex institutional arrangements, but one aspect matters strongly: the ability to define an elaborate set of positions with distinct responsibilities that are organised with respect to one another and that can offer checks and balances for each other is essential to good governance in general, but more specifically to the kind of governance that you expect in a democratic system.
Within this view, the purpose of interoperability is simply to make institutional arrangements possible. Protocol design is the art of defining which positions may be expressed, what rules may be defined (and, importantly, correctly and cheaply enforced), which information may be transmitted or on the contrary controlled, how positions can be assigned and withdrawn and, at least in the case of well-defined protocols, what constitutional options exist to update the rules. Good architecture creates positions that cleanly separate concerns so that they may be governed independently from one another, and then makes sure that those separate positions can interact with one another by assembling a protocol from primitives that support the chosen institutional design.
It's not just that technology is political, it's that politics is the job. And, as brilliantly explained in Laurens Hof's instant-classic The Purpose of Protocols, it's a job which we can either approach with an institutional lens or that we can keep failing at, perpetually surprised that we birth nothing but dystopia.
And this is where the horizontal/vertical distinction falls flat on its face: it is simply too weak an intellectual toolbox with which to bring democracy to the digital sphere. It only recognises two institutional arrangements: one system accessing the resources of another (vertical) and two systems communicating as equals (horizontal). You couldn't even run a garden club on such a restrictive toolbox.
The New Technologists
When we understand digital architectures and protocols as institutions, we gain the ability to create different institutions. Regulators also gain a greater ability to mandate certain institutional properties from the systems they regulate. We become able to shape digital systems with properties that are more democratic.
People often ask me why I'm so interested in the AT Protocol: it's precisely because it's designed for democracy. The separation of concerns between different potential positions of responsibility in the network (many of which are optional depending on the usage, which allows for a great diversity of fine-tuning and compatible uses with different institutional properties) is suited to the kind of governance that social spaces require, with different positions governed in different ways. What's more, the protocol part (which is relatively small and simple) solves the authority problem by resorting to self-certifying data. If that sounded like gibberish to you: just know that when you get data, whoever you get it from, you can know that you got the right data without having to refer to whoever happens to be hosting it (it's said to be "self-certifying"). This small property makes the separation of powers much easier. (Note for the nerds: this is why I see an arc from DASL to democracy.)
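For the curious, a hash-addressed fetch is the heart of the self-certifying trick. This toy sketch uses a bare SHA-256 digest as the address; real content identifiers as used in AT Protocol and DASL carry more metadata, so treat this as illustrative only:

```python
import hashlib

# Toy model of self-certifying data: content is addressed by its own hash,
# so a recipient can verify the bytes no matter which host served them.
# (Real CIDs, as used in AT Protocol / DASL, carry extra metadata.)
def address_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def fetch_and_verify(expected_address: str, untrusted_fetch):
    data = untrusted_fetch()                  # bytes from any host at all
    if address_of(data) != expected_address:
        raise ValueError("bytes do not match their address")
    return data                               # verified without trusting the host

post = b"hello, federated world"
addr = address_of(post)
# The address, not the host, vouches for the data:
assert fetch_and_verify(addr, lambda: post) == post
```

Because verification needs nothing from the host, hosting can be handed to one position while authority over content stays with another — which is what makes the separation of powers cheap.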
We need more of this kind of approach. The technologists who understood the assignment might want to form a community of practice, and we should do a better job of explaining to people in policy spaces that we have more than just the two tricks.
The digital sphere is still in its early days. It may currently suffer under an authoritarian private government, but that need not be its adult form. We stand at the doorstep of democratic renewal that can take us out of the 18th century and into our better future. If we're willing to use the full toolbox that we have, it's ours to build and to live in.