Andrew Bourke’s Principles of Social Evolution opens on the story of a “scientifically curious protozoan” (protozoans are single-celled organisms) from half a billion years ago, living in a world of unicellular creatures, some of which assemble at most into multicellular mats or threads with very limited structure. Our protozoan protagonist builds itself a time machine and dashes off (at 88mph, one presumes) to visit our world today. Arriving amongst us, it is shocked to discover that massive colonies of billions of eukaryotes not so unlike itself (about of them for any human being) move around all stuck together and act as one big organism with a high degree of internal complexity. They even have a division of labour between cell types! With trembling cilia, our protozoan travels back to its simpler past (where it won’t need roads) before it even gets a chance to notice that things here are even more complex than that: not only do those big bags of cells have stunning internal organisation, but they further assemble into all kinds of higher-level social collectives!
Born as we were into a world of complex multicellularity, we rarely find ourselves quite so dazzled by the likes of animals or plants as our time-travelling unicellular friend, but we should be: somewhere, somehow, through evolutionary iteration, a bunch of individual, independent, single-celled organisms stumbled upon governance principles that made them fitter together. Such “fundamental organizational changes in the history of life”1, known as major evolutionary transitions, had happened before — the eukaryotes that became multicellular are themselves held to be the result of symbiosis, and that’s not even the beginning — and have happened since. For instance, it is not uncommon in the literature to see our complex society, a system that demonstrates (for the most part) “a high level of cooperation and a low level of realized conflicts,”2 as the result of a major transition from kinship-based sociality to a more complex form.
I want to be very cautious when making observations at the intersection of biology and society, even if this is a part of biology that specifically deals with social groups. My purpose in looking at the Internet through a major transitions lens is that I hypothesise (for reasons detailed below) that the Internet’s role in a (broader) major transition for humans is more than just an analogy, and that looking at it this way informs our understanding of our troubled times, of the risks we face, and of how to address them.
To look at this slightly differently, in The Death and Life of Great American Cities, Jane Jacobs asked us to think about “the kind of problem a city is.” The answer, she convincingly argued, is that a city is a problem of organised complexity. Problems of organised complexity, according to Warren Weaver’s taxonomy which she cites, are more fiendish than problems of simplicity (like relating two variables) or problems of disorganised complexity (that can be attacked with statistical methods). Systems of organised complexity are harder to study but they also produce behaviour that is more robust and more able to tackle harder problems — their components are fitter together (though, I insist on caution, not necessarily in the typical evolutionary sense of “fitter”).
This is far from being an abstract consideration. As cited in a recent New_ Public newsletter, Eli Pariser justly remarks that:
Sometimes it feels like this whole project of wiring up a civilization and getting billions of people to come into contact with each other, is just impossible. But modern cities tell us that it’s possible for millions of people who are really different, sometimes living right on top of each other, not just to not kill each other, but to actually build things together, find new experiences, create beautiful and important infrastructure. And we cannot give up on that promise.
The coordination of astronomical numbers of individual cells to form a functional organism is a seemingly impossible problem of organised complexity that was nevertheless solved by evolution in the major transition to multicellularity. The wiring up of a civilisation of billions of people, which is itself some steps into a major transition towards complex sociality, faces similar questions: How is cooperation established, cheating punished, and conflict attenuated? How does a collective entity form and how does it maintain itself without collapsing? For cells or people alike, these are governance questions — and the evolutionary case demonstrates that they can be solved without a central design, at scale, and in support of staggering complexity. So, what kind of governance makes us fitter together?
Complex Is As Complex Does
So the world is populated by highly complex organisms, and we as a species are in the transitory process of further organising an increasingly complex society. It’s often the case that simpler can be better, so is it really a good thing that we’re making our social organisation more complex?
TL;DR yes. We should systematically be fostering social complexity. Complexity in society is good.
Increased specialisation and intensified cooperation allow us to solve harder problems. Like feeding everyone without running out of planet, giving everyone access to Wikipedia without destroying democracy, curing more diseases for more people despite them being more interconnected, or more generally decreasing violent conflict at all scales. Riding the juggernaut that is the modern world can at times feel intense enough that it may be tempting to want to simplify society. Unfortunately, short of an astounding leap forward in science and governance (such that we could deal with complex issues without being as complex as a collective), simplifying society would also mean losing highly desirable collective capabilities such as advanced medicine, not to speak of others yet to come. We’re complex because the capabilities we aspire to share require it, and our better ambitions — those compatible with equality and sustainability — shouldn’t be sacrificed.
This isn’t idle speculation. The complexity of a society can and does vary over time. When complexity drops sharply in a society, that is known as collapse3. The idea of societal collapse evokes Mad Max futures, but the historical reality of societal collapse (which has happened repeatedly across history) doesn’t have quite as cool a soundtrack. Collapse isn’t inherently bad; in fact it’s adaptive: the system no longer has the capacity to sustain a given degree of complexity and falls back to simpler ways. Simpler ways doesn’t mean shopping less; we don’t get to burn our screens and spend our evenings quoting the classics at the stars. Collapse doesn’t produce simpler individual lives, just simpler social organisation. Were that to happen to us, now, we would not only lose our more interesting digital systems and advanced medical abilities, but also any hope of dealing with the climate catastrophe. It’s too late — way too late — to address it without improving, for instance, energy production and distribution in ways that rely on high degrees of social coordination and specialisation. In fact, given today’s situation and population, a significant decrease in social organisation would likely lead to increased pollution. We need much more and better coordinated action, not less.
So complexity good, collapse bad. Easy! What’s the catch?
The way in which we manage complexity, and essentially improve society’s ability to maintain complex capabilities, is through better, more effective governance, which is to say through better institutional arrangements (of all kinds: formal or informal, local or global, etc.). The ability of our institutional arrangements to deal with a given degree of difficulty or complexity can be referred to as our institutional capacity. And so the problem we have is that:
- the Internet has made greater institutional capacity possible, but
- it has also made our world more complex in ways that require an increase in institutional capacity to happen and
- it has broken some of our established institutions, actually causing a decrease in institutional capacity, and
- we are not yet using the new governance capabilities that the Internet made possible anyway.
The Internet transition is creating greater complexity which we haven’t developed the means to operate — quite the contrary, the Internet is largely a failed state with much infrastructure at the whims of gangs and their billionaire warlords4. Complexity doesn’t have to vary linearly, and I believe that there is evidence that the Internet has thrown us into a massive increase in societal complexity, happening at a time at which our institutional capacity was already overwhelmed by globalisation (as well as under political attack). In How institutions shaped the last major evolutionary transition to large-scale human societies, the authors list four small transitions that are steps along the major transition from a social organisation based on kinship and personal exchange to large-scale complex human society with impersonal exchange and an advanced division of labour:
- the original human hunter-gatherer niche;
- the origin of language, which facilitated cumulative cultural evolution;
- the shift to sedentary agriculture and hierarchical social organisation;
- the origin of states, when interactions between people who may never meet again become more regular.
The exact list is debatable, but I believe that we should now add a fifth small transition: the Internet transition, which notably includes a shift to post-geographic5 interactions. Even as trade networked distant places over the centuries, much of people’s lives until very recently remained locally anchored, and losing the friction of this local anchoring has a number of effects.
- Locality gives nation-states (or cities) a territory over which sovereignty and jurisdiction can be clearly defined (we speak of the “law of the land”). Historically, this wasn’t always the case but a lot of our institutional infrastructure relies on these expectations and translating that to the Internet has often proved challenging.
- Locality also anchors our communities in shared epistemic frameworks. As humans, our beliefs are not developed individually but rather are strongly grounded in our communities.6 In a community dominated by local interactions, members might overwhelmingly hold beliefs that align well with the consensus of medical experts, but not because they really defer to those experts. It may well be because they defer to their local doctor who transitively defers to experts. Almost everyone around them defers to that same doctor (or small set thereof), and as a result not doing so would put one at odds with the community. Note that this can be good (it likely increases support for vaccination) or bad (the medical community does not have a history of enlightenment regarding gender), but it is locally unifying and provides a clear framework for intervention should a crisis require change.
My point is not that we need to return to locality, but that we need to build institutions that are adapted to this new reality so as to develop the institutional capacity that matches the needs of the world we want.
Institutional capacity is, in a very rough sense, a society’s computational power or collective intelligence. (Collective intelligence is a developing domain, see for instance the excellent first issue of the new journal Collective Intelligence or the emerging Collective Intelligence Project.) If this definition made you cringe, keep in mind that this is just a blog and I am aware of the fact that I’m light-heartedly papering over the caveats and intricacies of five full-fledged fields of academic study with the same impetuous abandon one might use to scatter a basketful of daffodil petals across a sunny springtime field — but bear with me. For our purposes, institutional capacity, a society’s problem-solving ability, and collective intelligence are all enough of the same thing, which we could define as “the difference in performance between what can be achieved by the group of individual agents, and what can be achieved by the individuals on their own, where performance accounts for trade-offs and tensions.”7
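That working definition is loose, but it can be rendered as a toy computation. A minimal sketch in Python — the function name and the performance scores are mine, purely for illustration, not anything from the cited literature:

```python
# Toy sketch of the quoted definition: collective intelligence as
# the difference between what a group achieves and what its members
# achieve on their own. "Performance" here is an abstract score.

def collective_intelligence(group_performance: float,
                            individual_performances: list[float]) -> float:
    # Baseline: the best any individual manages without coordinating.
    solo_baseline = max(individual_performances)
    return group_performance - solo_baseline

# A group scoring 0.9 on a task whose best member alone scores 0.6
# has positive collective intelligence; a negative value would signal
# anticooperative dynamics (the group is dumber than its members).
gain = collective_intelligence(0.9, [0.2, 0.5, 0.6])
```

Nothing in this toy captures the “trade-offs and tensions” clause of the definition, which is precisely where those five papered-over fields of academic study live.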
Indeed, there are indications that in the past we have dealt with increased complexity by increasing our collective information-processing capabilities8, and that such improvements to society’s information-processing capacity happened through the introduction of new institutions or significant improvements to existing ones9.
Computer people may be surprised by this statement because of the common belief that improvements in information-processing capabilities are the exclusive province of (digital) technology — but they aren’t. Technology and institutions are certainly intermeshed, notably in that some institutional forms are impossible, or at least implausibly daunting, without specific technologies (e.g. writing or money), and it can be hard to tease out whether one created the other. In fact, technology that mediates social interactions (like, you know, the Internet) inherently creates institutions — the only choice is whether to do so deliberately or haphazardly. Overall, a coordination mechanism that enables collective action such that it has the information-processing capacity to solve ever more difficult problems is properly an institution, whether it is supported by information technology or not.
To sum things up, we’re trying to run a planetary society that needs to solarpunk the fuck out of itself in a hurry on the collective intelligence of an 18th century principality that’s heard of the Enlightenment from some guy at the pub. Succeeding with increased complexity is beneficial — in fact our climate carelessness has put us in a position in which failing to increase cooperation is an existential threat — but requires increased collective intelligence. However, much of the infrastructure that sustained our collective intelligence has been thrown into disarray at a time when it was already barely coping. The stakes therefore are to either develop the institutional capacity for transnational, post-geographic governance or to face extinction-class, or at least civilisation-ending, catastrophic collapse.
Well. Hey. No pressure. But we might want to not fuck this one up.
Governance models are often presented as personal preferences, a menu of political options only distinguished from one another by the aesthetic or moral preferences that one may hold. To some degree that may be true, in that emphasising different sets of capabilities that make up a fulfilling life can lead to different institutional preferences. But this should not hide the fact that some governance systems are simply less capable of dealing with harder problems, no matter what your preferences are. A society that can better integrate its complexity at all scales can solve more complex problems, from science to logistics, from medicine to justice, from arts to conservation. Any planetary governance worth its salt has to aim for a governance ecosystem that fosters the requisite collective intelligence. We need to develop a sense for what that looks like.
One of the most common future-Earth sci-fi tropes is that of a single unified worldwide government, often simply depicted as a beefed up UN — something bureaucratic, ineffective, and in charge of pretty much everything. And it’s not just sci-fi authors who tend that way. We tend to imagine governance systems as neatly, even naively, nested: countries, provinces, counties, cities, districts… all forming a nice matryoshka pyramid. From an empirical perspective, this simplistic view is incorrect. It also leads us to conflate global and central.
We need effective global cooperation and worldwide is the right scale at which to deal with many of our most pressing problems. The climate crisis is the most obvious example, but is far from being the only one. Olúfẹ́mi Táíwò notes that “the interactions that shape our social world have never respected state borders” and makes a powerful case that a just world can only be built by “a global community thoroughly structured by non-domination.”10 Thomas Piketty has similarly pointed out that issues with how we organise economic, commercial, and property relations operate at the transnational level and that to address them we need to “work out some way of transcending the nation-state.”11 Put differently, a global economy without global institutions isn’t globalisation, it’s just world domination by a few companies.
Where the digital sphere is concerned, our problems are similarly global, as are our insufficient governance bodies. Addressing problems of capture and stewardship of the digital infrastructure that permeates and engineers our societies needs to take place at the same scale at which these problems are being created.
Global, however, does not entail either centralised or unified under a single meta-institution like that sci-fi UN. Without getting into the book-length treatment that bad governance architecture deserves, developing an intuition for what we need to build would benefit from a few paragraphs on why not centralised and then a few more on why not unified.
Centralisation has a bad reputation, but the fact is that it’s a trade-off. “A centralized approach to optimizing coordinating control is possible when a central unit has access to all measurements and can decide for, and direct, all the elements of the ensemble (…). In the centralized case, collective intelligence may be maximized, at least locally, subject to tractability of the optimization problem, physical limitations of the system, and uncertainty about the system and the environment. However, centralization can become costly for large-scale systems and problematic when communication between individuals and the central unit is unreliable or the central unit experiences a failure.”12
One thing that is interesting to note in this trade-off is that the success case for centralisation requires:
- that the situation be treated as an optimisation problem,
- that it be tractable,
- that access to all measurements be possible,
- and that sufficient direction of the governed agents be available.
These conditions do exist in a number of simpler situations (with the added appealing benefit that centralised organisations are easy to design), but every single one of these conditions either doesn’t hold at societal scales or comes with severe side-effects. Neither society nor human life nor natural environments are optimisation problems, and if they were they wouldn’t be tractable. Large-scale data collection is certainly the hallmark of bureaucratic power as we can see from the lumbering megacorporations of the Internet, but such legibility is systematically full of detrimental categorisations and incapable of treating humans with dignity. When it comes to control, at the end of the day, building for everyone but not by and with everyone, irrespective of intentions, is just totalitarianism: morally despicable and bound to eventually fail. “As in all Utopias, the right to have plans of any significance belonged only to the planners in charge,” said Jane Jacobs. To which she added: “Cities have the capability of providing something for everybody only because, and only when, they are created by everybody.”
For anything complex, and particularly for problems that require significant local knowledge, large centralised organisations aren’t just unpleasant — there are reasons to believe that they are less intelligent13 and they have a documented habit of failing to improve the human condition14. If you’ve ever worked at an OKR-driven company, this description of Soviet tekhpromfinplan planning (from the excellent Red Plenty) will feel chillingly familiar. That’s just how centralised cooperation rolls.
Electoral democracy can, and often does, help manage the downsides of centralisation. But we should be very cautious about treating voting like a panacea. Token-based voting can have plutocracy issues, as is well-known, but even people-based voting can share that problem: Julia Cagé showed quite clearly that elections are readily swayed by money, which directly embeds plutocratic tendencies15. Voting has a place, I certainly wouldn’t claim that it shouldn’t be used, but we should refrain from considering it to be required in all instances. Voting does not define democracy, it’s only a tool one might use when there is no better option.
Having dealt with centralisation, we should also examine the belief that governance should be somehow unified, clearly integrated, with specialised bodies all wrapping up neatly into bigger ones like matryoshka dolls. Even though this model requires an overarching top-level organisation, this idea is distinct from centralisation in the sense that smaller units could benefit from real devolved power and strong subsidiarity (the principle according to which central authority should only be in charge of what cannot be done more locally — a necessary principle even if not sufficient). It is the neatness of the hierarchical matryoshka that I think is wrong: the only way to delegate isn’t up. It is wrong, I believe, in the same way that our understanding of intelligence is wrong.
To summarise the comparison too briefly, our folk understanding of intelligence matches a command-and-control unit that produces internal representations from a set of sensors, manipulates those representations, and uses them to direct effectors that act in the world. Rinse, repeat. This representational model is simple enough, and it maps to a relatively cartoonish human anatomy. But it isn’t the only architecture for intelligence and there is no evidence that it is a particularly good one. (There are also reasons to believe that human intelligence doesn’t work that way.)
In a series of papers eventually collected in Cambrian Intelligence, roboticist Rodney Brooks developed a different approach. In his subsumption architecture, a set of layers or modules run asynchronously from one another and no layer can control another. A layer may only replace some of another’s inputs or inhibit some of its outputs. Crucially, there is no representation and in fact no shared global state — no legibility of the world at the top. There is a real form of subsidiarity, but it isn’t vertical — delegation is networked and horizontal. Brooks showed that some simple robotic agents built atop a subsumption architecture were able to carry out tasks that representational agents struggle with.
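A minimal sketch can make the shape of this architecture concrete. Everything below is invented for illustration (Brooks built real robots, not five-line functions), but it shows the key property: the higher layer never commands the lower one, it can only suppress its output, and there is no shared world model anywhere.

```python
# Minimal subsumption-style sketch: two layers, no shared state,
# no internal representation. The higher layer can only suppress
# the lower layer's output; it cannot direct it. (Illustrative only.)

def wander(sensor: dict) -> str:
    # Lowest layer: always produces a default behaviour.
    return "move-forward"

def avoid(sensor: dict, lower_output: str) -> str:
    # Higher layer: suppresses the lower layer's output when an
    # obstacle is sensed; otherwise lets it pass through untouched.
    if sensor.get("obstacle"):
        return "turn-left"  # suppression: replaces the lower output
    return lower_output

def act(sensor: dict) -> str:
    # Layers run bottom-up; the world itself is the only "model".
    return avoid(sensor, wander(sensor))

act({"obstacle": False})  # → "move-forward"
act({"obstacle": True})   # → "turn-left"
```

Note that `avoid` knows nothing about why `wander` produced its output, and `wander` never learns it was overridden — each layer operates on its own slice of the world, which is the property the polycentric analogy below leans on.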
I’m not suggesting that we can simply paste the shape of a Brooksian subsumption architecture atop planetary governance, but it provides a good analogy for a better way to integrate subsystems. Applying the shape of “Cambrian intelligence” to social collectives looks a lot like “a social system of many decision centers having limited and autonomous prerogatives and operating under an overarching set of rules”16 which is a definition of polycentricity. Polycentric governance involves similarly overlapping and intersecting institutions that act independently from one another, typically with local knowledge (essentially “using the world as its own model,”17 which contrasts with bureaucratic legibility), and interact in ways that are typically more robust and (empirically) more effective. A full discussion of polycentric systems would require a much longer post, but such systems can serve as checks and balances on each other and make much better use of information that pertains to their domain, creating a mesh of interlocking governance systems that are able to deal with the real world’s complexity. Essentially, we can think of this approach as a form of Cambrian Governance.
I find the “Cambrian” moniker to work well because this is an ecosystem approach to governance. (A real ecosystem — it’s not just a name, as Maria Farrell’s excellent Your platform is not an ecosystem argued so well.) It also suggests the potential for a Cambrian explosion in planetary governance.
Cambrian Governance is worldwide governance that is neither centralised nor unified, and in which every institution provides governance (in varying degrees) for every other, in much the same way that in an ecosystem everything is infrastructure for everything else. (Yes, like the Fediverse but a lot more so.) This might begin to give us an intuition for what we want, but a lot remains to colour in. As celebrated philosopher Buffy Summers pointedly asked, Where do we go from here?
Governance for the Ungovernable
Our collective mission today, and we don’t have much of a choice in accepting it, is to make the Internet really happen. The Internet has the potential to be more than a random pile of naive technologies operated by a handful of companies competing to be the best neocolonial reincarnation of the Dutch East India Company. But that potential is not going to happen if we keep operating with a dated toolbox and let the people currently in power design what comes next. We need to develop the governance, and the technical architecture to support it, that matches the problems we need to solve and the wonder we yearn to build.
If complex systems are inherently unpredictable and if the right institutions for a planetary civilisation aren’t governable from the top, how can we build governance for the ungovernable?
It takes a blend of understanding the principles we aim for, of having a decent intuition for where we want to go, and of concrete action on specific issues in specific arenas, undertaken to match these principles and intuition. Put differently, we can take steps to articulate (as I am doing here) what we should know about the world, what our north star(s) should be, and each of us can work on local reforms that drive us in that rough direction. This matches Archana Ahlawat’s articulation of how to approach our predicaments with a blend of ideal and nonideal theories: “Ideal theory is concerned with what ought to be, our most ideal vision for the world, while nonideal theory takes our current world and theorizes immediate steps we can take to improve it. Those focused on ideal theory show how it can be used to formulate an overall long-term strategy that informs the short-term tactics we can take given our nonideal world. However, critics of this approach point out that the ideal may be so radically different from our current society that it is impossible or merely highly improbable that we reach that point.”18 You need both.
You may note that nonideal approaches work well in a Cambrian governance model as they are local, independent, and distributed. They also work well with local variations such that we don’t all have to pursue the same goals, or even completely share the same values, in order to make progress.
As a side note that might deserve its own post: despite it being difficult (and undesirable) to specifically design global outcomes, there is a metric that could plausibly be deployed (with some additional research) to evaluate progress towards better global governance. We can call it Apolito Integrated Information, and without getting into technicalities (read the paper if you’re interested), it can help answer the question of “what is a form of ‘collectivity’ that everywhere locally maximizes individual agency, while making collective emergent structures possible and interesting”19 (which is arguably a measure of polycentricity). Note that while many have justifiably expressed concerns over the anticompetitive behaviour of highly centralised Internet corporations, I am in fact more worried about their anticooperative effects: by taking over infrastructure and shaping it according to mechanisms designed to benefit themselves (or their unexamined ideologies), they prevent distributed agents from coordinating their behaviour usefully. I believe that this could be measured using Apolito’s metric. It matters because anticooperative dynamics prevent collective intelligence.
When a novel field emerges, it can often feel like the people involved with it are breaking out their jargon just to dust, polish, and oil it. But the fact is that the intellectual toolbox with which we intuit how governance ought to work at all scales is getting dated. In some ways it has barely evolved since the Enlightenment and we’ve run that into the ground: we need new thinking. These new thinking tools are there or emerging: capabilities approach, retentive infrastructure (leading to local benefits), heterarchy, polycentricity, Cambrian, the study of collective behaviour as a crisis discipline, networked subsidiarity, analytical democratic theory, ideal/nonideal, self-certifying, Apolito Integrated Information… not to mention my darlings such as infrastructure commons or the application of institutional models of contextual norms to privacy or the Collective Intelligence journal, or the fact that people keep coming up with new conceptual tools, like Divya Siddarth & friends or the great folks at the Metagov project. We can assemble concrete, testable, implementable systems from these ideas.
It still feels like we have many separate communities that aren’t yet speaking the same language, but a broad field of rich notions is emerging — and hope with it.
To summarise, we are traversing an epochal change and we lack the institutional capacity to complete this transformation without imploding. We could well fail, and the consequences of failure at this juncture would be catastrophic. However, we can collectively rise to the challenge and an exciting assemblage of subfields is emerging to help. We can fix the failed state that is the Internet if we approach building tech with institutional principles, and an Internet that delivers on its cooperative promise of deeper, denser institutional capacity is what we need as a planetary civilisation.
We don’t need a worldwide technical U.N. to figure this out. Rather, we need transnational topic-specific governance systems that interact with one another wherever they connect and overlap but that do not control one another, and that exercise subsidiarity to one another as well as to more local institutions. Yes, it will be a glorious mess — a Cambrian mess — but we will be collectively smarter for it.
Governments are struggling to handle this because of decades of underinvestment in institutional infrastructure on top of the friction between territorial boundaries and globally networked governance. The internet’s megacorporations are struggling because they are stuck in dated Engineer King ideologies — Thorstein Veblen’s “Soviet of Technicians” — and are limited in their thinking by the ingrained belief that technology is apolitical. They cannot build the future.
There is no purely technical fix for our predicament — evidently — but for the technologists amongst us, focusing on the architectural properties of our technical decisions, on how technical architecture creates or constrains institutional mechanisms, and on how technology works with governance is key. To take but one example, the best governance model that is available in a client/server architecture is benevolent dictatorship. No matter how you set things up, the server can ultimately change the rules. That’s a major constraint to work with; it will eventually break most egalitarian governance models and mechanically limit collective intelligence. Peer-to-peer architectures offer a much richer set of institutional roles for agents and for the rules with which they can interact, and therefore provide a much more powerful solution space. It’s worth spending some quality time with them for that reason alone.
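A toy example can make that constraint concrete. All the class, method, and rule names here are hypothetical, invented for this sketch: whatever governance process clients believe they agreed to, the party operating the server holds the only authoritative copy of the rules and can rewrite it unilaterally.

```python
# Toy illustration of the client/server governance ceiling.
# Clients may vote on rules, but the server holds the sole
# authoritative copy and its operator can overwrite it at will.

class Server:
    def __init__(self):
        self.rules = {"post_limit": 10}

    def vote(self, proposed_rules: dict):
        # A seemingly democratic mechanism exposed to clients...
        self.rules = proposed_rules

    def admin_override(self, new_rules: dict):
        # ...that the operator can bypass at any time, silently.
        self.rules = new_rules

server = Server()
server.vote({"post_limit": 50})           # the community decides
server.admin_override({"post_limit": 1})  # the dictator, benevolent or not
server.rules  # → {"post_limit": 1}
```

No amount of client-side cleverness removes `admin_override`; in a peer-to-peer architecture there is no single `Server` object to hold that power, which is what opens up the richer institutional solution space.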
Networked technology that mediates so much of our lives is social engineering — which is to say that deciding how it works is politics. If we want any hope for these politics to result in a world worth wanting, we need to build our Internet according to sound institutional principles. The toolbox for that exists, figuring out how to integrate and use it is what’s next. I leave you with Albert Camus (in L’homme révolté):
Par-delà le nihilisme, nous tous, parmi les ruines, préparons une renaissance. Mais peu le savent. [“Beyond nihilism, all of us, amid the ruins, prepare a rebirth. But few know it.”]
I am deeply obligated to Daphne Keller for pointing out that daffodils flower in spring (and a few species in autumn). A previous version had me mistakenly scattering them by the basketful in summer. Heartfelt thanks as well to Max Gendler for bringing to my attention the fact that Ray Bradbury made the case for pressing wine out of daffodils and keeping the dried out flowers for childhood trauma purposes; nevertheless, I scatter my basketfuls fresh. A few days later, Max Gendler returned to recant his previous statement: Ray Bradbury was unfortunately discussing dandelions rather than the fancier daffodils. Daffodils are toxic and therefore probably make for poor wine.