The Web Is For User Agency
Do you ever stare at the shadows caressing your ceiling at night and think: what the fuck are computers actually for? No? Well, maybe it's less dumb a question than it might seem at first. As Weizenbaum noted in 1976: “I think the computer has from the beginning been a fundamentally conservative force. It has made possible the saving of institutions pretty much as they were, which otherwise might have had to be changed. Superficially, it looks as if [things have] been revolutionised by the computer. But only very superficially.”
You can pile layers of indirection on top of it, and pile indirections we sure have, but at the end of the day a computer is just a tool to automate tedious, clerical, bureaucratic tasks. And how we automate things matters a lot. Frederick Taylor, of Taylorism fame, apparently sought to drive prosperity for all and to abolish classes through engineered labour, the idea being that more efficient work would lead straight to social flourishing. I suspect that the generations of workers who toiled away at highly repetitive menial tasks might have some notes to share about that theory. Before Taylor, the Luddites had a clear enough comprehension of technology and automation to understand the issue.
Whenever something is automated, you lose some control over it. Sometimes that loss of control improves your life because exerting control is work, and sometimes it worsens your life because it reduces your autonomy. Unfortunately, it's not easy to know which is which and, even more unfortunately, there is a strong ideological commitment, particularly in AI circles, to the belief that all automation, any automation is good since it frees you to do other things (though what other things are supposed to be left is never clearly specified).
One way to think about good automation is that it should be an interface to a process afforded to the same agent that was in charge of that process, and that that interface should be "a constraint that deconstrains."1 But that's a pretty abstract way of looking at automation, tying it to evolvability, and frankly I've been sitting with it for weeks and still feel fuzzy about how to use it in practice to design something. Instead, when we're designing new parts of the Web and need to articulate how to make them good even though they will be automating something, I think that we're better served (for now) by a principle that is more rule-of-thumby and directional, but that can nevertheless be grounded in both solid philosophy and established practices that we can borrow from an existing pragmatic field.
That principle is user agency. I take it as my starting point that when we say that we want to build a better Web our guiding star is to improve user agency and that user agency is what the Web is for. As I said in my previous post, at the technological level the Web is constantly changing and we don't have a consensus definition for it. Instead of looking for an impossible tech definition, I see the Web as an ethical (or, really, political) project. Stated more explicitly:
The Web is the set of digital networked technologies that work to increase user agency.
User Agency
Defining the Web as an ethical or, worse, political project might not prove universally popular. As technologists we are often reluctant to engage with philosophy, a reluctance often expressed by running in the opposite direction, all limbs akimbo, with an ululating shriek reminiscent of some of the less harmonious works of exorcism. Even those of us who are curious about philosophy rarely seem to let it influence what and how we build. But there are very good reasons to take the time to align on what it is that we're trying to build when we build the Web, and good reasons why this has to import at least a little bit of conceptual engineering from the vast and alien lands of philosophy.
First, people who claim not to practice any philosophical inspection of their actions are just sleepwalking someone else's philosophy. Keynes said it best:
“The ideas of economists and political philosophers, both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed, the world is ruled by little else. Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist.”
― John Maynard Keynes, The General Theory of Employment, Interest, and Money
Second, in the same way that the more abstract forms of computer science can indeed help us produce better architectures, philosophy can (and should) be applied to the creation of better technology. A quick tour of the biggest problems that we face in tech — governance, sovereignty, speech, epistemic individualism, gatekeeping, user agency, privacy, trust, community — reads like a syllabus for the toughest course in ethics and political philosophy. There is no useful future for the Web that doesn't wrestle with these hard problems and their social externalities. "Ethics and technology are connected because technologies invite or afford specific patterns of thought, behaviour, and valuing: they open up new possibilities for human action and foreclose or obscure others." (Shannon Vallor, Technology and the Virtues) Technology choices, from low-level infrastructure all the way to the UI, decide what is made salient or silent, hard or easy. They shape what is possible, and therefore what people can even conceive of acting on or holding one another accountable for.
Third, we have a loose, vernacular notion that we are doing this "for the user" and this is captured in key architectural documents as the Priority of Constituencies, but we could benefit from being a bit more precise about what we mean by "putting users first". As I will argue below, the idea of user agency ties in well with the capabilities approach, an approach to human welfare that is concrete, focused on real, pragmatic improvements, and that has been designed to operate at scale.
Overall, if we're going to work out how best to develop user agency, it seems useful to agree on some basic notions of what we as technologists can do for our users, what capabilities most empower our users, and how to truly ensure that a focus on user agency is a central, sustainable, and capture-resistant component of our technological practice. There are three points about which it's important to share some minimal foundations:
- Working towards ethical outcomes doesn't mean relying on vapid grand principles but rather has to be focused on concrete recommendations;
- When considering agency we need to be thinking about real agency rather than theoretical freedoms; and
- Counterintuitively, giving people greater agency sometimes means making decisions for them, and that's okay if it's done properly.
Let's look at these three points in turn.
Capabilities
We're all familiar with vaporware freedoms: you are given a nominal right to do something but the world is architected in such a way as to effectively prevent the exercise of that right. Everyone can start a business or own a house! — except no bank exists that will lend to people like you. Users can change the default as much as they want to! — except you know that they won't because the UI makes it prohibitively cumbersome and laborious. Everyone can speak! — except only certain voices get amplified, the vaporware freedom of digital free speech which Rogers Brubaker has captured as: "Gatekeepers may no longer control what gets published, but algorithms control what gets circulated."
Boosted by pervasive surveillance that enables nudging at scale using an arsenal of deceptive patterns (which you could define as agency-reducing technologies), vaporware freedoms are thriving in our digital environment, but they aren't new. Global economic development struggled for decades with comparable issues in which people may have acquired rights they couldn't really put to work or saw economic improvements that didn't translate to more fulfilling lives. In response to this, Martha Nussbaum and Amartya Sen developed a pragmatic understanding of quality-of-life and social justice known as the capabilities approach. I think that it may not be stupid for technologists to look at how others have approached some questions rather than just winging it.
The capabilities approach asks: "What is each person able to do and to be?" (Martha Nussbaum, Creating Capabilities). It replaces know-it-all, command-and-control, top-down approaches that create dependence on external aid, ignore local knowledge, and destroy local initiatives. Instead, it supports an unfailing commitment to people's ability to solve their own problems if not prevented from doing so:
With adequate social opportunities, individuals can effectively shape their own destiny and help each other. They need not be seen primarily as passive recipients of the benefits of cunning development programs. There is indeed a strong rationale for recognizing the positive role of free and sustainable agency — and even of constructive impatience.
— Amartya Sen, Development as Freedom
What capabilities translate to in technical terms for those of us who are working towards user agency is a complex, multipronged project, many parts of which we still need to design and assemble: it's the question at the heart of this series on building the next Web. One important architectural aspect of this project is the need to remove external authority from the system, so as to prevent chokepoints of capture from emerging, and to replace it with what Jay Graber aptly described as "user-generated authority, enabled by self-certifying web protocols."
User authority doesn't mean that people have to build their own thing. Not everyone can run their own server for the same reason that not everyone can eat exclusively from their own pet organic garden. Not all control is agency; control has to be proportionate to bounded rationality and reasonable cost (including in time) and people have to be supported by powerful infrastructure that does not lock them into any system. More than anything, their interactions with a digital space have to be self-directed — no one needs to hand their life over to Clippy.
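To make "self-certifying" a little more concrete, here is a minimal sketch of the underlying idea, not of any particular protocol: the names are illustrative, Node's crypto module stands in for brevity, and real systems of the kind Graber describes add replication, key rotation, recovery, and much more.

```ts
import { generateKeyPairSync, sign, verify, createHash } from "node:crypto";

// The user mints their own key pair; no registry or server is involved.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Their identifier is derived from the public key itself, which is what
// makes it self-certifying: control of the matching private key is the
// only proof of ownership anyone needs.
const spki = publicKey.export({ type: "spki", format: "der" });
const userId = "self:" + createHash("sha256").update(spki).digest("hex");

// Publishing a record means signing it...
const record = Buffer.from(JSON.stringify({ author: userId, body: "hello" }));
const signature = sign(null, record, privateKey);

// ...and any peer can verify it against the key alone. A host that
// tampers with the record breaks verification; it never becomes an
// authority over what the user said.
console.log(userId, verify(null, record, publicKey, signature)); // true
```

The design point is simply that authority over the record travels with the user's key rather than residing in whichever server happens to host the data.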
Capabilities were designed with development in mind: they are meant to change people's actual lives, not to check theoretical items off a list. They are by nature concrete and implementable. The capabilities framework is a great building block for a Web understood as furthering user agency because, in many ways, capabilities are user agency.
Ethics in the Trenches
If you've read Web (and other) standards, you're at least superficially familiar with RFC2119 ("Key words for use in RFCs to Indicate Requirement Levels"), which is famous for defining such terms as MUST or SHOULD NOT. But one of the documents that I rank among the best to have been published by the IETF in its nearly-forty-year history is its companion RFC6919 ("Further Key Words for Use in RFCs to Indicate Requirement Levels"), published April 1st 2013, which standardises far more versatile terms like MIGHT, COULD, SHOULD CONSIDER, or the barn-burning MUST (BUT WE KNOW YOU WON'T).
Altogether too often, ethical guidance can feel like it was written against RFC6919 instead of RFC2119. This can be particularly true of ethical tech documents, which mostly read as lists of lofty principles that exude a musty scent of meeting-room detergent and commit to all manner of good-behaviour requirements that the legal department is confident you can safely ignore. Picking an ethical principle as the foundation and defining objective of the Web is unlikely to help anyone if it's nothing more than a MAY WISH TO.
Nothing says that it has to be this way. Focusing on user agency and on user-centric ethics generally doesn't mean that we should get lost in reams of endless abstraction. On the contrary: it means that we must focus on principles that can be implemented. When working on Web standards, we only consider requirements that can be verified with a test to be meaningful — everything else is filler. Being as strict in our ethical standards is challenging, but we can strive for it. We can do that both at a high level, when looking at the broad outline and impact of a piece of technology, as well as in precise detail — both matter.
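As a toy illustration of that discipline, consider what it would take for an agency claim like "users can change the default" to be meaningful in the testable sense. The harness, URL, and control names below are all hypothetical; the point is that the claim only counts if a check like this can actually fail:

```ts
import { test, expect } from "@playwright/test"; // any UI-testing harness would do

test("the default can actually be changed by a user", async ({ page }) => {
  await page.goto("https://example.test/settings"); // hypothetical app
  // The control must be reachable and operable, not merely permitted
  // somewhere in a buried config file.
  const toggle = page.getByRole("checkbox", { name: "Use my own provider" });
  await toggle.click();
  await expect(toggle).toBeChecked();
});
```

If no such test can be written, the "requirement" was a MAY WISH TO all along.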
At a high level, the question to always ask is "in what ways does this technology increase people's agency?" This can take place in different ways, for instance by increasing people's health, supporting their ability for critical reflection, developing meaningful work and play, or improving their participation in choices that govern their environment. The goal is to help each person be the author of their life, which is to say to have authority over their choices.
Technologists trying to maximise user agency often fall into the trap of measuring agency by looking only at time saved (in the same way that they fail to understand what people want to do when they measure time spent). On the surface, the idea seems straightforward: spend less time on one thing, have more time for other things! That would seem to fit our mandate of improving "what each person is able to do and to be", and, all other things being equal, it can be true. But the devil is in the details: the enjoyment of doing the thing, the value in knowing how to do it, and the authority over outcomes all matter too. Even things that many would consider chores aren't always best automated or delegated away: you may not wish to clean your house, but you might want a say in the chemicals introduced into your home, in how your things are organised, or in whether your house can be mapped by a robot and data derived from that map sold to the highest bidder. Not all leisure is liberation.
The more detail we have on a piece of technology that may be part of the Web, the more readily we can assess it in very specific ways that capture aspects of improved user agency. In fact, that's something that the Web community has been doing for a long time. Consider:
- The great level of detail that has gone (and continues to go) into specifying how to make the Web and Web content accessible. These guidelines and techniques can, in exceedingly concrete ways, push for a world in which disability does not limit agency.
- An equally-impressive trove of actionable principles can be found in the Internationalization work. This empowers people to use the Web in the languages of their choice. We will never celebrate the work of the Unicode Consortium enough. Bringing all of the world's languages into a unified system of character encoding is a historical achievement that "respects and empowers users".
- It's hard to act freely if you can't act safely, which makes work on security core to the agency project. RFC8890 ("The Internet is for End Users") captures this well when it states that "User agents act as intermediaries between a service and the end user; rather than downloading an executable program from a service that has arbitrary access into the users' system, the user agent only allows limited access to display content and run code in a sandboxed environment. End users are diverse and the ability of a few user agents to represent individual interests properly is imperfect, but this arrangement is an improvement over the alternative — the need to trust a website completely with all information on your system to browse it." This trust is empowering.
- And the same can be said about privacy, which is key to trust as well. Privacy further matters (as discussed in the Privacy Principles) in that it includes the right to decide what identity you present to others in which contexts. Additionally, widespread data collection creates information asymmetries and information asymmetries create power asymmetries. The issue here isn't so much that data might be used to support mind-controlling AI snake oil but rather that it powers more mundane (and far more effective) manipulation techniques such as hypernudging.
These shared foundations for Web technologies (which the W3C refers to as "horizontal review" but they have broader applicability in the Web community beyond standards) are all specific, concrete implementations of the Web's goal of developing user agency — they are about capabilities. We don't habitually think of them as ethical or political goals, but they are: they aren't random things that someone did for fun — they serve a purpose. And they work because they implement ethics that get dirty with the tangible details.
User agency isn't limited to these four things, important as they may be. A great way to build the future of the Web is to work through a gap analysis of the ways in which we could be developing user agency. This could include anything from building better ways to find things your way in search and recommendations, to organise with others and govern our online environment without having to beg a few large companies to listen, or to break out of the straitjacket of apps and tabs.
Additionally, developing ways to ensure that agency is not only possible but, to the extent feasible, mandated by the system also furthers the Web. For that, we need to think beyond the individual.
Agency is Collective
Perhaps counterintuitively, focusing on user agency is not an individualistic position. As Amartya Sen put it, "Individual freedom is quintessentially a social product, and there is a two-way relation between 1) social arrangements to expand individual freedoms and 2) the use of individual freedoms not only to improve the respective lives but also to make the social arrangements more appropriate and effective." The motivation here is clear: increasing agency has to include the ability to influence collective systems as well as effective avenues for collective action. Under this view, the Web is (as per Aurora Apolito) “a form of ‘collectivity’ that everywhere locally maximizes individual agency, while making collective emergent structures possible and interesting.” (For a longer discussion of the importance of collective systems, see The Internet Transition.)
In practical terms, this has consequences for the evolution of Web architecture. Much of the Web that exists today rests on the assumption that users exist on someone's server: essentially as guests on someone else's property. This has bred a default interaction model in which people have no rights other than those generously granted them by the local authority. Ultimately, the best governance model available in a client/server architecture is benevolent dictatorship: no matter how you set things up, the server can change the rules on a whim. In order to protect user agency and to imagine the Web as “a global community thoroughly structured by non-domination”, we need to shape Web technology so that it shifts power to people rather than away from them. Doing so requires the return of protocols to the fore so as to push power to the edges, and changes focused on individuals, like a drive to replace external authority with self-certifying systems; but it also requires the deployment at scale of cooperative computing, communities under their own rule and their federations, and subsidiarity built into the Web so that more central authorities perform only those tasks which cannot be performed at a more local level. At a purely technical level, it requires a user agent more powerful than what browsers alone can provide; at the policy level, it necessitates a legal framework to prevent user agents from abusing the trust users have to place in them.
I will return to these points in greater detail in future posts, but for the time being suffice it to say: the next evolution of the Web has to further user agency by replacing today's system in which might makes right. This means that the next Web is about both freeing users from the excessive power of server-based authorities and empowering them to organise to govern their world their way.
This post is part of a series on reimagining parts of the Web. You can read the other entries in the series at:
- Building the Next Web
- The Web Is For User Agency
- You're Gonna Need A Bigger Browser
- Web Tiles
- ActivityPub Over ATProto
Acknowledgements
Many thanks to the following excellent people (in alphabetical order) for their invaluable feedback: Amy Guy, Benjamin Goering, Ben Harnett, Blaine Cook, Boris Mann, Brian Kardell, Brooklyn Zelenka, Dave Justice, Dietrich Ayala, Dominique Hazaël-Massieux, Fabrice Desré, Ian Preston, Juan Caballero, Kjetil Kjernsmo, Marcin Rataj, Margaux Vitre, Maria Farrell, and Tess O'Connor. Needless to say, anything dumb and stupid in this article is entirely mine.