Leading the W3C To Its Full Potential
Reinventing W3C Governance
The W3C was founded in 1994 to “lead the Web to its full potential.” The Web has changed a bit in the time since, and the world has changed a bit, too. The Consortium hasn’t remained static through these decades, far from it, and overall it has served the Web decently well (though not always), many warts and all. But the problems that the Web faces today are of a darker, tougher, more structural and encompassing nature than those we worried about in the early years. As a result, the W3C has fallen behind and is not actively addressing the Web’s most threatening crises. To do so, it will need to change.
This is not to say that we need to throw away what we have. The W3C still shepherds technical standards effectively, still convenes a broad and experienced community, and no other organisation has a comparable reach or a culture infused with the values that the Web needs to grow. We need not toss away the thirty-year-old with the bathwater.
Designing governance for the Web feels a bit like a hyperobject. There are entire segments of Web technology that you hadn’t realised existed, in which people spend their entire careers — and that’s before you get to how people use the Web. It’s hard to know where to begin. What follows are my notes on the general contours of what an organisation tasked with the stewardship of the Web would look like and how to put it together. I certainly don’t claim to exhaust the topic.
This post was informed by many conversations with people in the Web community, and I have stolen ideas from all over the place, but it should be treated as a reflection document not endorsed by anyone other than myself. Several readers have asked whether this proposal is modelled on the IETF. That was not a goal of mine, and I didn’t explicitly seek to borrow from the IETF; however, a number of the people I spoke with are close to the IETF or like its model, so it was definitely a very significant influence. The IETF might not be perfect, but it has shown that this sort of community-driven approach works and that it can be funded.
That’s No Member Organisation
The W3C is customarily defined as a “member organisation” and indeed it has members (of which my employer, The New York Times, is one). Companies and institutions pay to join, in exchange for which they get some member-only rights. Much of the money that pays for the organisation’s staff and its day-to-day operations comes from those fees. In the standards world, this model is often referred to derogatorily as pay-to-play, a model in which decisions are made by those who pay to participate, and others can do little but hope for the best. Is that what we have here?
Member organisations work for their members. They are generally intended to create the conditions for fair cooperation between different companies such that they may jointly manage a project that they all care about. Typically, members will all have significant skin in the game whereas non-members won’t, or negligibly so. Pay-to-play models aren’t necessarily bad: there are plenty of cases where, to some approximation, those who should have voice in the process and those who are both able and incentivised to contribute financially are the same people. Yet when it comes to the Web, we all have skin in the game. Is the W3C really a member organisation? Should it be?
While the W3C has indeed been historically organised around membership and financed by members, it has not, for a long time now, behaved as though it wanted to be a member organisation. In many ways, the Consortium has bent over backwards seven ways to Sunday over the years to avoid being one. Some examples that spring to mind:
- All standards are produced under a royalty-free policy so that the work of the members may be used freely by all.
- Almost all of the work is conducted in public, and not just in view of the public but in such a way that people in the public can (and do) participate actively in the work, for instance in GitHub issues or on mailing lists.
- All of the work is grounded in a principled approach (too long undocumented) that puts people first, those who build things using web technology second, those who implement web standards third, and only then those who make standards in fourth position, who come ahead solely of theoretical purity. (This is known as the Priority of Constituencies, people over authors over implementers over specifiers over theoretical purity.) Such a principled approach does not mean that all decisions have always perfectly aligned with these values, but as the principles become increasingly well documented that is improving.
- All of the technical work, including the approval of standards, the creation of groups, or the selection of chairs, is ultimately a decision of the Director, who is for the most part untouchable by the membership.
- The W3C Team, while certainly accountable to the membership, operates with a lot of executive leeway.
- All work is subject to horizontal review (more on this later) which is member-backed but could not possibly be changed unilaterally by the membership because the domains covered by horizontal review (accessibility, internationalisation, security, privacy) are supported by solid legitimacy in the broad Web community (and in some cases in legal regimes).
- The Process mandates that groups must address substantial feedback from any member of the public. A document cannot become a standard with unaddressed substantial feedback.
- Almost all standards are now published under open, forkable document licenses.
- There is a specific process for groups to include Invited Experts so that the most active members of the public can be included. (In practice this process isn’t used much, because groups are already open enough to meet the expectations of most public participants.)
- There are hundreds of Community Groups which anyone on the Internet can spin up without a fee and that regularly produce high-impact work.
- In fact, the member-driven nature of W3C became so diluted over time that they had to introduce Business Groups to bring back the occasional pay-to-play vertical in the rare cases in which it has felt useful.
This is not a short list and if I thought about it I could probably find more. It wasn’t always that way. I remember a time before most of the above was the case, when the W3C was indeed a member organisation — but you can’t work for the Web if you don’t work with the Web. Over the years, the W3C community layered fix atop fix to progressively address the vexations and limitations of being a member organisation. Three decades in, the W3C looks a lot more like a public interest organisation financed by a generous group of donors who call themselves “members.”
I will get into alternative models later, but for the time being suffice it to note: as a standards community, we might wish to take a hint from our own behaviour. If we’re trying so hard to make the Consortium not act like a member organisation, it might be best to solve the issue at its root and not define it as a member organisation.
Values and Horizontal Review
I am regularly amazed at how reticent technologists can be about discussing values. As I’ve said elsewhere, “you went into tech because you didn’t like politics; now you have two problems.”
Values aren’t some kind of weird mystical creatures that float in some aethereal realm of symphonic tut-tutting from which they give us headaches, clammy hands, and uncomfortable conversations with angry humanities majors. Values are just the heuristics with which we solve the harder problems.
Some problems admit of a single optimal solution, or at the very least they have a single variable to optimise in only one direction. I call these “dumb problems.” Don’t get me wrong: many dumb problems are hard to solve, some quite famously so. But they’re dumb problems in the sense that you never need to stop and decide where to go: you simply damn those torpedoes and forge ahead come what may. And then you have “wicked problems.” Wicked problems don’t have an optimal direction. If you’re lucky they might have a well-defined Pareto-frontier, but they might not (or at least not with a dimensionality you can apprehend). Not all wicked problems are hard, in fact an experienced developer will solve some wicked problems without much thought simply because they know how to navigate them. Put differently, wicked problems involve trade-offs. And how do you select the trade-off to make? You use values.
When we say that engineering is about trade-offs, we’re saying that engineers solve their hardest problems using values (which they call “heuristics” because everyone’s entitled to be fancy sometimes). In implementing a system, you might need to decide between an option that provides people with the best experience, another that delivers the greatest value to the shareholders, and yet a third one that makes the control centre blinkenlights dance in the prettiest way. You will probably want to pick some position between the first two options (depending on where you lean). You might not want to hire the engineer who prefers the third, or at least not for long. But one thing is certain: if you give the above trade-off to someone who only ratholes on dumb problems, say optimising some performance metric, then you’re going to have a bad time no matter what.
Standards are messy; the Web is a huge and complex beast with billions of people doing all kinds of people things. Making standards involves making a lot of trade-offs. Even something as simple as GPC (Global Privacy Control) — a standard which transmits a single bit over the wire — involved making trade-offs. And you’re better off if unrelated standards make similar trade-offs as one another because they’re going to end up having to work together in the wild. When I hear someone saying that standards should only involve the “neutral interoperability of 1s and 0s,” I can’t help but see that as a “tell me you’ve never shipped a standard without telling me you’ve never shipped a standard” moment.
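For the concretely minded: GPC is conveyed as a single HTTP request header, `Sec-GPC: 1` (and mirrored to scripts as `navigator.globalPrivacyControl`). A minimal sketch of honouring the signal server-side — the function name and the plain headers-dict shape are illustrative, not taken from any particular framework:

```python
def gpc_opt_out(headers: dict[str, str]) -> bool:
    """Return True when a request carries the GPC opt-out signal."""
    # HTTP header names are case-insensitive, so normalise before lookup.
    normalised = {name.lower(): value.strip() for name, value in headers.items()}
    # The spec defines exactly one meaningful value: "1".
    return normalised.get("sec-gpc") == "1"
```

Even this one bit forced trade-off decisions in the group: what the signal means legally, how it interacts with existing consent mechanisms, and so on.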
So we need values — heuristics if you insist on being fancy — because the work is wicked problem work. But asking a gang of geeks to come up with values from scratch is a bullet train to vapid generalities of little practical use. “Do No Harm!” they might say. Mmmmmkay?
We must do better and, in fact, even though few have noticed, we have done better. A value is worth something if it’s there to help you when the rubber hits the road and starts hydroplaning. Sure, you’ll need a handful of high-level lofty values as reminders, if only because there’s always a vocal guy (it’s always a guy) who thinks it’s just outrageous to put people before profits. But mostly you want Values You Can Use. And we have those! The set of practical values that we put to work in developing the Web is captured in the documents that are used in the process of horizontal review: the accessibility checklist and its supporting documents, the internationalisation short checklist and accompanying standards, the extensive work on security and the growing work on privacy, and the TAG’s documents for an ethical Web and design principles. They are somewhat scattered and we need a lot more of these, but a solid, credible, legitimate foundation exists.
An important step that the W3C can take is to identify what is missing from horizontal review. One area in which the Consortium could benefit from having more clearly specified principles is in its own governance. The Advisory Board and the Process CG (in a reformed form) could document the practical values and principles that have been going into their work, with the view that governance itself should be subject to wide review as it evolves.
(I should note that the W3C’s Advisory Board has done some work on a values-based approach which it calls Vision. My personal view is that staying at a very abstract level as they have and not tying it to a process that people have to deal with when they create standards means it is unlikely to have an effect. I also tried to engage with a view towards making the vision more actionable but a year later nothing has happened — the AB is rightly busy with other things on which I believe they can have a greater impact.)
Stewardship and Scope
The Web cannot survive without a governance structure to support and sustain it. Without that, it will be gradually enclosed and smothered — as we can readily see today. There needs to be an organisation, legitimate and influential, tasked with its stewardship.
The W3C has not shown itself equal to the Web’s needs in stewardship. I write here on the assumption that the W3C will turn this around because it is the organisation best placed to pivot in that direction. But should it decide not to, someone else must step in.
Stewardship for something this complex and critical is no small ask. It requires a duty of care that gives short shrift to convenience and habitual arrangements. An organisation tasked with stewardship of the Web does not get to pick which problems it deals with and which it doesn’t: the problems of the Web are its problems.
The W3C has over the years become exceedingly focused on the relatively small component of the Web that might be called the “HTML SDK.” The HTML SDK is important and useful, but it can only deal with a limited subset of the Web’s problems. As a result, today’s most pressing issues have been left to run rampant. Examples of untackled issues abound. Search is broken: innovators cannot break in, the typical experience quality is plummeting, indexing is no longer a mutually-beneficial arrangement, and search apps are poised to replace browsers. Social is in disarray. Browsers compete on default installs and aggressive promotion by their makers rather than on quality, and betraying their users’ trust — which used to sit beyond the pale — is increasingly common. In-app browsers increasingly break the Web for both people and publishers. We still lack a clear protocol for pluggable interoperable components to share revenue. I could go on, but my criterion for scope is simple: any behaviour of the Web as a system that, were it described in a standard, would not pass wide review should be considered a problem and taken on as part of the Consortium’s scope.
A common objection to taking on problems of a different nature from those typically tackled so far is that not all of them can be solved with the exclusive application of simple, voluntary technical standards. That is correct, but the solution to that problem is to develop new tools for our toolbox rather than ignore the issues because we didn’t bring the right equipment. That would be equivalent to the famous French Shadok saying according to which “S’il n’y a pas de solution, c’est qu’il n’y a pas de problème” (“If there is no solution, then there is no problem.”). Other tools which we could use (no doubt amongst many) include regulatory recommendations and more explicit governance of Web infrastructure.
Governing the Web is a massive task of which the W3C should be the steward, but this stewardship should operate in polycentric partnership with many other parties, notably states. Both architectural regulation (through technical standards) and legal regulation are needed, but more importantly if they are to be maximally effective, they should be coordinated.
In the past, the W3C has considered creating a Policy Group; I think that’s a mistake. There’s a lot of policy going on, and assuming that all of policy could be dealt with in a single group while technology gets to be split over dozens of distinct groups is at best unrealistic.
Instead, thinking about legal regulation should, when necessary, be considered part and parcel of the specific issue at hand that is being addressed by a given group. The Consortium cannot enact a law any more than it can implement a technical standard in a popular product. For a group to issue regulatory recommendations, it will have in a sense to work with legislators in a manner similar to how it would work with implementers (which means that the system only works if they are somehow involved, though not necessarily in the same way). In no case does this entail a transition of sovereignty: the value of a regulatory recommendation stands on the consensus that supports it, the expertise that went into it, the exacting horizontal review which it underwent, and of course the trust that given legislators place in it by adopting it in some fashion.
I can think of several ways in which involving public actors and agencies would be beneficial:
- In very general terms, I think that having an active liaison between technical and legal regulation would be an improvement for all. Both worlds impact each other and both have different types of legitimacy. We should be working together, or at least actively aware of each other.
- Strategic participation from specific agencies could also go a long way. For instance, I think it's pretty clear that most European Data Protection Agencies don’t have the budget to enforce the GDPR and likely never will. I don't mean this at all as a criticism of the work they do — in fact some are trying some pretty creative things — but there's only so much you can do without a decent budget. If I read this dataset right, the CNIL (France’s data regulator) has about €0.35 per French citizen. Contrast that with the fact that Google pays Apple something like $15 per user just to have their search engine be the default — and other DPAs have it worse. Meanwhile, Apple and Google are ramping up their own private (and at times self-serving) implementation of what privacy means. In the W3C, we have started building consensus on Privacy Principles that rely on a state-of-the-art understanding of privacy and that put the data subject first. (See also my post on Principled Privacy.) These can help make enforcement more of a reality everywhere privacy regulation applies, notably by offering a well-grounded understanding of privacy by default and by design (GDPR Art 25), or by guaranteeing purpose limitations and data minimisation. This could benefit from being the subject of an ongoing discussion with for instance the EDPB or the CPPA.
- Encouraging participation from public research bodies would be a positive outcome as well. That's already happening to some extent, but more would help.
- Governmental support is often mentioned in the context of funding. This could be useful, but I am very ambivalent as it would have to be no-strings-attached. I have some follow-up posts about more specific governance for various parts of the Web, and some of those require regulatory support so that the system can finance itself. I think that helping us make Web/Internet governance self-sustaining would in the end create more value than direct funding.
This is delicate work that requires treading cautiously, but the difficulty should not be a reason to stall. Some early work is already the product of lightweight cooperation to properly articulate architectural and legal regulation (GPC, rights reservations); we can keep carefully building in that direction.
The Web is multiple layers and types of infrastructure mashed into one. Infrastructure is “shared means to many ends.”1 It doesn’t magically happen, evidently, but rather needs to be provided, and the manner of its provision matters a lot.
Control over infrastructure confers a lot of power, and the extent of this power is significantly worsened when the infrastructure provider can extract data about infrastructure users. The role of the W3C should be to ensure that Web infrastructure does get provided (i.e. that it is economically viable for various actors to provide it, either in a market setting or through some other means) and that providing infrastructure cannot be leveraged as power over infrastructure users.
It’s important to note that preventing the concentration of power is not about (architectural) decentralisation. Architectural decentralisation can certainly help, but it isn’t enough: as explained in Capture Resistance, linking on the Web is technically decentralised, but that was not enough to protect the Web from being centralised by Google. What matters here is the active checks and balances on excessive power. Architectural decentralisation can be a component of that, but the institutional toolbox is bigger than that. A centralised architecture under collective control can provide highly effective checks and balances, for instance.
The W3C taking explicit responsibility for the shared infrastructure that constitutes the Web does not mean in any way directly owning all of that infrastructure, for instance taking over all hosting and cloud services. That would be... a lot, and also totally unnecessary.
The current approach that the W3C takes is to look at the Web for things that could be managed through voluntary standards, and then ignore the rest. That’s backwards. The approach that the Web needs is for the W3C to look at the problems that need solving by considering the Web as infrastructure in need of governance. Some of that infrastructure (a very significant chunk in fact) is best managed by defining voluntary standards and letting the market sort things out. When that works, there is no reason to do more. But at times, stronger institutional arrangements are called for. This can seem daunting, but institutional rules are essentially stronger standards and are built from many of the same components. These do not (typically) eliminate markets or private provision, but they do constrain providers more than voluntary standards do, if and when necessary.
I won’t define every single institutional arrangement that we could make use of or deploy; that would be long and tedious (and should be decided through consensus). But the literature on commons is full of options that we can draw inspiration from — and in many cases the Web community already has some knowledge of how to build these collective systems, we just haven’t thought of them in terms of institutions or commons yet. Doing so is powerful because it helps locate tooling and experience that others can share.
(I will be following up with suggestions for some specific infrastructure parts that I recommend building governance for. I will link them here as I publish them; it’s worth noting that those will have an impact on the domains and funding considerations I provide below.)
The Web is a lot of infrastructural parts plugged together, and each part requires its own governance, but how shall the overall hyperobject be choreographed together? We turn next to the umbrella governance that binds it all together.
Governance of the W3C
Note that what I am outlining below is an example of what could be done and of a general direction of travel, the idea being to show that a different approach is feasible (and preferable). This section is not a process document; designing a complete governance model would require a lot more detail. But hopefully it provides enough detail to inform discussions and decision-making.
A Community-Driven Organisation
You might have read the earlier section criticising the notion that the W3C is (or should be) a member organisation and thought “well okay but so what else?” The answer is (reams of text later) a community-driven organisation.
The Web has a huge community and is the shared property of all humankind. There is no question in my mind that it should be owned and operated by its people.
The difficult question of course is “how?” Membership provides two things: a way to decide who has voice and a funding model. Both are needed. But we can’t give voice to however many billion people have a stake in the Web (and we’d be in trouble if we could) nor can we tax them to support the work. I will return to funding later, but first let’s look at who has voice and how voice is structured into governance.
Who Has Voice?
The difficulty in establishing who has voice is to find a proper balance between being broadly inclusive so as to be legitimate and reactive to real-world issues on one hand, and being resilient in the face of unscrupulous actors with an unethical agenda (as happened previously to the DNT group or more recently to the Prebid community).
With this in mind, my draft blueprint is to have both a polity, which is broad and is the electorate that votes on proposals, and a nomination committee (NomCom), which is deliberately more restrictive, evolves more slowly, and is in charge of making the proposals that the polity votes on.
The polity comprises everyone who, inside of the previous year:
- pushed a commit to the `main` branch of an official work item;
- opened an issue in an official work item that wasn’t closed as somehow invalid;
- made five comments on issues on official work items that weren’t downvoted as non-constructive (e.g. “me too!” or repeating resolved issues);
- wrote five emails to official groups that weren’t flagged as spam or non-constructive; or
- is part of the NomCom.
The above is not necessarily complete or the exact right formula — the goal is to capture active contributors in a broad sense while not opening the process up to hijacking by well-funded parties (as with DNT) or trolls.
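Read as a rule, the draft criteria above amount to a simple disjunction over a year’s activity record. A sketch, with field names and thresholds merely transcribing the list (none of this is an actual W3C data model):

```python
from dataclasses import dataclass

@dataclass
class ActivityRecord:
    """One contributor's activity over the previous year (illustrative)."""
    commits_to_main: int        # commits to `main` on official work items
    valid_issues: int           # issues opened, not closed as invalid
    constructive_comments: int  # comments not downvoted as non-constructive
    constructive_emails: int    # emails to official groups not flagged
    on_nomcom: bool

def in_polity(record: ActivityRecord) -> bool:
    """Membership requires satisfying any one criterion on its own."""
    return (
        record.commits_to_main >= 1
        or record.valid_issues >= 1
        or record.constructive_comments >= 5
        or record.constructive_emails >= 5
        or record.on_nomcom
    )
```

Note that the criteria don’t combine: four comments plus four emails doesn’t qualify, which is part of what keeps drive-by campaigns from counting.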
In addition to the above selection criteria, polity members are encouraged to endorse one another (as constructive members, not necessarily endorsing the views), and both the list of polity members as well as the endorsement graph are public. The goal is to be able to use the graph structure to detect anomalous behaviour (e.g. graph components that only endorse one another and participate in inauthentic ways) and to empower the AB to invalidate polity members where necessary to protect the integrity of the electoral process.
The NomCom includes:
- everyone who has chaired an official group or edited an official work item in the previous ten years;
- everyone who has been a part of the W3C Team in the previous ten years;
- everyone who has sat on the TAG or AB (for life); and
- everyone who has attended two of the last four TPACs for at least three days each (including online meetings, but only when TPAC is exclusively online).
Excluded from both the polity and the NomCom is anyone who has been found in violation of the code of conduct in the preceding year, as well as anyone who has been permanently banned for repeated or severe offences.
If necessary the NomCom could be reduced in size by picking a random set of volunteers from those eligible under the above criteria.
Domains and the Umbrella
One issue with the current W3C is that its scope is so large that on any given topic only a few members participate. While the unity of the Web as a project needs to be maintained so that we can benefit from architectural coherence and integration, having everyone potentially opine on everything above the group level is asking a lot.
Having some degree of vertical slicing into domains would be desirable; however, we should avoid at all costs the creation of a monster that is just the accretion of uncorrelated sub-organisations. Striking a balance here is both difficult and key.
In all cases, we maintain the overall umbrella organisation as owning all the domains and as operating the TAG, the AB, and all horizontal review in ways that are strictly enforced across all domain work. The architectural coherence of the Web that makes it work at the scale and with the complex diversity that it can support is powered by its values, which are embodied in horizontal review.
Establishing a domain should be rare; it is perfectly fine for many groups not to be in a domain. Domains should only be established when they manage a significant piece of infrastructure in a relatively active way and when they can be financially self-sustaining. The purpose of domains is to develop governance models that still report up to the umbrella organisation but that might require significant additional local rules and some degree of autonomy.
Examples of domains include advertising, which could be managing a significant chunk of advertising infrastructure, or interoperable discovery, which could manage browser funding drawn from search (as it is today) but without the significant issues that have broken Web search today.
I know very well that W3C old-timers will be extremely suspicious of the idea of domains (you know who you are). They should be: this comes from the W3C’s tradition of seeking new members by pushing into new areas that may or may not be all that ready for the Web. There are also examples of standards-on-demand organisations that do have value (in my mind) but do not deliver a coherent overall project. I understand the reticence. However, the Web is too big to just be one big village meeting (so to speak). One important property that we need to develop more of is polycentricity. A more polycentric organisation will be more resilient and robust in the face of hostile groups attempting to take over, while also being more reactive to change in the world. (Let’s not forget that browser vendors are the ones who insisted against all evidence that mobile was not the Web, a disconnection from reality for which we are still paying the price.)
When domains are established, they should have representation on the TAG. They need not have representation on the AB because the AB is primarily in charge of the umbrella’s non-technical aspects.
The Technical Architecture Group (TAG)
The TAG is tasked with the architectural coherence of the Web. It is in charge of all the technical work of the Consortium, assuming a role that blends today’s TAG with some of the Director’s responsibilities. TAG members have fiduciary duties to the values of the Web, notably duties of loyalty, care, and good faith.
The TAG is elected by the polity based on candidates selected by the NomCom. The TAG has nine elected members when the number of domains is even and ten when it is odd (with each domain sending an additional member which it picks whichever way it wants to). Elected TAG terms last two years and are staggered every year; there are no term limits.
With every new TAG (so every year), the NomCom picks a member of the TAG to be W3C Chair. They can change their choice every year, but some degree of continuity is encouraged.
The W3C Chair leads the TAG, convenes TPAC, delegates their power to group chairs, and generally serves as the face of the W3C. The TAG makes decisions by consensus, and failing that by simple quorate majority. TAG decisions can be appealed to the W3C Chair, who can rule with the TAG, against the TAG, or return the decision to the TAG for further work. If the W3C Chair and TAG disagree, the TAG can overturn that decision based on a quorate 70% supermajority (rounded to the nearest number of votes depending on TAG composition).
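As a quick illustration of the 70% threshold (this is just my arithmetic; the tie-breaking policy for exact halves is unspecified here and a real process document would pin it down):

```python
def votes_needed(tag_size: int, threshold: float = 0.70) -> int:
    """Votes required to overturn, rounded to the nearest whole vote.

    Note: Python's round() uses banker's rounding on exact .5 ties,
    so a real process document should state its tie rule explicitly.
    """
    return round(threshold * tag_size)
```

With ten TAG members (nine elected plus one domain seat, say), overturning the Chair takes 7 votes; with thirteen members it takes 9.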
The TAG charters working groups based on input from the NomCom. The TAG can delegate some of its work to task forces or individuals.
The Advisory Board
The AB is in charge of the non-technical aspects of the organisation, serving as a Board of Directors that provides oversight over the Executive Director and the Team, and that is responsible for the effective application of the Process and the integrity of the electoral system. AB members have fiduciary duties to the organisation, notably duties of loyalty, care, and good faith.
The AB is elected by the polity based on candidates selected by the NomCom. It has nine elected members with two-year terms staggered every year and no term limits. No individual can be elected to the TAG and AB simultaneously. Domains are not represented on the AB and are expected to operate their own board.
The AB manages:
- the Executive Director and the terms under which the ED can assemble the Team;
- the organisation’s budget and debt;
- ensuring the proper application of the Process and electoral system, as well as the enforcement of the code of conduct;
- general organisational policies and procedures; and
- the overall accountability, transparency, and legitimacy of the Consortium with respect to the broader community, and of the Team to Consortium participants.
The AB works with the Process & Governance Review Group to shepherd the Process forward.
Horizontal Review Groups
Over the years, several groups have come to take on a more important place in the W3C’s work. Without the built-in (and temporary) legitimacy that the Consortium used to get from having its Director be the Web’s inventor, it becomes particularly important to provide a strong foundation of legitimacy for the W3C work — stronger than today’s. This legitimacy comes from the coherence that a values-based approach enables in the Web’s technical architecture and in its institutional foundations (as all institutions require values). Rather than being some lofty abstract principles, the values are embodied in a strong horizontal review process that all standards must eventually pass through. I propose to further enshrine the role of these groups as Review Groups.
Review Groups include:
- Internationalisation. In charge of reviewing internationalisation and managing the I18N documents.
- Accessibility. In charge of reviewing accessibility and managing the a11y documents. This group is not the whole of WAI, but rather the branch of WAI that supports other groups in their accessibility work.
- Privacy. Reviews privacy, as an iteration on the PING group.
- Security. Reviews security, probably as a novel embodiment of WebAppSec.
- TAG. The TAG reviews for architectural coherence, and (given the load) may delegate its review work to a dedicated task force.
- Process & Governance. This new review group is tasked with ensuring that any governance mechanism included in a standard, including Regulatory Recommendations, aligns with the governance principles that guide the Consortium. These principles will need to be produced in a manner consistent with the rest of the Review documents.
Ideally these groups will collaborate to align the style of their documents and questionnaires, as well as on the production of checker tools where applicable, and to make them jointly available at a convenient location.
The Team
The Team implements the activities of the Consortium in a way that aligns with the guidance provided by the AB and broader community. The Executive Director has significant discretion in how the Team is organised, so long as that organisation aligns with the AB’s instructions.
Funding
The membership-based governance model that prevails today (and its accountability shortcomings) actively prevents other sources of financing. Why help fund an organisation that looks like it will serve its members first, and Big Tech first amongst those? No matter how sincerely the individual representatives of those companies might seek to be independent from their employers, if that independence is not guaranteed by the process then for all intents and purposes it’s a pinky promise.
Informal discussions seem to indicate that, given a clear and ambitious vision of the problems that need to be tackled and an effective, credible governance model, there could be funding opportunities from non-profit organisations as well as from the I* (ISOC, ICANN, etc.).
There are reasons to believe that the W3C could operate on a smaller budget than today’s, which would also help with this issue.
How Do We Get There?
So you’ve now read well over 5,000 words on potential novel governance for the W3C, and as you’ll have noticed it’s barely an outline. Is this just for nerding out on Internet governance or could it be for real? I think the latter.
Discussions about W3C reform are just now taking place (amongst the membership), in the context of transitioning from the current baroque legal structure (you don’t want to know) to a more regular entity. They include the long-discussed Director-Free mode of operation, in which the organisation works without the benefit of a “benevolent dictator” (a term which Tim justifiably dislikes), various ideas for more robust funding (though still from corporate sources), a new board of directors, and more. I’ve been thinking about these issues for a while and discussing them with fellow travellers, and I am increasingly convinced that the member model assumed throughout those proposals is a shaky foundation on which to build, and the source of most of the issues people are working to address.
(I’d like to point out that the work that the current AB and allies are doing is nothing short of heroic, even if I disagree with the approach.)
So I do think that reform is possible and that now is a good time. This document is of course imperfect (I will update it as necessary) but I do believe there is a path forward. Things that you can do if you’re interested:
- Send me feedback! Discussing this on Twitter may be best so that others can join in; if longer discussions are needed, feel free to email me. Given enough interest, a repo for issues could work.
- If you don’t have feedback but would like this to work, say so and spread the word.
- Do you think that you might consider funding such a governance model? Please do reach out; testing out commitments matters.
- Likewise, if you believe that you would be interested in the regulation aspects (Regulatory Recommendations), please reach out as well. This bridge between legal and architectural regulation is an important component in ensuring that the Web can be effectively governed.
- If you are curious about building institutions, there’s a ton to read. A good article-length introduction to the commons is The Miracle of the Commons, and a good general audience book-length overview of Elinor Ostrom’s work and what commons can be used for (even in spaaaaace) is The Uncommon Knowledge of Elinor Ostrom: Essential Lessons for Collective Action. A more complex dive into institutions is Understanding Institutional Diversity. Beyond that, there is a wealth of academic work, notably from the Governing Knowledge Commons framework. I have a draft post with more details on this.
And that’s it — let’s go!
City map illustration generated using this beautiful tool made by @anvaka.