Waiting for the singularity
With all these Web@25, W3C@20, WHAT@10, and of course Robin@37 worldwide celebrations, it seems like as good a time as any to take a few minutes to muse about where we’ll be ten years from now. The content here is not meant to be particularly serious (or anywhere near complete); I operate under conservative assumptions, for instance that the world doesn’t get overrun by an evil AI in the meantime, and I’m not interested in every small detail. These are some of my opinions, and while I very, very much welcome opinionated disagreement, I certainly don’t mind that parts of them may feel outrageous to some, nor how wrong they may turn out to be.
So. This is 2024.
The notion of a distinction between native and Web applications is but a distant memory, joining Flash, door-slam popups, “Web 2.0”, and table-based layout in the dustbin of history. This blurring of the app is due to the joint operation of several converging factors. Deep integration of Web Intents throughout the stack means that users flow between what in 2014 were called “pages” and “apps” without noticing the difference, in effect layering a new type of richer linking atop the Web. The URL bar has largely disappeared for many non-geek users, which further enhances the sentiment of seamlessness. Additionally, large consumer-electronics conglomerates have finally aligned the entirety of their product lines on Web-based systems, leading to further integration.
An additional factor that helps with this blurring of the apps (not to be confused with the Blurring of the Apes, a delightfully post-apocalyptic way of getting blind drunk), but that is well worth looking at on its own, is the rise of the single-page application. In effect, the vast majority of web sites, even those that are mostly content-oriented, are now single-page apps (SPAs).
This happened for multiple reasons. SPAs are a natural architecture for highly interactive, application-like content, but they are in fact also a good choice for content. Most content web sites require some surrounding template that provides context, navigation, and a variety of classic services. Having to reproduce these templates on the server and transmit the same bytes over and over again is wasteful, and saving yourself from having anything to do with HTML on the server side is a boon. Developers had therefore long wanted to handle the templating and content insertion entirely on the client side, but they were stymied by the fact that search engine crawlers had initially been unable to spider such content — and everyone wants to be indexed. But when crawlers grew the ability to process SPAs, this limitation vanished, opening the floodgates and quickly upgrading much of the Web to SPA-based authoring.
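The client-side templating story above can be sketched in a few lines of plain JavaScript. The `{{key}}` placeholder syntax and all names here are made up for illustration, not taken from any real framework: the point is only that the template travels once and each page thereafter is just a small data payload.

```javascript
// Hypothetical sketch of client-side templating in an SPA: the server sends
// only content data, and the client fills in the shared site template.
function renderTemplate(template, data) {
  // Replace {{key}} placeholders with values from the data object;
  // unknown keys are left untouched.
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in data ? String(data[key]) : match
  );
}

const template =
  "<h1>{{title}}</h1><nav>{{nav}}</nav><article>{{body}}</article>";
const page = renderTemplate(template, {
  title: "Waiting for the singularity",
  nav: "home | archive",
  body: "This is 2024.",
});
// The surrounding template is fetched once; only the small JSON payload
// crosses the wire for each subsequent "page".
```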
There was an amusing revival “PHP Conference” last year, but it largely flopped because not even those who remember those days really believe we ever developed anything like that.
A side-effect of the rise of SPAs was a massive, seemingly overnight growth in the JSON universe. It is through this vernacular process of hugely uncoordinated publishing of JSON data that the Web of Data finally happened. Virtually none of this data relied on the RDF data model or even on JSON-LD (even in its simpler subset that does not turn it into RDF/JSON) as the publishers were for the most part unaware that such options were available to them, let alone understood how they might help or work.
With this, the Web initially filled with a proliferation of many, many JSON formats describing all manner of things. This remains the case to this day, but over time people started aligning on common ad hoc vocabularies for some of the more reusable aspects. Additionally, this wealth of JSON data made it desirable to be able to use JSON end-to-end in the stack easily. This created the need for Web Schema, the bastard spawn of JSON Schema and the data model captured by HTML forms. This alignment has made Web Schema immediately useful to developers in that it can generate useful forms, validate the data on both client and server sides naturally, and produce useful documentation where needed. Many small, community-driven JSON-based standards have taken over the world.
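Since Web Schema is (for now) speculative, here is only a toy illustration of the idea that one schema can drive both validation and form generation. Every name below is invented for the sketch; a real design would look closer to JSON Schema than to this.

```javascript
// Toy sketch in the spirit of the hypothetical "Web Schema": a single
// schema object both validates JSON and yields form field descriptors.
const schema = {
  title: { type: "string", required: true },
  year:  { type: "number", required: true },
};

function validate(schema, data) {
  const errors = [];
  for (const [field, rule] of Object.entries(schema)) {
    if (!(field in data)) {
      if (rule.required) errors.push(`${field}: missing`);
    } else if (typeof data[field] !== rule.type) {
      errors.push(`${field}: expected ${rule.type}`);
    }
  }
  return errors; // empty array means the document is valid
}

function toFormFields(schema) {
  // The very same schema drives form generation on the client.
  return Object.entries(schema).map(([name, rule]) => ({
    name,
    input: rule.type === "number" ? "number" : "text",
    required: !!rule.required,
  }));
}

const errors = validate(schema, { title: "Web@25", year: "2014" });
// errors: ["year: expected number"] — the year arrived as a string
```

The same `validate` call runs unchanged on client and server, which is the end-to-end JSON story in miniature.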
Another side-effect from the SPA take-over is how common Service Workers and frameworks built on top of them have become.
In turn, Service Workers have become loadable prior to any content, leading to a proliferation of tailored but interoperable packaging solutions and ending three decades of angsty soul-searching on the topic. Interestingly, the same ability to pre-process arbitrary content before it is loaded applies to audio and video, which given the performance now available have become essentially JS self-decodable formats, putting an end to the seemingly endless stranglehold of the codec IP mafia.
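The routing decision at the heart of those packaging frameworks can be sketched as pure logic. The package format and names below are hypothetical; in a real Service Worker this would sit inside a `fetch` event handler, answering package hits with `event.respondWith(new Response(entry))`.

```javascript
// Sketch of how a Service Worker might serve a packaged site: map request
// URLs onto entries of a pre-loaded package before any network fetch.
const pkg = {
  "/": "<html>app shell</html>",
  "/app.js": "console.log('booted');",
};

function routeRequest(pkg, url) {
  const path = new URL(url).pathname;
  if (path in pkg) {
    // Served straight from the package: no network round-trip at all.
    return { from: "package", body: pkg[path] };
  }
  return { from: "network", body: null }; // fall through to fetch()
}

const hit = routeRequest(pkg, "https://example.org/app.js");
const miss = routeRequest(pkg, "https://example.org/data.json");
```

The same pre-processing hook is what makes the JS-decodable media formats mentioned above possible: the worker sees the bytes before the page does.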
This boon to packaging formats was instrumental in finalising the great blurring, and today items such as books or music albums are indistinguishable from applications or web sites. DRM stopped being a serious issue when universal proof of purchase became an option, and it was quickly applied to all digital content sold.
Motivation for the universal proof of purchase and its payment system requirement finally produced the impetus to solve the login problem. Unsurprisingly, it was addressed by simply decoupling it from the useless identity-related requirements that had plagued all previous attempts in that space. A browser logs in by providing the service with a specific crypto-blob that is entirely identity-opaque, and is synchronised across the user’s browsers and devices with browser-sync (an idea initially shunned by implementers but imposed by the EU in order to protect consumers from being locked into a given browser or device vendor).
In the late 10s someone implemented a complete XSL-FO processor in pure CSS. It was fun to build but not just that: the publishing industry switched to full Web standards and drank champagne.
We are now so surrounded by Web-addressable devices that managing their interactions has become a common problem. A whole industry of applications built atop Network Service Discovery makes it possible to control how your camera can tell your car to produce the best amount and colour of lighting for a dusk shot, and a host of other vital inter-device interactions (e.g. http://nodered.org/). We have some common HTTP-based operations that can be used to address all manner of devices, and while some are still too limited to interact directly with a format as heavy as JSON, they can make use of the unexpected marriage of JSON, Web Schema, and EXI to produce tiny messages that match their energy needs and processing abilities.
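What that marriage buys you can be shown with a cartoon version of schema-informed encoding. This is emphatically not the actual EXI format (which is a binary XML encoding), just the underlying trick: when both ends share the schema, keys never need to travel, only values in schema order. The device schema and fields are invented for the example.

```javascript
// Toy illustration of schema-informed compaction: both ends agree on the
// schema, so messages carry values only, in schema order. Not real EXI.
const deviceSchema = ["on", "brightness", "hue"]; // agreed field order

function compact(schema, msg) {
  return schema.map((key) => msg[key]); // strip keys, keep values
}

function expand(schema, values) {
  // Rebuild the keyed object on the receiving side.
  return Object.fromEntries(schema.map((key, i) => [key, values[i]]));
}

const msg = { on: true, brightness: 80, hue: 240 };
const wire = JSON.stringify(compact(deviceSchema, msg)); // "[true,80,240]"
const roundTripped = expand(deviceSchema, JSON.parse(wire));
// The wire form is a fraction of the size of the keyed JSON message,
// which is what a battery-powered lamp actually cares about.
```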
As part of this, the TV industry eventually cottoned on to the fact that producing industry-specific profiles of general-purpose Web technology, and trying to deploy the result in walled gardens that only matched the requirements of traditional, entrenched players, was a wonderful way to leave oneself wide open to brutal disruption from the outside. (About twenty years ago, a lot of the mobile industry, and notably a company called “Nokia”, found out about this the hard way.) As a result, they switched to a fully-Web stack with an open model, and what used to be a machine for serially producing couch potatoes reinvented itself as a hub for home intelligence.
At some point someone had the epiphany that enabling developers to produce high-performance content is at least as important as the raw performance of the runtime itself. This finally led the DOM to catch up to jQuery in usability, making it possible to live without the library and thereby remove its loading impact.
Also from the epiphany department, it at some point dawned on the scientific community that producing vaster amounts of knowledge than in all of prior history combined, and then storing most of it in PDF or other image formats, was tantamount to building an interstellar ship complete with FTL drive only for the fun of running it into the Moon because the fireworks look pretty. Combined with greater support for MathML, a specialised dialect of HTML eventually surfaced for scientific publishing and was recently made the default format on the arXiv.
I’d mention HTML, but that’s just buried in the foundational layers these days. Sure enough you can do some nicer things than in 2014, and a lot has evolved since, particularly since the old kitchen-sink specification was split up into manageable chunks and opened up for community maintenance. But that also engendered the distinction between stable foundational layers and more innovative parts that gradually stopped being called “HTML”, since much of the time they had little to do with markup. You’ve got bidirectional bindings, and forms are now much richer with extensible primitives, even sporting some sensible date integration. SVG can be freely mixed in everywhere, and serviceable support for editing freed up time for developers to make operational transformations the default, basic, expected behaviour for all applications that involve even a modicum of collaboration.
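For the archaeologically inclined, the core trick of operational transformation fits in a few lines. This sketch handles only concurrent inserts and uses invented names; real collaborative editors also transform deletes, handle more peers, and tie-break equal positions, but the convergence idea is exactly this.

```javascript
// Minimal operational-transformation sketch: adjust a remote insert against
// a concurrent local insert so both peers converge on the same document.
function transformInsert(local, remote) {
  // If the local insert landed at or before the remote position, the
  // remote insert must shift right by the local text's length.
  if (local.pos <= remote.pos) {
    return { pos: remote.pos + local.text.length, text: remote.text };
  }
  return remote; // otherwise it is unaffected
}

function applyInsert(doc, op) {
  return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Two peers edit "Hello world" concurrently.
const base = "Hello world";
const a = { pos: 5, text: "," };  // peer A wants "Hello, world"
const b = { pos: 11, text: "!" }; // peer B wants "Hello world!"

// Each peer applies its own op, then the transformed remote op.
const docA = applyInsert(applyInsert(base, a), transformInsert(a, b));
const docB = applyInsert(applyInsert(base, b), transformInsert(b, a));
// Both converge on "Hello, world!"
```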
Back in 2014, before everything was on the Web, it was common for people involved in the creation of Web technology — and notably standards — to look at industries yet to be disrupted by the Web and wonder, with perhaps a hint of smugness, how soon they would “get it” and allow themselves to be properly “disrupted by the Web”.
It is both ironic and heartening in retrospect to reminisce on such musings since, in 2014, no organisation involved in the creation of Web standards fully abided by the definition of “open” that the Web community held back then (let alone today’s). While the intervening years have surely seen their occasional tension, it is great to see once again how nothing stops an idea whose time has come. ∎