Improved privacy for children is sorely needed, but it is hard to design in a way that is broadly applicable and enforceable without either creating risk for children or encouraging businesses to avoid knowing that children use their services at all. I think that there might be a way forward, however!
As I’ve mentioned before, I’m an avid reader of L.M. Sacasas’s Convivial Society which I feel offers some of the best thinking on technology while always remaining clear and accessible, with no sensationalism and an abiding humanity. So when Mike wrote in the latest instalment (III.9) that he was offering seven theses “less cautiously formulated” with the thought that they might elicit some responses, I found jumping in hard to resist.
Towards the end of December, news enthusiasts can relish the awe-inspiring annual spectacle of Year in Review pieces migrating majestically across the media landscape. In the privacy ecosystem, a similarly choreographed opining ballet takes place every year around May 25 as we collectively propound theories as to why the GDPR hasn't solved the climate emergency and made us all good at karaoke just yet.
In this digital hellscape of ours, what is it that we talk about when we talk about privacy? We talk about power. Concentrations of data are concentrations of power, or, as the freshly-minted first public draft of the W3C’s Privacy Principles states, “asymmetries of information and of automation create commanding asymmetries of power.” That’s the problem to which privacy is the solution.
The W3C is nearing its 30th year of existence, and the Consortium’s community is working on reforming it. All of the proposals made to date assume that it will remain a membership organisation — I have a different suggestion.
For the longest time, word was that the Web would free the world; now, it seems as if most of the world wants to liberate the Web. I believe that we can tackle decentralisation and antitrust more effectively by developing a pragmatic and concrete approach to capture resistance inspired by the methods of cybersecurity.
If you've spent any amount of time discussing reforms to improve privacy online, you've likely encountered the Big Knob Theory. Like Covid, it comes in variants, but its core tenet can be summarised thus: there exists (metaphorically) a Big Knob that can be turned either towards "privacy" or towards "competition" — but it's very much a zero-sum game and you can't have both. It's a popular position; but is it true?
There has been ample debate in some tech circles as to just how much of a privacy war is really being waged. My personal sense is that it's not so much a war as a reality check. It has become painfully obvious that the same old simple solutions don't work — and some people are up in arms that reality is being inconvenient to them.
The Global Privacy Control is making steady progress towards adoption. As a global signal supported by browsers, it's a natural question to ask what it means under regimes such as the GDPR. Here's my personal take.
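For the curious, GPC is a simple signal in practice: supporting browsers send a `Sec-GPC: 1` request header (and expose `navigator.globalPrivacyControl` to scripts). A minimal sketch of checking it server-side might look like this — the helper name is illustrative, not part of any spec:

```python
def gpc_opt_out_requested(headers: dict) -> bool:
    """Return True when the request carries a GPC opt-out signal.

    Per the GPC spec, the Sec-GPC header carries the string "1" when
    the user has enabled the signal; any other value is not a valid
    signal. Real frameworks normalise header-name case; a plain dict
    lookup, as here, is case-sensitive.
    """
    return headers.get("Sec-GPC") == "1"

# Example: a request from a GPC-enabled browser.
request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}
print(gpc_opt_out_requested(request_headers))  # True
```

Whether receiving that bit constitutes a valid objection under the GDPR is precisely the question the post takes up.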
Working behind the scenes in the news media sector, it has become increasingly clear to me that tech and the internet as they operate today are causing structural damage to our collective institutions that runs deeper than is generally understood. But these changes can be hard to characterise. If you take advertising revenue from high-quality contexts and use it to subsidise conspiracy theories, it's pretty obvious that nothing good will happen — but we barely have an understanding of the data economy sufficient to put a critique of this transfer on solid footing. If you move editorial decisions about what information people get to see first from tens of thousands of career editors, working with methods whose biases, culture, skill, and politics are highly diversified the world around, to a tiny number of algorithms that are everywhere uniform, what effect can you expect? Reputational and market-driven accountability will be removed, which is evidently bad, but the massive simplification of this ecosystem seems very likely to have deep-running consequences of its own — and how do you begin proving that? A new paper in PNAS leads the way forward.