If you've spent any amount of time discussing reforms to improve privacy online, you've likely encountered the Big Knob Theory. Like Covid, it comes in variants, but its core tenet can be summarised thus: there exists (metaphorically) a Big Knob that can be turned either towards "privacy" or towards "competition", but it's very much a zero-sum game and you can't have both. It's a popular position; but is it true?
There has been ample debate in some tech circles as to just how much of a privacy war is really being waged. My personal sense is that it's not so much a war as a reality check. It has become painfully obvious that the same old simple solutions don't work, and some people are up in arms that reality is being inconvenient to them.
Working behind the scenes in the news media sector, I have become increasingly convinced that tech and the internet as they operate today are causing structural damage to our collective institutions that runs deeper than is generally understood. But these changes can be hard to characterise. If you take advertising revenue from high-quality contexts and use it to subsidise conspiracy theories, it's pretty obvious that nothing good will happen; yet we barely have an understanding of the data economy sufficient to put a critique of this transfer on solid footing. If you move editorial decisions about what information people get to see first from tens of thousands of career editors around the world, working with methods that differ widely in their biases, culture, skill, and politics, to a tiny number of algorithms that are everywhere uniform, what effect can you expect? Reputational and market-driven accountability will be removed, which is evidently bad, but the massive simplification of this ecosystem seems very likely to have deep-running consequences of its own. How do you begin proving that? A new paper in PNAS points the way forward.