There has been ample debate in some tech circles as to just how much of a privacy war is really being waged. My personal sense is that it's not so much a war as a reality check. It has become painfully obvious that the same old simple solutions don't work — and some people are up in arms that reality is being inconvenient to them.
The Global Privacy Control is making steady progress towards adoption. As a global signal supported by browsers, it's a natural question to ask what it means under regimes such as the GDPR. Here's my personal take.
Working behind the scenes in the news media sector, it has become increasingly clear to me that tech and the internet as they operate today are causing structural damage to our collective institutions that runs deeper than is generally understood. But these changes can be hard to characterise. If you take advertising revenue from high-quality contexts and use it to subsidise conspiracy theories, it's pretty obvious that nothing good will happen — but we barely have an understanding of the data economy sufficient to put a critique of this transfer on solid footing. If you move editorial decisions about what information people get to see first from tens of thousands of career editors, working with methods that are highly diversified in biases, culture, skill, and politics the world around, to a tiny number of algorithms that are everywhere uniform, what effect can you expect? Reputational and market-driven accountability will be removed, which is evidently bad, but the massive simplification of this ecosystem seems very likely to have deep-running consequences of its own — and how do you begin proving that? A new paper in PNAS leads the way forward.
The British competition regulator (the CMA) just released a draft agreement with Google relating to the "Privacy Sandbox". I take a quick look at it through the lens of enabling better standards and stronger cooperation between the world of standards and policy.
It's time to lift the coronavirus travel ban — ideally entirely but failing that at least for vaccinated non-immigrant visa holders.
Consenting to sharing your personal data with a third party is not a problem that the Internet invented for us: social scientists have been struggling with the issue in their experiments for decades. Macartan Humphreys’s “Reflections on the Ethics of Social Experimentation”, which provides a framework within which to consider the consent of populations being studied, lists no fewer than eight consent strategies and the ethical considerations that surround them. True to style, the Internet just came along and made things bigger.
With so much public doomsaying over A.I., optimistic views on the matter make for a welcome contribution to a healthier debate on the topic. There is, however, a gap between optimism and unwarranted confidence into which Mr. Pande's recent Artificial Intelligence’s ‘Black Box’ Is Nothing to Fear seems to have fallen.
La Celle-Saint-Cloud. I wake up and, already, I'm snickering.
I didn't want to address this topic. In French politics, the veil — and more generally anything touching on Islam — is a red rag used on the right as on the left to divert attention from the total absence of any political project, and this summer's burkini saga is no exception. Unfortunately, it's a subject that shows no sign of drying up, to the point that the LR primaries have come down to a contest over who can piss against a mosque the longest. At this stage, a few clarifications are in order.
By and large, it’s one of those teapot squabbles that the intersection of Twitter and Open Source can easily make more heated than enlightening. My interest here isn’t to pick a side and point fingers hither and thither, if only because it’s pretty hard to point fingers and type at the same time. Rather, I wonder if the heat could not be harnessed to cook up something useful.