by 0naptoon
A dystopian future awaits us: a web where you have to be certified to access the internet.
The plan is simple.
- You get certified by handing over your personal data, a digital identity.
- You log in to a browser that supports this certification.
- Websites allow access if you have this certification; otherwise they block you or limit what you can do (see the sketch after this list).
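To make the mechanism concrete, here is a minimal sketch of how a website could enforce such gating on its server. This is entirely hypothetical: the header name and the verifyWithAttester() helper are assumptions for illustration, not part of any published spec.

```ts
// Hypothetical Express-style middleware illustrating how a site could gate
// access on an attestation token. verifyWithAttester() is an assumed helper
// standing in for whatever signature check an attester would require.
import express from "express";

const app = express();

app.use(async (req, res, next) => {
  const token = req.header("X-Environment-Integrity");
  if (!token || !(await verifyWithAttester(token))) {
    // Uncertified visitors are blocked or served a degraded experience.
    res.status(403).send("Unverified environment: access denied");
    return;
  }
  next();
});

// Placeholder: a real check would validate the attester's signature and the
// token's binding to this specific request.
async function verifyWithAttester(token: string): Promise<boolean> {
  return token.length > 0; // stub
}
```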
How Google will push this
- Block uncertified access to its own websites: YouTube, Gmail, Google Search.
- Cut off Google Ads for any website that doesn’t support the DRM.
The other media giants will soon follow Google’s lead. They will lobby for laws to mandate the DRM, with excuses such as protecting people from AI and foreign interference.
It is an uphill battle, but you need to do your part to block this by informing other people. Getting the word out is the only way to fight it.
via Vivaldi:
Google seems to love creating specifications that are terrible for the open web and it feels like they find a way to create a new one every few months. This time, we have come across some controversy caused by a new Web Environment Integrity spec that Google seems to be working on.
At this time, I could not find any official message from Google about this spec, so it is possible that it is just the work of some misguided engineer at the company with no backing from higher up. But the work seems to have gone on for more than a year, and the resulting spec is so toxic to the open Web that at this point, Google needs to at least explain how it could go so far.
What is Web Environment Integrity? It is simply dangerous.
The spec in question, which is described at https://github.com/RupertBenWiser/Web-Environment-Integrity/blob/main/explainer.md, is called Web Environment Integrity. The idea behind it is as simple as it is dangerous. It would provide websites with an API telling them whether the browser, and the platform it is running on, are trusted by an authoritative third party (called an attester). The details are nebulous, but the goal seems to be to prevent “fake” interactions with websites of all kinds. While this seems like a noble motivation, and the use cases listed seem very reasonable, the proposed solution is absolutely terrible and has already been equated with DRM for websites, with all that it implies.
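For concreteness, the explainer sketches a JavaScript API roughly along these lines. This is a TypeScript rendering of the draft; the method name and token shape are taken from the explainer but may change, so treat it as illustrative only.

```ts
// Sketch of the API shape proposed in the draft explainer; subject to change.
export {};

declare global {
  interface Navigator {
    // Proposed in the draft: returns an attestation bound to the given content.
    getEnvironmentIntegrity(contentBinding: string): Promise<{ encode(): string }>;
  }
}

// A site requests an attestation tied to a specific action...
const attestation = await navigator.getEnvironmentIntegrity("/account/login");

// ...and forwards the opaque, signed token to its server, which verifies the
// attester's signature before deciding whether to serve the real content.
await fetch("/account/login", {
  method: "POST",
  headers: { "X-Environment-Integrity": attestation.encode() },
});
```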
It is also interesting to note that the first use case listed is about ensuring that interactions with ads are genuine. While this is not problematic on the surface, it certainly hints at the idea that Google is willing to use any means of bolstering its advertising platform, regardless of the potential harm to the users of the web.
Despite mentioning the very real risk of excluding vendors (read: other browsers), the text only makes a lukewarm attempt at addressing the issue and ends up without any real solution.
So, what is the issue?
Simply put, if an entity has the power to decide which browsers are trusted and which are not, there is no guarantee that it will trust any given browser. Any new browser would be untrusted by default until it had somehow demonstrated its trustworthiness, at the discretion of the attesters. And anyone stuck on legacy software that does not support this spec would eventually be excluded from the web.
To make matters worse, the primary example given of an attester is Google Play on Android. This means Google decides which browser is trustworthy on its own platform. I do not see how they can be expected to be impartial.
On Windows, they would probably defer to Microsoft via the Windows Store, and on Mac, they would defer to Apple. So, we can expect that at least Edge and Safari are going to be trusted. Any other browser will be left to the good graces of those three companies.
Of course, you can note one glaring omission in the previous paragraph. What of Linux? Well, that is the big question. Will Linux be completely excluded from browsing the web? Or will Canonical become the decider by virtue of controlling the Snap package repositories? Who knows. But it’s not looking good for Linux.
This alone would be bad enough, but it gets worse. The spec hints heavily that one aim is to ensure that real people are interacting with the website. It does not clarify in any way how it aims to do that, so we are left with some big questions about how it will achieve this.
Will behavioral data be used to see if the user behaves in a human-like fashion? Will this data be presented to the attesters? Will accessibility tools that rely on automating input to the browser cause it to become untrusted? Will it affect extensions? The spec does currently specify a carveout for browser modifications and extensions, but those can make automating interactions with a website trivial. So, either the spec is useless or restrictions will eventually be applied there too. It would otherwise be trivial for an attacker to bypass the whole thing.