Indefensible: The W3C Says Companies Should Get to Decide When and How Security Researchers Reveal Defects in Browsers
The World Wide Web Consortium has just signaled its intention to deliberately create legal jeopardy for security researchers who reveal defects in its members' products, unless the security researchers get the approval of its members prior to revealing the embarrassing mistakes those members have made in creating their products. It's a move that will put literally billions of people at risk as researchers are chilled from investigating and publishing on browsers that follow W3C standards.
It is indefensible.
When the W3C embarked on its plan to create a standardized DRM system for video on the World Wide Web, EFF told them it was a bad idea, pointing out that such a system could be covered under Section 1201 of the DMCA, which provides for criminal and civil penalties for people who tamper with DRM, even for legitimate purposes, including security disclosures, accessibility adaptation for people with disabilities, and making innovative, competitive products and services (almost every other country has its own version of this law).
The W3C told us that it was only concerned with the technological dimensions of the work, not the legal ones -- if the problem was the DMCA, we should do something about the DMCA (we are).
But the W3C has a tried-and-true method for resolving conflicts between open standards and technology law. In the W3C's earliest days, it wrestled with the question of software patents, and whether to allow its members to assert patents over the standards they were creating. In the end, the W3C became an open standardization trailblazer: it formulated a patent policy that required its members to surrender the right to invoke their patents in lawsuits as a condition of participating in the W3C process. It was a brilliant move, and it made the W3C the premier standards body for the web.
We proposed that the W3C should extend this existing policy to cover the world's DRM laws. We suggested that W3C members should have to surrender their DMCA 1201 rights, making legally binding promises not to use DRM law to attack security researchers, technologists adapting browsers for disabled people, and innovative new entrants to the market.
This proposal has picked up steam. Hundreds of security researchers have endorsed it, as have dozens of W3C members, from leading research institutions like Eindhoven, Oxford and Lawrence Berkeley Labs to leading nonprofits that work for disabled people, like the UK's Royal National Institute for Blind People, Vision Australia, Braillenet in France, and Benetech in the USA; and browser vendors like Brave and cryptocurrency companies like Ethereum. This measure has also been integrated into the leading definition of an "open standard."
But last weekend, the W3C signaled that it would ignore all of these concerns and instead embrace and extend the legal encumbrances created by its DRM work: it is creating a parallel working group to develop "voluntary guidelines" for members to consult when deciding whether to use the legal rights the W3C has created for them with EME to silence security researchers.
Companies can and should develop bug bounty programs and other ways to work with the security community, but there's a difference between companies being able to say, "We think you should disclose our bugs in this way," and "Do it our way or we'll sue."
In almost every circumstance and in almost every country, it is lawful to disclose true facts about defects in products. No one -- especially not the companies involved -- gets to dictate to security researchers, product reviewers, and users when and how they can discuss mistakes that manufacturers have made. Security facts, like most facts, should be legal to talk about, even if they make companies unhappy.
By its own admission, the W3C did not set out to create a legal weapon that would give companies the unheard-of power to force silence upon security researchers who have discovered critical flaws in products we all entrust with our financial details, personal conversations, legal and medical information, and control over our cameras and microphones.
Considered separately from DRM standardization, this new project would be most welcome. The W3C is just the sort of place where we'd like to see best practices guidelines for offering incentives to use managed disclosure processes.
But in creating a DRM standard, the W3C has opted to codify and reinforce the legal weaponization of DRM law, rather than dismantling it. It is bad enough that the W3C has summarily dismissed the concerns of new entrants into the browser market and of organizations that provide access for disabled people -- but on security, it has gone even further, departing from the technological standards business to become a legal arms dealer.
We at EFF call on the W3C to reconvene its earlier negotiations to defang the legal consequences of its DRM work, and in so doing to transform its security disclosure work from a weapon to a normative guideline. It's one thing to help companies figure out how to make an attractive offer to the researchers who investigate browser security, but it's another thing altogether to standardize browser technology that empowers companies to sue the researchers who decline to take them up on the offer.