I've read the document and think it's a very good start, but needs a small amount of work.

On Tue, Jan 28, 2014 at 05:42:17PM -0500, Meredith Whittaker wrote:
> I would suggest removing the target audience -- here started as law enforcement and governments -- and dedicating this work more broadly to anyone who's interested in this topic and would like a basic understanding
So, this paragraph in the draft was one I liked very much: the target audience is too often forgotten, and naming it helps focus the document. It also frames expectations on the reader's side.
> to content. This expands the document a bit, but I think presents a clearer
Speaking of that: 27 pages is _huge_, I'd hope the final result had no more than, say, 10.
> to prevent access to child pornography (the canonical example), and to silence political speech and quiet debate that threatens those in power, &c. Insofar as this is a document focused on the means, not the ends, speculating on "good" vs. "bad" modes of filtering/blocking, even implicitly, leads quickly to our having to justify one or another ethical viewpoints, and I think confuses the clarity of the document.
Seconded. Also, the actors (LEA in this case) could better be left out. It's not important for the method who's asking/ordering/threatening/volunteering.
> more detail? Do we want to expand on specific modes of blocking (DPI/filtering boxes, and their similarities and differences, for example)?
The draft could benefit from a terminology pass sooner rather than later. We've already had some debate about "authoritative servers", where what was actually meant was registrations at the second (or third, for that matter) level.

I'd also suggest skipping the part about domain name "takedowns". It has similar side effects but is really different from filtering.

Speaking of side effects, the language chosen in that section sounds tentative and defensive to me ("may", "could be"). While that works in an academic debate, the other party has no doubt whatsoever that they are doing the Right Thing.

While at it, I'd not support the myth that DNSSEC and suppressing DNS responses are incompatible. The changes are either detected and suppressed by the validating resolver or injected at that very place (again, the ISP), so the result usually is one of two things: you receive the government-enhanced response carrying the seal of the validator, or you get an error response. In either case the end user still can't access the site.

Careful with the risk assessments:

> DNS blocking techniques may be used to defeat cybercrime too, by blocking
> those domain names which are dedicated to frauds, phishing or malware
> distribution (viruses, trojans, #). If users decide to change their device
> configuration and use public open resolvers to access (over-) blocked
> content any local anti-cybercrime activity is vanished.

Sounds either like an encouragement to restrict/regulate access to alternative resolution mechanisms or like a "then don't do that".

Finally, again on wording, apologies: we should not call the mechanisms "content filtering", because all they do is fiddle with the levels of indirection that mediate access to the content, rather than with the content itself. To that extent, referring to the DNS as the "phone book" has helped quite a few times. People understand that de-listing a number does not make it unreachable. YMMV.
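To make the DNSSEC point concrete: tampering with a response is either detected by a validating resolver (an error, so the site is unreachable) or applied at the validator itself (a rewritten answer carrying the validator's seal). Here is a toy model of just that decision logic -- not real DNSSEC; the domain, addresses, and "signature" scheme are all hypothetical stand-ins:

```python
# Toy model of DNSSEC vs. DNS response suppression. The keyed hash stands in
# for an RRSIG; "blocked.example" and the addresses are made up for the sketch.
import hashlib

ZONE_KEY = b"example-zone-key"  # stand-in for the zone's signing key


def sign(name: str, addr: str) -> str:
    """Stand-in for an RRSIG: a keyed hash over the record."""
    return hashlib.sha256(ZONE_KEY + name.encode() + addr.encode()).hexdigest()


# The authoritative answer for a hypothetical blocked domain.
AUTHORITATIVE = {
    "blocked.example": ("192.0.2.1", sign("blocked.example", "192.0.2.1")),
}


def isp_resolver(name: str, block: bool):
    """ISP resolver that rewrites blocked names to a landing page."""
    addr, sig = AUTHORITATIVE[name]
    if block:
        # Rewritten answer: the original signature no longer matches the data.
        return ("198.51.100.9", sig)
    return (addr, sig)


def validating_stub(name: str, block: bool) -> str:
    """Client-side validator: a verified answer, or an error on tampering."""
    addr, sig = isp_resolver(name, block)
    if sig == sign(name, addr):
        return addr
    return "SERVFAIL"  # tampering detected -- the site is still unreachable
```

Either branch illustrates the argument: an honest validator turns the rewritten answer into SERVFAIL, while suppression done at the validating resolver itself would hand out the rewritten answer "with the seal of the validator". Either way the user doesn't reach the site, so DNSSEC and response suppression are not incompatible.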
I thought CENTR had something, but all I can dig out is <https://www.centr.org/domain-name-system>.

-Peter

PS: my contribution to the bikeshed part of the debate: for diversity, but especially in a European context, we could use something other than the COM gTLD in the examples. That's even more important for those parts that I suggested to skip, since it will emphasize that ICANN is not in the game in many cases.

PPS: thanks, Pier Carlo, for taking the initiative!