"DNS Vulnerabilities" paper hits the mainstream
Emin Gun Sirer's paper/presentation at RIPE52 has been picked up by the BBC:

http://news.bbc.co.uk/1/hi/technology/4954208.stm

Any thoughts on how to respond to that?
--On den 30 april 2006 08.46.25 +0100 Jim Reid <Jim@rfc1035.com> wrote:
Emin Gun Sirer's paper/presentation at RIPE52 has been picked up by the BBC:
http://news.bbc.co.uk/1/hi/technology/4954208.stm
Any thoughts on how to respond to that?
Not having read the BBC article, I think something along the lines of:

"Yes, we know. Emin's work points out some of the far-gone consequences of not paying attention. We are, however, pretty convinced that:

1. The mentioned examples are extremes. Most of the namespace is in considerably better order.

2. DNS has historically been a neglected part of the quality control most web site operators perform. It simply is so redundant and ubiquitous that it is not seen as a critical part.

3. The ultimate fix for this is DNSSEC."

...or so.

--
Måns Nilsson          Systems Specialist
+46 70 681 7204       cell
KTHNOC +46 8 790 6518 office
MN1334-RIPE

Inside, I'm already SOBBING!
http://news.bbc.co.uk/1/hi/technology/4954208.stm
Any thoughts on how to respond to that?
Not having read the BBC article, I think something along
"Yes, we know. Emin's work points out some of the far-gone consequences of not paying attention. We are, however pretty convinced that: ...
i think this proposed response is a fine one. if anybody here wants to use the public.oarci.net CMS as a publication point for such a response, just sing out. it's possible that a longer treatment would be useful, in which case i can recommend that it be written as an IETF BCP in the dnsop WG.
Hi,
Emin Gun Sirer's paper/presentation at RIPE52 has been picked up by the BBC:
http://news.bbc.co.uk/1/hi/technology/4954208.stm
Any thoughts on how to respond to that?
Maybe someone 'official' should contact the BBC and try to cool this down a bit. People might get scared :) It is good that attention is given to the risks of badly secured DNS servers, but scaring the public like this... - Sander
On Apr 30, 2006, at 19:25, Sander Steffann wrote:
Maybe someone 'official' should contact the BBC and try to cool this down a bit.
FWIW I have contacted the BBC asking them to present a more balanced report. I doubt anything will come of that. Even if the BBC does publish a correction it will be an uphill battle to explain the details of how DNS actually works to a puzzled BBC journalist with a deadline to meet. Niall O'Reilly said he posted something through the "have your say" feature of the BBC web site. Perhaps if others on this list did likewise....
People might get scared :) It is good that attention is given to the risks of badly secured DNS servers, but scaring the public like this...
Indeed. Though personally speaking, I don't accept Sirer's methodology, let alone his conclusions about "vulnerabilities" or badly secured name servers. Which doesn't for a moment mean the DNS has no vulnerabilities or badly secured servers. These do of course exist. Just not in the way Emin Gun Sirer has suggested.
Hi, On Apr 30, 2006, at 19:25, Jim Reid wrote:
On Apr 30, 2006, at 19:25, Sander Steffann wrote: [...]
People might get scared :) It is good that attention is given to the risks of badly secured DNS servers, but scaring the public like this...
Indeed. Though personally speaking, I don't accept Sirer's methodology, let alone his conclusions about "vulnerabilities" or badly secured name servers. Which doesn't for a moment mean the DNS has no vulnerabilities or badly secured servers. These do of course exist. Just not in the way Emin Gun Sirer has suggested.
I completely agree. I already mentioned it to some people at RIPE-52, but I forgot to mention it here. It's also on slashdot: [Perils of DNS at RIPE-52] http://it.slashdot.org/article.pl?sid=06/04/26/1247240 - Sander
On 30 Apr 2006, at 20:02, Jim Reid wrote:
Niall O'Reilly said he posted something through the "have your say" feature of the BBC web site.
FYI, here's what I posted.

Although the observations described at http://news.bbc.co.uk/1/hi/technology/4954208.stm are interesting and raise important issues, their relation to the conclusions made appears to be at best only tenuous. Internet experts are far from convinced of the rigour of Prof. Sirer's logic. It is disappointing to see BBC so ready to report just one side of a story.

/Niall
On Apr 30, 2006, at 5:40 PM, Niall O'Reilly wrote:
On 30 Apr 2006, at 20:02, Jim Reid wrote:
Niall O'Reilly said he posted something through the "have your say" feature of the BBC web site.
FYI, here's what I posted.
Although the observations described at http://news.bbc.co.uk/1/hi/technology/4954208.stm are interesting and raise important issues, their relation to the conclusions made appears to be at best only tenuous. Internet experts are far from convinced of the rigour of Prof. Sirer's logic. It is disappointing to see BBC so ready to report just one side of a story.
I'm not disagreeing with anybody here. I'm not exactly sure what I personally believe at this time, and I hope that this discussion helps me make up my mind. But I keep thinking back on an old security tool that I used to use and the implications of its design on this issue.

Many years ago Dan Farmer wrote a tool to audit the security of a system called COPS. This was built on the idea that the security of a system can be no greater than the security of its weakest link. How can the "security of the DNS system" be considered as any better than the security of the parent servers? This is the basis for the CoDoNS investigation.

Using an example from the paper: if the FBI has a delegated server that can be easily hijacked, then a significant number of queries for information in the "fbi.gov" domain could be subverted with invalid info. This is a security issue, and it is not an issue under the direct control of the FBI (except for their decision to base their operation on a third-party service).

Isn't this the same type of security issue evaluated with COPS? Isn't this just an issue of cascading of trust? Why should one situation be considered acceptable while another is unacceptable?

Please understand, I am not convinced that CoDoNS is any improvement to the existing DNS system. I am still trying to develop my own informed opinion rather than regurgitating what Cornell or the BBC says.

Bill Larson
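Bill's weakest-link argument can be sketched in a few lines: the effective security of a resolution chain is bounded from below by its least-secure link. The zone names and scores below are entirely hypothetical, invented only to illustrate the shape of the argument.

```python
# Weakest-link model: a chain is only as secure as its worst component.
# Scores are made up (0 = trivially hijackable, 10 = well run).

def weakest_link(chain_scores):
    """Return the minimum security score along a dependency chain."""
    return min(chain_scores.values())

# Hypothetical chain for the fbi.gov example in the discussion:
fbi_gov_chain = {
    "root": 9,
    "gov": 8,
    "third-party-dns-provider": 3,  # the outsourced server in question
    "fbi.gov": 7,
}

print(weakest_link(fbi_gov_chain))  # → 3
```

The point of the toy model is that the outsourced provider's score dominates, regardless of how well fbi.gov itself is run.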
On May 1, 2006, at 01:15, Bill Larson wrote:
How can the "security of the DNS system" be considered as any better than the security of the parent servers?
Because the parent is not usually authoritative for its children. Sure, the parent could insert bogus delegation info: a fake glue or NS record. But this is little different from a slave server for the child that tells lies about the zone. If anything, a lying slave is probably much worse because the cache poisoning heuristics in a decent implementation will give more credence to what an authoritative child has to say than a non-authoritative parent.
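The cache-poisoning credence Jim describes is essentially a data-ranking rule (in the spirit of RFC 2181 section 5.4.1): authoritative child data outranks non-authoritative parent glue, so a resolver will overwrite cached glue with an authoritative answer but not the other way round. The rank values and record labels in this sketch are illustrative, not taken from any real resolver.

```python
# Minimal sketch of resolver data ranking: higher-credibility data may
# replace lower-credibility cached data, never the reverse.
# Rank values are illustrative only.

RANKS = {
    "authoritative_answer": 3,     # answer from a server auth. for the zone
    "authoritative_authority": 2,  # authority-section data from that server
    "parent_glue": 1,              # glue from the non-authoritative parent
}

def should_replace(cached_rank, new_rank):
    """Accept new data only if it is at least as credible as the cache."""
    return RANKS[new_rank] >= RANKS[cached_rank]

# Parent glue is displaced by the child's authoritative answer...
print(should_replace("parent_glue", "authoritative_answer"))  # → True
# ...but a lying parent cannot displace authoritative child data.
print(should_replace("authoritative_answer", "parent_glue"))  # → False
```

This is why a lying slave (which answers authoritatively) is the worse threat: its data carries the higher rank.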
Using an example from the paper. If the FBI has a delegated server that can be easily hijacked, then this would mean that a significant number of queries for information in the "fbi.gov" domain could be subverted with invalid info. This is a security issue and it is not an issue under the direct control of the FBI (except for their decision to base their operation on a third party service).
One would hope that if someone outsources DNS service to a third party, that will be subject to a contract which includes performance levels, problem escalation, response to security incidents as well as criminal or civil penalties for non-compliance. I'd get those safeguards buying a cup of coffee, so why not when buying DNS service?
Isn't this the same type of security issue evaluated with COPS?
I don't think so.
* Jim Reid:
Any thoughts on how to respond to that?
It's one of those PR attacks. Potentially very costly, but since no particular product or company is targeted, no real harm is done this time.
Any thoughts on how to respond to that?
http://www.secret-wg.org/Secret-Archive/RIPE52-SWG_files/Slide0003.gif :-) --Olaf
Jim Reid wrote on 04/30/2006 09:46:25 AM:
Emin Gun Sirer's paper/presentation at RIPE52 has been picked up by the BBC:
http://news.bbc.co.uk/1/hi/technology/4954208.stm
Any thoughts on how to respond to that?
I responded to this on different fora, and some have asked me to iterate it here, so here goes:

I saw the presentation of this study and its results at RIPE, and it was more marketing than science.

First off, the survey tracked dependencies on servers whose names were neither in the delegated zone nor in their own zone (out of bailiwick). Some dependency graphs showed more than 600 nodes. The survey sorted names by node count and went on to say "the higher the dependency, the more vulnerable a name". The conclusions in the end were twofold: 1) the old wisdom of having more server dependency is bad, and 2) a new form of DNS is needed.

Add 1) The "old wisdom" was, imho: more authoritative servers for a single name, fewer points of failure, etc. What it does _not_ mean (i.e. a grave mistake by the authors) is that resolution graphs should be long and wide (i.e. net resides on ns1.com, com resides on ns1.org, org resides on ns1.edu, etc.). Meanwhile, caching was never mentioned.

The big message was that somebody who abuses a vulnerability in one of those 600 nodes would 0wnz (sic) the name, while in my view a hacker would own some part of the resolution graph, depending on where the vulnerable node hangs in the tree, and not automagically the entire name.

To add some sugar to this, the presentation went on to show that 17 percent of the tested servers had "known" vulnerabilities, which then translated to 45% of the names being trivially hijackable, though no accurate methodology was given. The authors made the mistake of confusing protocol with implementation. Dependency is not equal to vulnerability.

In the process, some high-profile name servers at Berkeley were mentioned, and it was suggested that their operators were not professionals and that they did not understand the high dependencies.
The authors of the paper (which resulted in this presentation) came to the conclusion that server dependency on out-of-bailiwick servers in DNS is a bad thing, and hence a new kind of DNS is needed, discarding the obvious solution of recommending in-bailiwick glue.

Add 2) It turned out that this "new DNS" was already defined: some form of DNS using distributed hash tables, Beehive/CoDoNS. And of course, at rival Berkeley, there is a similar project: Chord.

Conclusion: this was no less than a marketing talk. They have a solution, and they need to sell it. In order for it to look good, make the old solution and the competition look bad. A marketing study, not science. Nothing original and nothing new (DJB warned about this: http://cr.yp.to/djbdns/notes.html, "gluelessness"). Scare tactics at most.

Meanwhile, the CoDoNS server set itself has issues. It responds to responses. A few packets would effectively bring the whole CoDoNS infrastructure down. Sure, these bugs can be fixed. But if that argument is allowed for CoDoNS, it should be allowed for generic DNS implementations. Building a new protocol on the fact that there exist vulnerabilities in current implementations is circular: you'll have more bad implementations that will result in new protocols....

Roy

PS: the CoDoNS folk have been informed about the vulnerability in their software.
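The out-of-bailiwick dependency graphs Roy describes (net served from ns1.com, com from ns1.org, and so on) amount to a transitive closure over "which zones host this zone's nameservers". A toy version of that computation, over an entirely made-up delegation map, looks like this:

```python
# Toy reconstruction of the survey's dependency graph: collect every
# zone a name's resolution transitively depends on. The delegation map
# is hypothetical, invented only to mirror Roy's ns1.com/ns1.org example.

DELEGATIONS = {
    # zone        -> zones hosting that zone's nameservers
    "example.net": ["com"],  # e.g. served by ns1.example-dns.com
    "com":         ["org"],  # hypothetical out-of-bailiwick hop
    "org":         ["edu"],
    "edu":         [],       # in-bailiwick: no external dependency
}

def dependency_set(zone, seen=None):
    """Transitive closure of zones this zone's resolution depends on."""
    seen = set() if seen is None else seen
    for hosting_zone in DELEGATIONS.get(zone, []):
        if hosting_zone not in seen:
            seen.add(hosting_zone)
            dependency_set(hosting_zone, seen)
    return seen

print(sorted(dependency_set("example.net")))  # → ['com', 'edu', 'org']
```

Roy's objection maps onto this picture directly: compromising one node in the set yields a subtree of the resolution graph, not automatically the name itself, and caching shrinks the exposure further.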
participants (10)
- Bill Larson
- Florian Weimer
- Jim Reid
- Jim Reid
- Måns Nilsson
- Niall O'Reilly
- Olaf M. Kolkman
- Paul Vixie
- Roy Arends
- Sander Steffann