Gert, thank you for answering this question.

On Mon, May 9, 2011 at 9:02 AM, Gert Doering <gert@space.net> wrote:
> Hi,
>
> On Thu, May 05, 2011 at 05:11:33AM -0400, Martin Millnert wrote:
> > "Considering invalid routes for BGP decision process is a pure ***local policy matter*** and should be done with utmost care." (Emphasis mine)
> >
> > I am hoping you can give some practical examples of how one goes about considering routes invalid with utmost care.
> You could, for example, adjust routing preference in accordance with the availability of an RPKI signature:
>
> - prefer routes with a valid RPKI ROA
> - if no routes with a valid ROA can be found, consider routes with no ROA (neither matching nor invalid)
> - if no such routes can be found, accept any route, even if the ROA lists a wrong origin AS
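Just to check that I read that ordering correctly, here it is as a toy sketch in plain Python rather than router configuration (the route fields and state names are made up for illustration):

    # Toy model of the preference ordering above: prefer valid, then
    # not-found (no covering ROA), and fall back to invalid only as a
    # last resort. Field names like "rpki_state" are invented.

    def usable_routes(candidates):
        """Return the candidates in the most-preferred RPKI class present."""
        for state in ("valid", "not-found", "invalid"):
            subset = [r for r in candidates if r["rpki_state"] == state]
            if subset:
                return subset
        return []

    # If only an announcement with a mismatching ROA exists, it is still used:
    candidates = [{"prefix": "192.0.2.0/24", "origin_as": 64511,
                   "rpki_state": "invalid"}]
    print(usable_routes(candidates))

If I read it right, whoever holds the currently valid ROA for a prefix then always outranks everyone else, which is exactly the lever in the scenario below.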
So if some random LEA convinces the RIPE NCC (via means of legislation already under way according to Malcolm (transfer of resources to the LEA), or some $lawyer-soup not foreseen today), the abuser could, with the intent of a DDoS of the overtaken resources, start announcing routes backed by a valid RPKI ROA, and in this way steal traffic from anyone implementing at a minimum the above policy. The DDoS is successful, and RPKI has just censored a network off the Internet for those who implement at a minimum the above policy, which is the minimum you have to implement, else RPKI is pointless.

In other words: influencing route preference from a single point of authority remains an abuse vector. The only way to completely negate this abuse vector from the relying party's side is to not implement any policy based on any RPKI information. This is obvious, since the reverse side of the coin, securing routing, requires automatic policy handling (else why develop RPKI?). Assuming large carriers comply with LEAs under nearly all circumstances, this remains a threatening scenario.

It clearly suggests that great, great care must go into the choice of sources for the RPKI information. And with just one origin source of data, the RIPE database, should there not be some process of peer review* before any revocation or re-assignment can go into effect? Not just for resource certifications, but for resources overall? Should there not be a public, easily accessible log providing transparency and insight into the motivations for why these changes are made? (*By peer review I mean per-event peer review, performed by normal citizens, via more or less direct influence.)

In response to a call in the APWG session last Friday for the critics of RPKI to "help out, propose solutions": my preferred way of securing routing relies on heavy decentralization, which by necessity must include the RIRs' registry function, since this is an apparent single point of failure. The most obvious way to go about this, to me right now, is to transform today's resource allocations into resource ownerships, which can pave the way for a truly decentralized registry function.

I'm not convinced that a single point of failure is the most natural way to organize Internet uniqueness as we go forward. I'm also not alone in identifying the RIRs' motivation for RPKI as a way for them to establish ownership of the Internet and secure their future business, and I recognize that community opinion on whether this is good or bad differs. I think it is helpful to discuss this in more depth. The future is anything but certain in this area: in particular, it is going to be interesting to see what happens with regard to registry accuracy and relevance once legacy resources start to be traded without any RIR involvement.

With this I'm passing the ball: I'd appreciate seeing more cycles spent on considering plausible recovery protocols for the one-trust-anchor model once the first abuse is a fact. The goal ought to be an abuse-resistant system, and for that it must be assumed that the RIPE NCC, or any other single trust anchor, will not be able to withstand the abuse previously discussed. A measure of an abuse-resistant system is, obviously, one where an abuse, a DDoS, fails; the bar must be set a bit higher still. Going down this route still discomforts me greatly, since an absolutely foreseeable problem/flaw is knowingly being designed into the system from scratch.

A backup registry (of the RIPE NCC, in case it goes bad) was mentioned in the APWG session last Friday. Why stop at 1? Why not 1000s of them, or, say, 1 per ASN? To me it is now clear (since a recent epiphany :) ) that the underlying Internet infrastructure wants a distributed registry, and I don't think this desire is about to go away as IPv4 starts to be traded; quite the contrary.
> Randy demonstrated in the workshop last Monday morning how to do that in IOS. The implementation is such that the BGP engine doesn't *care* about the validity of a route/ROA, it will just mark the prefix with the result of the validation check - and then you can use the normal local policy language to influence your policy with that result.
>
> One *choice* could be "drop all routes that have a ROA mismatch" - or, as outlined above, "accept everything, but only use those routes as a last resort". Local policy decision.
See above.
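As an aside, for those who missed the workshop: the marking step itself is easy to sketch. Below is a rough Python rendering of the origin-validation check as I understand it (simplified, and not any particular implementation); what the router then does with the resulting state is, as you say, local policy:

    import ipaddress

    def validation_state(route_prefix, origin_as, roas):
        """Mark a (prefix, origin AS) pair as "valid", "invalid" or "not-found".

        roas: iterable of (roa_prefix, max_length, roa_origin_as) tuples.
        """
        prefix = ipaddress.ip_network(route_prefix)
        covered = False
        for roa_prefix, max_length, roa_as in roas:
            roa_net = ipaddress.ip_network(roa_prefix)
            if prefix.version != roa_net.version or not prefix.subnet_of(roa_net):
                continue
            covered = True
            if origin_as == roa_as and prefix.prefixlen <= max_length:
                return "valid"
        return "invalid" if covered else "not-found"

    roas = [("192.0.2.0/24", 24, 64500)]
    print(validation_state("192.0.2.0/24", 64500, roas))     # valid
    print(validation_state("192.0.2.0/24", 64666, roas))     # invalid (wrong origin)
    print(validation_state("203.0.113.0/24", 64500, roas))   # not-found (no covering ROA)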
> The workshop was very enlightening, to actually see how the pieces fit together and how local policy is applied to the data coming from the (various) RPKI data stores.
Can the software handle multiple (not 2, not 3, but n) competing trust anchors, possibly assign weights to them, compare matching resources, and produce one resulting set of non-conflicting resources and another set of conflicting resources and odd cases? That seems called for, to me at least, given the discussion above.

Kind Regards,
Martin
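PS: to make that last question a bit more concrete, the comparison I have in mind could look roughly like the sketch below (the trust anchor names, weights and the ROA tuple format are all invented for illustration; this is not any existing validator):

    from collections import defaultdict

    # Each (hypothetical) trust anchor publishes a set of ROAs:
    # (prefix, max_length, origin_as).
    ta_roas = {
        "ta-A": {("192.0.2.0/24", 24, 64500)},
        "ta-B": {("192.0.2.0/24", 24, 64500), ("198.51.100.0/24", 24, 64501)},
        "ta-C": {("192.0.2.0/24", 24, 64666)},  # disagrees on the origin AS
    }
    ta_weight = {"ta-A": 10, "ta-B": 10, "ta-C": 1}

    # Group the claims per prefix, remembering which anchors back each claim.
    claims = defaultdict(lambda: defaultdict(list))
    for ta, roas in ta_roas.items():
        for prefix, max_len, origin in roas:
            claims[prefix][(max_len, origin)].append(ta)

    non_conflicting, conflicting = {}, {}
    for prefix, variants in claims.items():
        if len(variants) == 1:
            ((payload, backers),) = variants.items()
            non_conflicting[prefix] = (payload, backers)
        else:
            # Rank competing claims by the summed weight of their backers;
            # what to do with the conflict set is then, again, local policy.
            ranked = sorted(variants.items(),
                            key=lambda kv: sum(ta_weight[t] for t in kv[1]),
                            reverse=True)
            conflicting[prefix] = ranked

    print("non-conflicting:", non_conflicting)
    print("conflicting (weight-ranked):", conflicting)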