On Wed, Apr 06, 2005 at 10:22:49PM +0200, Iljitsch van Beijnum wrote:
On 6-apr-05, at 19:20, Mike Hughes wrote:
The 200 /48s rule does fail the job, in the present environment.
Still waiting for the evidence on this one... Can someone show me a request that has been turned down that shouldn't have?
I've seen an application fail from an LIR planning to make a large (/35) allocation to a downstream ISP, which would then have made 2000+ end-user /48 assignments. The direct assignments from the LIR itself would not have totalled more than 20, so either the ISP would have to become an LIR to receive its own allocation (from which an allocation could then be made for its upstream's use), or the upstream LIR would have to assign directly to the ISP's customers.

Perhaps more relevant to this particular discussion: we (uk.linx) house a number of other projects to whom we currently make v4 assignments. But since our primary area of business is not being an ISP, getting 200 customers to service in this manner is not exactly our top priority; 10 is probably closer to the mark. Even so, our needs and the needs of our customers are the same as those of any 200-customer-plus organisation, so what other differences are there (engineering or otherwise) between the 'us' and the 'them'? There are others in the same position. Perhaps the NCC have some useful stats.
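For anyone wondering about the numbers: the prefix arithmetic is simple, since a /35 contains 2^(48-35) = 8192 /48s, more than enough to cover 2000+ end-user assignments. A rough sketch in Python, purely illustrative; the helper name is my own, not anything from the policy text:

    # Illustrative only: how many /48 assignments fit in an IPv6 allocation.
    def count_48s(alloc_prefix_len):
        """/48s contained in an allocation with the given prefix length."""
        if not 0 <= alloc_prefix_len <= 48:
            raise ValueError("expected a prefix length between 0 and 48")
        return 2 ** (48 - alloc_prefix_len)

    print(count_48s(35))  # 8192 -> covers the 2000+ assignments mentioned above
    print(count_48s(32))  # 65536 -> a default /32 LIR allocation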
Because there are networks which are not end sites, which do make assignments to customers (just not 200 of them right now, or within two years), which cannot wait for the i*tf to get off the pot with whatever multi6/shim6 thing they are doing, and which cannot wait the N years it will take for vendors to implement whatever comes out of the i*tf.
Just curious: why is waiting suddenly a problem? IPv6 has been a long time coming. (And the letter you're looking for is "E".)
Well, the noise from customers about when they can have IPv6 is certainly getting louder. Deals are being made and broken by an ISP's ability to offer a viable IPv6 solution, and it hurts especially badly when an ISP cannot receive an allocation in the first place because its customer count simply isn't deemed worthy enough.
In case my opinion isn't clear, I support the proposal as it stands.
If there is a genuine and well-founded concern about pulling the "200 number" without some other form of safeguard, maybe we could go with the proposal as it stands, but add a commitment (by the Chairs? the group as a whole? the NCC?) to table a review of the situation once the i*tf do come up with something and there is vendor support for it?
I'm still waiting for those requests that were turned down, but in the meantime I think it might be a good idea to instruct the hostmasters that, for a limited time (such as 2 years) and a limited number of prefixes (say 256), they should evaluate PA requests that don't meet the 200 requirement and determine that they're not for "stealth PI" or some other less-than-legitimate purpose, without specifying explicit limits.
So how exactly do you propose to determine what's 'stealth PI' and what's not? Surely if an LIR is going to be assigning to other end-site organisations, we're looking at a genuine PA request, regardless of how many assignments will be made.
When this experimental period is finished, we can evaluate which requests were granted and which were denied, and distill a new policy at that point.
And when the NCC do come back with this magic metric as to what (exactly) a PA-worthy LIR is, for how long do you expect that data to be accurate? Do we really want to have to review this policy every 2 years as we discover another corner case that disproves the last batch of changes?

Andy

-- 
Andy Furnell <andy@linx.net>          Mob: +44 (0) 7909 680019
London Internet Exchange              http://www.linx.net