Hi Randy,
as i know you are aware that non-trivial blocks of ipv4 space are no longer available from registries (except maybe afrinic), i do not understand how you think you are going to solve this at layer nine.
Perhaps I have not been clear: I am not advocating IPv4 for this. That decision lies with the agencies and I am not their representative. My role is simply trying to ensure a sane routing architecture, the same as I have been doing for 35 years now. I have zero financial interest in this and my employer is not involved (or supportive). The agencies do have experience and momentum with IPv4. I was simply pointing out their motivation. I’m here trying to get an infrastructure in place that makes it easy for agencies to do the right thing with IPv6.
how much space do you think you are going to need in a year, five, ten, ...?
I have no idea. How successful will we be at colonizing Mars? I do know that we should start planning for it. I would like to put the right groundwork in place now, because getting people to do the right thing later is impossible.
i sympathise with your wanting to avoid a trajectory which de-aggregates agencies' existing space. not a good start. i am trying to understand how you think this is going to scale.
I am hopeful that the agencies that operate the deep space communications links will be influential in ensuring scalability and efficiency. These are a shared resource and all agencies are motivated to be cooperative here. It seems obvious that if we proceed down a path of deaggregation we will not have a scalable, efficient system.
and what will intra-solar-system routing look like in 50 years? bgp-like? link state, with volatile links? we made the mistake of hacking addressing without routing once already.
I cannot predict the future for you. I can only point out that the topology is very likely to have serious bandwidth constraints between planetary objects and regions. We will not have an 800 Gbps link to Mars anytime soon. The links today remind me more of 1200 bps modems, along with long periods of disconnection. BGP and link-state protocols are not appropriate because of these link properties. I speculate that we will need a wholly different control plane that will do scheduled dynamic traffic engineering, taking into account volatile links, packet storage in relay nodes, and the finite buffer space in those nodes.
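A control plane of that kind would look less like BGP and more like contact-graph routing from delay-tolerant networking: routes are computed over a schedule of future link windows rather than over currently-up adjacencies. As a rough illustration only (the contact plan, node names, and delays below are made up, not any agency's actual data), a minimal sketch:

```python
import heapq

# A "contact" is a scheduled window during which a link exists:
# (window_start, window_end, src, dst, one_way_delay). All values
# here are illustrative, in arbitrary time units.
CONTACTS = [
    (0, 100, "Earth", "Relay", 5),
    (120, 200, "Relay", "Mars", 10),
    (50, 60, "Earth", "Mars", 20),   # a brief direct window
]

def earliest_arrival(src, dst, t0):
    """Earliest time data sent from src at t0 can reach dst,
    allowing it to wait in relay storage for a future window."""
    best = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dst:
            return t
        if t > best.get(node, float("inf")):
            continue
        for start, end, a, b, delay in CONTACTS:
            if a != node:
                continue
            depart = max(t, start)      # wait for the window to open
            if depart > end:
                continue                # window already closed
            arrive = depart + delay
            if arrive < best.get(b, float("inf")):
                best[b] = arrive
                heapq.heappush(pq, (arrive, b))
    return None

print(earliest_arrival("Earth", "Mars", 0))   # 70: the direct window wins
```

Note what this makes explicit that BGP cannot: a down link is not a failure but a scheduled fact, and "store and wait" is a first-class forwarding decision. Buffer limits at relay nodes would be an additional constraint on top of this sketch.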
why would anyone want to build this with IPv4?

Bandwidth is extremely constrained. Agencies may choose to optimize for performance.
do you really think v6 header size is that big a problem?
Yes. Again, 1200bps modems and mission efficiency push agencies towards lower overhead and what they already know.
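The overhead difference is easy to quantify. Assuming a 1200 bps link and a small 64-byte telemetry payload (both numbers illustrative, and counting only the minimum fixed headers):

```python
LINK_BPS = 1200   # illustrative deep-space link rate
PAYLOAD = 64      # assumed bytes of telemetry per packet

# Minimum fixed header sizes: IPv4 = 20 bytes, IPv6 = 40 bytes.
for name, hdr in (("IPv4", 20), ("IPv6", 40)):
    total = hdr + PAYLOAD
    secs = total * 8 / LINK_BPS          # serialization time
    overhead = hdr / total               # header fraction of packet
    print(f"{name}: {total} B/packet, {secs:.2f} s on the wire, "
          f"{overhead:.0%} header overhead")
```

At these numbers the IPv6 packet spends roughly 0.69 s on the wire versus 0.56 s for IPv4, with header overhead near 38% versus 24%. Real figures would also include link-layer framing, options, and any IPv6 extension headers, but the direction of the argument is visible even in the minimal case.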
if you could use ipv6, i can imagine the rir system finding a way to give you a big chunk or whatever. it would likely take some discussion, but it leaves the deaggregation problem on your side of the net.
I’m hoping that the RIR system will provide the addressing infrastructure so that we can make it easy for the agencies to support aggregation.
perhaps i am overly concerned by how this will scale in decades, with many agencies, countries, planets and moons, ... i can't help thinking it is déjà vu all over again. i do not like repeating the bad parts of history, only the good ones.
Exactly why I’m here. Tony