An interesting proposal, but merging an external data set with the RIPE Database raises some questions:
- The RIPE Database is already set up to contain hierarchical data. With this proposal, we would take some of this data outside the database in a manner that does not guarantee consistency with the database itself. For example, 192.168/16 could specify a geofeed file that contains different prefixes than the children of said inetnum (or covers a range that is not even listed in the RIPE Database).
- Or, if the maintainer of 192.168/16 is different from the maintainers of its children, the owner of 192.168/16 can effectively override whatever is below it whenever, e.g., the owner of 192.168.1/24 doesn't set geofeed: on its own inetnum.
- It's also possible that ranges in the RIPE Database only partially overlap with the prefixes in the geofeed file. E.g. you could specify an inetnum range '192.168.0.17 - 192.168.0.25' in the RIPE Database, while the geofeed CSV allows prefixes only. That would make for some interesting choices when dealing with inconsistency in client code.
- It's repeated multiple times in the proposal that "many RPSL repositories have weak if any authentication". But it also says: "When using data from a geofeed file, one MUST ignore data outside of the inetnum: object's inetnum: attribute's address range". These statements seem contradictory to me: on the one hand, don't trust the authentication of inet[6]num objects; on the other hand, rely on exactly that inetnum range to decide which geofeed entries to accept?
- Although it's mentioned that clients should not request the geofeed file too often, the proposal doesn't specify what "too often" means. The lack of a clear limit will lead to fragmentation in this area.
- A clear specification of the geofeed: attribute is necessary to prevent abuse. For example, what stops me from specifying a geofeed URL like "https://google.com/?q=pr0n", or anything similar? Furthermore, when the geofeed: attribute is presented in HTML, for example on the RIPE Database whois query form, it could be abused to target the user's browser and cookies directly.
- The RIPE Database can be downloaded as text dumps from a single location, and many clients make use of this functionality. Nothing similar exists for the geofeed: attribute, so each client has to visit each of the URLs present in the RIPE Database text dump. That is N clients fetching M URLs, which doesn't seem like a scalable solution. It would effectively scatter geolocation data across the whole internet instead of keeping it in one place, with all the advantages and disadvantages that come with that.
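To illustrate the range-vs-prefix mismatch mentioned above: Python's stdlib can show that an arbitrary inetnum range rarely maps to a single CIDR prefix (a sketch using the example range from above):

```python
import ipaddress

# The RPSL inetnum range from the example above: not CIDR-aligned.
start = ipaddress.ip_address("192.168.0.17")
end = ipaddress.ip_address("192.168.0.25")

# The smallest set of CIDR prefixes covering the range -- four of them,
# so no single geofeed CSV line can correspond to this one inetnum.
prefixes = list(ipaddress.summarize_address_range(start, end))
print([str(p) for p in prefixes])
# -> ['192.168.0.17/32', '192.168.0.18/31', '192.168.0.20/30', '192.168.0.24/31']
```

A client would have to decide whether such an inetnum matches one, some, or none of the geofeed lines.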
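The MUST-ignore rule quoted above amounts to a filtering step on the client side. A minimal sketch (the function name is mine; the CSV layout is the RFC 8805 one, prefix,country,region,city,postal):

```python
import csv
import io
import ipaddress

def filter_geofeed(csv_text, inetnum_prefix):
    """Keep only geofeed lines whose prefix falls inside the inetnum range
    the geofeed: attribute was found on (illustrative, not from the draft)."""
    covering = ipaddress.ip_network(inetnum_prefix)
    kept = []
    for row in csv.reader(io.StringIO(csv_text)):
        if not row or row[0].startswith("#"):
            continue  # skip blank and comment lines
        prefix = ipaddress.ip_network(row[0].strip())
        # subnet_of() raises on mixed address families, so check version first.
        if prefix.version == covering.version and prefix.subnet_of(covering):
            kept.append(row)
    return kept

feed = "192.168.1.0/24,NL,NL-NH,Amsterdam,\n10.0.0.0/8,DE,,,\n"
print(filter_geofeed(feed, "192.168.0.0/16"))
# -> [['192.168.1.0/24', 'NL', 'NL-NH', 'Amsterdam', '']]
```

Note that this filtering only works if the client trusts the inetnum range in the first place, which is exactly the tension pointed out above.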
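On the abuse point: a client or web front-end could at least apply basic URL sanity checks and escape the attribute value before rendering it. A hypothetical sketch (check_geofeed_url and render_attribute are my own names, not part of any spec):

```python
import html
from urllib.parse import urlparse

def check_geofeed_url(url):
    """Minimal sanity checks before fetching a geofeed URL
    (illustrative only; the proposal defines no such checks)."""
    parsed = urlparse(url)
    if parsed.scheme != "https":
        raise ValueError("geofeed URLs must use https")
    if not parsed.netloc:
        raise ValueError("geofeed URL lacks a host")
    return url

def render_attribute(url):
    """Escape the value before embedding it in HTML, so a crafted
    URL cannot inject markup into the whois query page."""
    return "geofeed: " + html.escape(url, quote=True)

print(render_attribute('https://example.net/geofeed.csv"><script>alert(1)</script>'))
# -> geofeed: https://example.net/geofeed.csv&quot;&gt;&lt;script&gt;alert(1)&lt;/script&gt;
```

Escaping addresses the browser-side injection, but not the "what content may the URL point to" question, which needs an answer in the specification itself.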
My personal impression of the proposal is quite mixed. On one hand, it's nice to see initiatives to extend the usability of the RIPE Database and, within reasonable limits, fix the current situation around geolocation. On the other hand, the proposed solution would achieve this in a decentralized manner, and then sort of (ab)use the RIPE Database as a main anchor point, without adding any new features to the database itself. For example, the geofeed data could just as well be published in in-addr.arpa DNS responses; it is not strictly tied to data already available in inet[6]num, it merely has a 1:1 relationship to it.
Personally, I think this data could be integrated into the RIPE Database itself, which would fix all of the consistency issues and would offer a clear, single entry point to all data.
Cheers,
Agoston