On Wed, 09 May 2012 11:20:38 +0200 Philip Homburg <philip.homburg@ripe.net> wrote:
* TTM shutdown, Atlas is expected to provide functionality similar (but not identical) to what TTM provides

Do you plan to change the measurement control protocol used in TTM? I do not know which protocol TTM uses, but if it is OWAMP or TWAMP, it will not fit the needs of large-scale measurement campaigns run from the network edge.

TTM boxes have GPS devices for time synchronization. That allows them to perform accurate one-way measurements. This capability will be lost.
Maybe not. There are two ways to cope with the loss of the GPS device:

* The smart one: if you use a Linux or FreeBSD kernel, you can use http://www.cubinlab.ee.unimelb.edu.au/radclock/ which is very accurate (less than a microsecond) and cheap.

* The dumb one: the time keeping is done by the control/configuration server, which means that no timestamp is sent to the "probe", only the difference between the current time and the measurement timestamp, calculated on the server. It is not very accurate, but it allows measurements to run without the "probe" being synchronized. The main problem here is that the protocol is so dumb that the time the packets spend in transit is never accounted for anywhere. If one-second accuracy is enough, it is the simplest solution.
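The "dumb" scheme above can be sketched as follows; the function and parameter names are hypothetical illustrations, not part of any existing TTM or Atlas protocol:

```python
def server_time_offset(measurement_timestamp: float, server_now: float) -> float:
    """Return the difference between the server's current time and the
    timestamp recorded for a measurement. The probe never receives an
    absolute timestamp, only this offset, so it needs no clock sync.

    Note: the network transit time of the packets carrying the result
    is not accounted for anywhere, so accuracy is roughly one second.
    """
    return server_now - measurement_timestamp

# Hypothetical usage: the server computes the offset for a result
# it received, timestamped at t=1000.0, when its own clock reads 1002.5.
offset = server_time_offset(measurement_timestamp=1000.0, server_now=1002.5)
```

The point of the sketch is only that all time arithmetic lives on the server side, which is why the probe can stay completely unsynchronized.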
The TTM network is relatively small and static. Atlas can easily handle that, except that you will be limited to two-way measurements.
In what Atlas calls a "probe" (and what I call the measurement agent)?
Yes. Except that an Atlas probe tends to be a physical device as well.
I hope the host functionality and the measurement functionality have been designed with some kind of separation :p
* Roll out of Atlas Anchor boxes (regular PCs at well connected locations that can serve as the target of measurements and as a more powerful Atlas probe)

Sounds like a good idea :)
You should then add a tag to the measurement result that makes it possible to distinguish the type of box running the measurement agent, like
"generated": "atlas-probe"
"generated": "atlas-box"
for example.

We still have to figure out where we want to document meta-data. It doesn't make much sense to put all the data about a probe in each and every measurement result.
In some kind of "hello" command that describes the probe itself, and nothing else. In your system, the measurement agent will probably expose two orthogonal APIs:

* One to submit a measurement;
* One to gather information on the measurement host (OS type, packets in flight, and so on).

I say probably, because we have also discussed a pipelined architecture of measurements that would not make this kind of distinction:

measurement 1 -OK-> measurement 2 -OK-> measurement 3
     |
     KO -> stop measurement

which makes it possible to mimic a decision tree without much added complexity: measurement 2 waits for a result, and whether it runs can be conditional on that result. The latter is probably the better choice.
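The pipelined architecture above could be sketched like this; the measurement functions and the chaining convention (a failed step returns nothing) are hypothetical illustrations, not an existing API:

```python
from typing import Callable, Optional

# A measurement receives the previous step's result (None for the first
# step) and returns its own result, or None to signal KO.
Measurement = Callable[[Optional[dict]], Optional[dict]]

def run_pipeline(measurements: list) -> Optional[dict]:
    """Run measurements in sequence, feeding each one the previous
    result. A KO (None) stops the pipeline, which mimics a decision
    tree: a step only runs if the step before it succeeded."""
    result: Optional[dict] = None
    for step in measurements:
        result = step(result)
        if result is None:  # KO -> stop measurement
            return None
    return result

# Hypothetical example: a ping used as a gate before a traceroute.
def ping(_prev):
    return {"rtt_ms": 12.3}

def traceroute(prev):
    return {"hops": 7, "gated_on_rtt": prev["rtt_ms"]}

final = run_pipeline([ping, traceroute])
```

The design choice this illustrates is that conditional execution lives in the pipeline itself, so the agent does not need a separate API to ask "did the previous measurement succeed?".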
* Better UDM interface
* UDM for all RIPE members (instead of just probe hosts and sponsors)

Eye candy.
No, it is not eye candy. UDM allows users of the Atlas system to measure their own targets using remote probes.
I see. The interface is then the protocol used by users to define their own measurements. The workflow should define, in some way, a "central authority" that allows a measurement to be run. As far as I can see, UDM is a moderated measurement-campaign generator that can be hosted anywhere, as long as the approved configurations go to the configuration server and are distributed to the "probes".
We have this as internal documentation, but it should be published some time.
Let me know when the dust has settled and RIPE publishes them.
We do not have those security problems, thanks to the choice of OCaml on the implementation side. The security mechanism will probably be the same as the one you can find on most "web services" (a shared secret, salted and hashed) to ensure that a REST transaction is legitimate. I have to think about it some more ...
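A minimal sketch of such a scheme, assuming a salted HMAC-SHA256 over the request body keyed by the shared secret (the field names and the JSON body are hypothetical, not any existing Atlas or grenouille.com protocol):

```python
import hashlib
import hmac
import os

def sign_request(shared_secret: bytes, body: bytes) -> dict:
    """Produce a random salt and an HMAC-SHA256 of salt+body keyed by
    the shared secret, so the server can check the request is legit."""
    salt = os.urandom(16)
    digest = hmac.new(shared_secret, salt + body, hashlib.sha256).hexdigest()
    return {"salt": salt.hex(), "signature": digest}

def verify_request(shared_secret: bytes, body: bytes,
                   salt_hex: str, signature: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    salt = bytes.fromhex(salt_hex)
    expected = hmac.new(shared_secret, salt + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# Hypothetical usage over a REST transaction carrying a JSON body.
token = sign_request(b"secret", b'{"measurement": "ping"}')
ok = verify_request(b"secret", b'{"measurement": "ping"}',
                    token["salt"], token["signature"])
```

Salting each request makes the signature different every time, and the constant-time comparison avoids leaking the digest byte by byte.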
Our security policy goes further than just protecting the probe. We also try to avoid getting the probe hosts in trouble. For example, having a probe visit certain web sites may be a bad idea.
Is the policy enforced on the configuration server? (That is what we intend to implement, via some kind of automation when possible.) I should write down the big picture after some talks, by the end of May.
For API and JSON syntax standardisation, the first step is to write down the specifications that we (grenouille.com) plan to use and those that Atlas uses and plans to use, then discuss and factor out the best of each. We have some writings, but most of them are in French :)
Yes.
Great. We are going to translate some that are already written, and document the whole architecture in detail.

Cheers.

--
Jérôme Benoit aka fraggle
La Météo du Net - http://grenouille.com
OpenPGP Key ID : 9FE9161D
Key fingerprint : 9CA4 0249 AF57 A35B 34B3 AC15 FAA0 CB50 9FE9 161D