clarifying questions for oonib.md
At 2013-08-05 14:56:14 Stephen Soltesz wrote: @hellais , @aagbsn
Hi, guys, sorry for the confusion. I posted this issue a few weeks ago, related to the oonib spec, but I guess it was in the wrong place. So I'm copying it here and associating it with the "Fully Specified" milestone, even though that milestone is already closed. If the spec is complete, then these questions should be answerable pretty quickly; maybe during the next call.
Copied from: https://github.com/TheTorProject/ooni-spec/issues/16
@meredithmeredith & I are walking through the oonib spec and have a few clarifying questions.
Starting with the oonib spec, I’ve tried to outline what I think is a sequence of operations that ‘probe’ and ‘oonib’ would take during a typical use-case on M-Lab. Where steps or the mechanisms used are ambiguous to us, I’ve added some clarifying questions.
So please read this issue first as a high-level question that's basically asking: "Is the sequence of operations representative?" If not, how do you envision the operations proceeding? Second, if the broad strokes are representative, then what about the inline questions?
Sequence of probe operations and interactions with the oonib api:
- probe contacts a "naming service" to locate an oonib collector, based on some criteria (geographic location maybe, or random).
- probe fetches list of operator-curated decks from oonib
- probe chooses a deck (perhaps based on user input)
- If the test deck includes references to operator-curated inputs (also hosted on an oonib), then the probe will fetch the specific inputs & input-ids
- the probe will run the nettests specified by the test deck using the inputs as arguments.
- Clarifying questions:
- Is it correct that a single nettest could need multiple test-helpers?
- What happens if a nettest (or test deck) needs test helpers on multiple systems (e.g. to test port 80 behavior)? My understanding is that this would require another 'name service' lookup to find test helpers of a different type.
- Also, I recall a discussion in Berlin about validation of uploaded reports. Specifically, at the time of report upload, it is necessary to determine that the "expected test-helper" and the "used test-helper" are of the same type. This helps eliminate false positives caused by report errors where the test helper actually used does not match the one expected. This validation requires that the report be uploaded to the collector co-located with the test helper. Can test decks be created to support the above?
- the results of the nettests are saved locally by the probe.
- the probe will upload the results to oonib using the report creation/update api.
- oonib validates that uploaded data is part of an operator-curated input & test deck. The oonib policy specifies nettest names and input ids that a probe can run and upload to the oonib.
- Clarifying questions:
- The arguments to "create a new report" include test_name, test_version, and input_hash. Do test_name/test_version refer to the nettest or to the test deck?
- If the nettest, is a new report created for each nettest upload? If the test deck, how are individual nettests validated, and how is policy specified for them?
- Can you comment some more on the interaction between "report creation"/"report updates" and the policy APIs?
- probe completes the test deck and closes the report at oonib. This archives the report and passes it along to the M-Lab collection pipeline.
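For concreteness, here is a rough sketch of the sequence above from the probe's side. This is only an illustration of the flow as we understand it, not the actual oonib API: the endpoint paths, field names (`input_id`, `report_id`, etc.), and the one-report-per-nettest assumption are all placeholders pending answers to the questions above.

```python
def run_probe_session(http, run_nettest):
    """Illustrative probe-side flow. `http` is any object with
    get(path) and post(path, body) methods returning parsed JSON;
    `run_nettest` executes one nettest and yields its measurement
    entries. All paths and field names are assumptions."""
    # 1. Fetch the list of operator-curated decks and pick one
    #    (perhaps based on user input).
    decks = http.get("/decks")
    deck = decks[0]

    results = []
    for nettest in deck["nettests"]:
        # 2. Fetch the operator-curated inputs the deck references.
        inputs = http.get("/inputs/%s" % nettest["input_id"])

        # 3. Open a report -- assuming test_name/test_version refer
        #    to the nettest and one report is created per nettest.
        report = http.post("/report", {
            "test_name": nettest["name"],
            "test_version": nettest["version"],
            "input_hash": nettest["input_id"],
        })
        report_id = report["report_id"]

        # 4. Run the nettest on the inputs and upload each result
        #    via the report update API.
        for entry in run_nettest(nettest, inputs):
            results.append(entry)
            http.post("/report/%s" % report_id, {"content": entry})

        # 5. Close the report so oonib archives it and passes it
        #    along to the collection pipeline.
        http.post("/report/%s/close" % report_id, {})

    return results
```

Writing the flow this way also makes the open questions concrete: whether step 3 happens per nettest or per deck, and how the policy API constrains which (test_name, input_hash) pairs step 3 will accept.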
Thanks so much.
This issue was automatically migrated from github issue https://github.com/TheTorProject/ooni-probe/issues/156