Changes between Version 10 and Version 11 of org/projects/projectM

Jan 19, 2013, 7:22:58 PM



Saturday, January 19
11a: OONI Tutorial (Arturo)
IRC help from Arturo: #ooni (Arturo is hellais).
There is some setup to be done.
Walkthrough of
Base class is NetTestCase.
The inputFile class attribute determines the inputs that are used.
UsageOptions specifies command-line options (a subclass of Twisted's UsageOptions class). Use self.localOptions to retrieve the values of the various options. Required options can be set with the requiredOptions attribute.
Test methods have a "test_" prefix; anything prefixed test_ gets called, and the results are appended to the report: self.report['foo'] = bar gets written into the report.
self.input refers to the current input.
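The conventions above can be sketched in a few lines. This is a self-contained illustration of the dispatch pattern, not the real ooni.nettest API; the class and method names here are made up for the example.

```python
# Self-contained sketch of the NetTestCase conventions described above:
# anything with a "test_" prefix gets called for each input, and whatever
# the test writes into self.report ends up in the report for that input.
# (Illustrative only; the real base class is OONI's NetTestCase.)

class NetTestCaseSketch:
    inputs = [None]  # stand-in for the inputs picked up via inputFile

    def run(self):
        reports = []
        for current_input in self.inputs:
            self.input = current_input   # self.input is the current input
            self.report = {}
            for name in dir(self):       # call every test_* method
                if name.startswith("test_"):
                    getattr(self, name)()
            reports.append(self.report)
        return reports

class ExampleTest(NetTestCaseSketch):
    inputs = ["example.com", "torproject.org"]

    def test_hostname_length(self):
        # self.report['foo'] = bar gets written into the report
        self.report["hostname_length"] = len(self.input)

reports = ExampleTest().run()
```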
Test Templates
Base templates give the programmer utility class methods that perform operations whose results are then included in the report.
Based on Scapy.
Familiarize yourself with the concept of a Deferred in the Twisted framework; the callback style results in nested functions.
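To see why deferred-based code ends up as nested functions, here is a minimal stand-in for a Deferred (not the real twisted.internet.defer.Deferred) that mimics its synchronous callback chaining:

```python
# Minimal stand-in for a Twisted-style Deferred (NOT the real
# twisted.internet.defer.Deferred), showing why deferred-based test
# code ends up as chains of nested functions.
class MiniDeferred:
    def __init__(self):
        self._callbacks = []

    def addCallback(self, fn):
        self._callbacks.append(fn)
        return self

    def callback(self, result):
        # fire the chain: each callback receives the previous result
        for fn in self._callbacks:
            result = fn(result)
        return result

def fetch_words():
    d = MiniDeferred()

    def parse(body):   # nested function: runs once the "response" arrives
        return body.split()

    def count(words):  # chained callback: runs on parse()'s result
        return len(words)

    d.addCallback(parse)
    d.addCallback(count)
    return d

d = fetch_words()
word_count = d.callback("measure all the networks")  # the "network" fires it
```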
Walkthrough of different types of tests.
Question: What if I want to run a DNS test and an HTTP test as part of the same test? Do we have to inherit from both? Can compose. Hopefully no clashing keys.
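The composition idea above can be sketched with plain multiple inheritance. Class and key names here are illustrative, not the real OONI template API; the "no clashing keys" hope corresponds to the two templates writing disjoint report keys:

```python
# Sketch of composing a DNS-style and an HTTP-style template via
# multiple inheritance, as discussed above. Names are illustrative,
# not the real OONI templates; report keys must not clash.
class DNSTemplateSketch:
    def record_dns(self, answer):
        self.report["dns_answer"] = answer

class HTTPTemplateSketch:
    def record_http(self, status):
        self.report["http_status"] = status

class CombinedTest(DNSTemplateSketch, HTTPTemplateSketch):
    def __init__(self):
        self.report = {}

t = CombinedTest()
t.record_dns("93.184.216.34")   # hypothetical DNS answer
t.record_http(200)              # hypothetical HTTP status
```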
How to include a test:
Drop it in nettest.
workon ooni (Python virtual environment setup)
./bin/ooniprobe --help
12:30p: Router Hardware Discussion
Goal: a free hardware/software platform that is not dependent on vendors.
Goal: measurement, something that runs Tor, can be a bridge.
DreamPlug is not an option.
Three options:
RIPE Atlas Probe. Locked-down hardware; it would be a disaster to rely on this kind of hardware.
Raspberry Pi. Not actually free (can’t build it yourself). Requires loading a proprietary HDMI driver for the device to actually turn on. Benefits: small, available, known piece of hardware. Familiar to hackers, bloggers. Unit cost: $30-35. Another $8 for a USB 802.11 a/b/g/n WiFi adapter. So much proprietary stuff. Very easy to reprogram. A 3D-printed case costs a buck or two. 800 MHz, 1.2 GB RAM. 20-30 Mbps without maxing out the CPU. 10/100 RJ45 interface. (Alternative: Arduino.) Runs Debian; apt-get “just works”.
Tor Router: $150, not including wireless interface or case (assuming 1000 boards). Fully spec’d out here:
UIM slot for plugging in a SIM card. The SIM card is a generic device.
SD card slot
eSATA port for plugging in a hard drive (if desired)
Mini PCI port (e.g., for plugging in a WiFi module). To do: send Jake the Mini PCI Ath9k chips that we want Jake to install.
10/100 Ethernet port
GigE port
USB “on the go” port
2 USB high-voltage ports (charging, etc. is possible)
Audio in/out port (want people to have the ability to replace media servers, etc.). Costs $1, but could significantly increase adoption.
On-board microphone
Form factor: currently about 6” x 6” (prototype board). The final form factor will be about 6” x 3”.
Port for plugging in an LCD ribbon
Port for a battery charger
Three high-speed serial ports (could bolt on GPS)
Accelerometer (e.g., for clearing all of the keys if the box is moved)
Pins for h/w intrusion detection
GPIO pin set (same as Raspberry Pi)
2 GB of DDR3 RAM (can support as much as 4 GB)
Spartan FPGA
4 ARM9 cores, 1-1.3 GHz per core. RNG built in, as well as AES acceleration.
Want to have a reliable base OS (Debian).
Can ship direct from the factory. Could have a “click to buy”. Can I buy it readily configured as a home router?
Funding questions.
How many units in the first order to bring unit costs down?
5:00p: Discussion of Tests
Categories of tests:
Interference/Blocking tests
Manipulation tests: some transformation has been performed on the traffic that you are sending (extra headers, extra latency, etc.).
Circumvention tools
Properties of tests:
They have an input
They detect packet mangling
Requirement for a back-end
Incorporates outside/auxiliary information
Needs client/server
May return false positives
Requires root
List of tests in the initial M-Lab deployment
DNS tampering (should be called “DNS (in)consistency”)
Captive portal
Other implemented tests
Man in the middle (SSL, SSH)
Daphne: We open 'n' connections with the backend, and for each connection 'i' we mutate the 'i'th byte of the conversation. When the conversation is no longer blocked, it means the censor can no longer find the fingerprint in our packets, and that the last mutated byte is part of the DPI fingerprint. (Requires a server-side backend.) Does bidirectional tests, etc.
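The byte-mutation strategy above can be sketched as follows. The `is_blocked` callable is a hypothetical stand-in for sending the conversation through a cooperating backend and observing the censor; the keyword censor here is a toy for illustration:

```python
# Sketch of the Daphne strategy described above: for each byte position i,
# mutate byte i of the conversation and observe whether the censor still
# blocks it. Positions whose mutation lets the conversation through mark
# bytes that are part of the DPI fingerprint.
def mutate_byte(payload: bytes, i: int) -> bytes:
    mutated = bytearray(payload)
    mutated[i] ^= 0xFF  # flip every bit of byte i
    return bytes(mutated)

def find_fingerprint(payload, is_blocked):
    # is_blocked(payload) stands in for one connection to the backend
    # plus the observation of whether the censor interfered.
    unblocked_positions = []
    for i in range(len(payload)):
        if not is_blocked(mutate_byte(payload, i)):
            unblocked_positions.append(i)
    return unblocked_positions

# Toy censor for the example: blocks any payload containing b"tor".
keyword = b"tor"
payload = b"GET /tor HTTP/1.1"
positions = find_fingerprint(payload, lambda p: keyword in p)
# positions now holds the byte offsets of the toy DPI fingerprint
```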
“Want” (other tests worth considering)
Vern Paxson-style breaking of keywords across packets, to see whether that still results in blocking
Do Host fields with just IP addresses get blocked? Do Host fields whose names do not match the IP address corresponding to the connection get blocked?
Transcoding (JPEG, MPEG) tests. If I send an image or video, does the same thing show up on the other side?
Generalized: do particular byte sequences trigger actions
for various types of connections: SCTP, TCP, UDP, etc.
regardless of the state of connections
in other parts of the protocol (e.g., in an HTTP cookie)
Testing whether HTTP connections with various browser versions are blocked (or not)
Timing measurements (basically, latency)
{Packet loss, jitter, latency} detection (e.g., to different destinations)
IPv6 (perhaps all of the tests could be run concurrently with/coupled to IPv4 tests). Hypothesis: censorship is less pervasive on IPv6 than on IPv4, because some middleboxes may not support IPv6, or the IPv6 paths may be entirely different and not even go through the same middleboxes.
performance, etc.
Some discussion of the EFF Switzerland tool, a differential packet-trace analysis tool for detecting on-path packet mangling.
Where are the sources of information that could help us develop tests? (e.g., the Citizen Lab report on Blue Coat)
Want multiple independent implementations of the same tests.
Parameters for Prioritizing Tests
Goal: a wide breadth of tests that are “good enough” to gather some data.
Do we have other tests like it?
Effort required
to make a specification for the test (really important, particularly if there are multiple implementations of the same test: the specification *is* the test, and the test *is* the specification; the specification is versioned according to, e.g., the date). Constraint: make no claims based on data gathered by code that was written without a specification.
to get it gathering data (really important); the goal is to iterate quickly and gather a lot of data.
Is M-Lab willing to store the data?
to get it stable
Impact (how much censorship it will detect, based on data collected by other tools, anecdotal evidence, etc.)
How many places is it likely to be run? How expensive is it to run this test: how much are we asking of the user in terms of bandwidth, how aggressive it is, whether it requires root, whether the tool requires downloading other stuff, etc.
Likelihood of a conclusive result being censorship vs. some other kind of innocuous behavior (likelihood of a false positive)
Necessary behavior for the test to work correctly vs. “this was simply implemented that way”
How to prioritize tests?
Multiprotocol parasitic traceroute
Timing analysis + differential packet loss, jitter, latency
What does it mean to create a test?
Part of a well-specified production test is (1) a specification; (2) an implementation.
What should go in a specification for a test? (Jake has written all of this up on the OONI project wiki in a more complete manner.)
version number
inputs (format/syntax and version number)
outputs (format/syntax, semantics, and version number)
where the data should be stored (i.e., what’s public, M-Lab, private, etc.)
Also include the “parameters for prioritizing tests” (above; e.g., impact, who should run the tool, likelihood of false positives, etc.) so that it’s clear *why* this is a useful test to implement.
Packet capture considerations
OONI spec documentation will go here:
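A minimal sketch of what a test-specification record with the fields listed above might look like. Every field name and value here is illustrative, assembled from the notes; this is not an official OONI schema:

```python
# Hypothetical test-specification skeleton built from the fields listed
# above (version number, inputs, outputs, data storage). Illustrative
# only; not an official OONI schema.
spec = {
    "name": "dns_consistency",   # illustrative test name
    "version": "2013-01-19",     # spec versioned by, e.g., date
    "inputs": {
        "format": "one hostname per line",
        "version": 1,
    },
    "outputs": {
        "format": "YAML report",
        "semantics": "queried vs. expected DNS answers",
        "version": 1,
    },
    "data_storage": {            # what's public, M-Lab, private, etc.
        "public": True,
        "mlab": True,
        "private": False,
    },
}
```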
Cross-platform implementation
Dominic’s proposal: do some of the low-level support work in C, but then have bindings in Java, Lua, Python, etc.
Reduces memory constraints on OONI.
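As one illustration of the binding approach, Python's ctypes can call directly into a C library. Here libm's sqrt stands in for a hypothetical low-level measurement core written in C; no such OONI C core exists in these notes:

```python
# Sketch of the proposed split: low-level support in C, thin bindings in
# higher-level languages. Python's ctypes binds a C function (libm's
# sqrt) as a stand-in for a hypothetical C measurement core.
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double      # C return type
libm.sqrt.argtypes = [ctypes.c_double]   # C argument types

value = libm.sqrt(9.0)  # call into C from Python
```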
=== Tests to run on M-Lab ===