We've made a pure Go WebRTC port over at pions/webrtc. This may provide a viable alternative to libwebrtc.
Disadvantages
We've not done much work on security yet. However, we definitely intend to work on this since hardening the security will be a requirement for many other use-cases.
It's not entirely feature complete, but this seems less important for your use-case.
We don't support TURN yet. We currently plan to build this for our next release.
Advantages
It is fully go-gettable (see the one-line example after this list), fixing the horrible build process. In addition, it should run everywhere Go runs.
We've tested our data channel implementation against multiple targets, including Chrome, Firefox, and NodeJS, and we aim to automate these tests in the future.
It will give you more freedom to make changes to the WebRTC stack and even allows experimentation, e.g. to reduce fingerprinting.
We're working on exposing an idiomatic API based on io.ReadWriteCloser.
We have an active community and development team. We're more than happy to fix any problems that may arise. We're also open to prioritizing any features you consider blocking. Lastly, we have great PR response times.
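For example, fetching and building the library (using the repository path mentioned above) should be a single command:
{{{
# Fetches and builds the library and all of its Go dependencies;
# no C toolchain or libwebrtc build step is involved.
go get github.com/pions/webrtc
}}}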
I'm also interested in collaborating on NAT testing as mentioned in #25595 (moved).
Trac: Username: backkem
Taking a look at this now. The API is very similar to what we're using and the code has been actively maintained this whole time.
Although the API isn't a direct match, it looks like relatively little work to get it hooked up to what we have. I'm going to work on getting it building first, and then take a look at the code.
If this works, it will be a lot easier to build and maintain.
I got proxy-go building with pion/webrtc. The changes necessary were fairly small and can be seen here. Datachannel creation and teardown appear to be working as expected and some data is flowing through.
However, I'm having trouble bootstrapping the Tor client past 50%. Here's a log of the pion port:
{{{
Jun 14 22:44:25.188 [notice] Tor 0.2.9.16 (git-9ef571339967c1e5) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.1.0j and Zlib 1.2.8.
Jun 14 22:44:25.188 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Jun 14 22:44:25.188 [notice] Read configuration file "/go/bin/torrc-5".
Jun 14 22:44:25.190 [warn] Path for DataDirectory (datadir5) is relative and will resolve to /go/bin/datadir5. Is this what you wanted?
Jun 14 22:44:25.191 [notice] Opening Socks listener on 127.0.0.1:9055
Jun 14 22:44:25.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Jun 14 22:44:25.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Jun 14 22:44:25.000 [notice] Bootstrapped 0%: Starting
Jun 14 22:44:25.000 [notice] Delaying directory fetches: No running bridges
Jun 14 22:44:27.000 [notice] Bootstrapped 5%: Connecting to directory server
Jun 14 22:44:27.000 [notice] Bootstrapped 10%: Finishing handshake with directory server
Jun 14 22:44:30.000 [notice] Learned fingerprint 2B280B23E1107BB62ABFC40DDCC8824814F80A72 for bridge 0.0.3.0:1 (with transport 'snowflake').
Jun 14 22:44:30.000 [notice] Bootstrapped 15%: Establishing an encrypted directory connection
Jun 14 22:44:30.000 [notice] Bootstrapped 20%: Asking for networkstatus consensus
Jun 14 22:44:30.000 [notice] new bridge descriptor 'flakey' (fresh): $2B280B23E1107BB62ABFC40DDCC8824814F80A72~flakey at 0.0.3.0
Jun 14 22:44:30.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jun 14 22:44:32.000 [notice] Bootstrapped 25%: Loading networkstatus consensus
Jun 14 22:44:34.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jun 14 22:44:34.000 [notice] Bootstrapped 40%: Loading authority key certs
Jun 14 22:44:34.000 [notice] Bootstrapped 45%: Asking for relay descriptors
Jun 14 22:44:34.000 [notice] I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 0/6533, and can only build 0% of likely paths. (We have 0% of guards bw, 0% of midpoint bw, and 0% of exit bw = 0% of path bw.)
Jun 14 22:44:34.000 [notice] Bootstrapped 50%: Loading relay descriptors
Jun 14 22:45:07.000 [notice] Delaying directory fetches: No running bridges
}}}
compared to a log of using go-webrtc:
{{{
Jun 14 22:48:51.025 [notice] Tor 0.2.9.16 (git-9ef571339967c1e5) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.1.0j and Zlib 1.2.8.
Jun 14 22:48:51.025 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Jun 14 22:48:51.025 [notice] Read configuration file "/go/bin/torrc-6".
Jun 14 22:48:51.026 [warn] Path for DataDirectory (datadir6) is relative and will resolve to /go/bin/datadir6. Is this what you wanted?
Jun 14 22:48:51.027 [notice] Opening Socks listener on 127.0.0.1:9056
Jun 14 22:48:51.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Jun 14 22:48:51.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Jun 14 22:48:51.000 [notice] Bootstrapped 0%: Starting
Jun 14 22:48:51.000 [notice] Delaying directory fetches: No running bridges
Jun 14 22:48:53.000 [notice] Bootstrapped 5%: Connecting to directory server
Jun 14 22:48:53.000 [notice] Bootstrapped 10%: Finishing handshake with directory server
Jun 14 22:48:53.000 [notice] Learned fingerprint 2B280B23E1107BB62ABFC40DDCC8824814F80A72 for bridge 0.0.3.0:1 (with transport 'snowflake').
Jun 14 22:48:53.000 [notice] Bootstrapped 15%: Establishing an encrypted directory connection
Jun 14 22:48:53.000 [notice] Bootstrapped 20%: Asking for networkstatus consensus
Jun 14 22:48:53.000 [notice] new bridge descriptor 'flakey' (fresh): $2B280B23E1107BB62ABFC40DDCC8824814F80A72~flakey at 0.0.3.0
Jun 14 22:48:53.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jun 14 22:48:54.000 [notice] Bootstrapped 25%: Loading networkstatus consensus
Jun 14 22:48:58.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jun 14 22:48:58.000 [notice] Bootstrapped 40%: Loading authority key certs
Jun 14 22:48:58.000 [notice] Bootstrapped 45%: Asking for relay descriptors
Jun 14 22:48:58.000 [notice] I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 0/6533, and can only build 0% of likely paths. (We have 0% of guards bw, 0% of midpoint bw, and 0% of exit bw = 0% of path bw.)
Jun 14 22:48:58.000 [notice] Bootstrapped 50%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 57%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 65%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 71%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 78%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Jun 14 22:49:02.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Jun 14 22:49:02.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Jun 14 22:49:02.000 [notice] Bootstrapped 100%: Done
}}}
I've reproduced this several times with no luck getting past 50%. Going to take a look at whether the messages are getting through as expected.
Looks like something's up with reading from the data channel. I put a version of BytesSyncLogger at the proxy-go instance and at the client but changed it to count totals and not reset the inbound and outbound counts. I'm finding that the proxy-go instance (which runs pion webrtc) is not receiving all of the messages that the client (running go-webrtc) sends:
At the proxy:
{{{
2019/06/18 13:58:34 Traffic Bytes (in|out): 40279 | 650650 -- (34 OnMessages, 97 Sends)
2019/06/18 13:58:34 OnClose channel
ortc ERROR: 2019/06/18 13:58:34 Failed to read from data channel sending reset packet in non-established state: state=Closed
}}}
I'm going to dig into the pion/webrtc code to see if anything obvious comes up. As far as I can tell, the data channels being created are reliable just as before, so maybe there's something weird in the signaling code they use to notify the data channel that data has been read.
The client has 9 unreceived sends as of 13:58:24 and the proxy-go instance errors out 10 full seconds later.
Also note that the error message about the closed channel state occurs on the proxy-go side before the client closes its end of the datachannel. From 13:58:34 on, the client believes the channel is still open and is trying to read data.
I'm actually using pion/webrtc at the proxy-go instance (which doesn't speak to little-t-tor directly and doesn't have a datadir). For this I'm again using the -log option to set my own logs. As you can see above, I'm using both to compare :)
If you're interested in contributing to snowflake, you can check out the READMEs in each of the directories for more information, but you might have to look at the source code in some cases to see what the behaviour is.
Upon further investigation, it looks like the Snowflake bridge is actually closing the connection to the proxy at that 13:58:34 timestamp. That leads me to believe that the webrtc library is somehow misordering or mangling data, causing the bridge to error out the connection.
I checked the snowflake server logs and see these messages:
{{{
2019/06/18 13:58:34 error copying ORPort to WebSocket
2019/06/18 13:58:34 error copying WebSocket to ORPort
}}}
A byte-by-byte comparison of data sent by the client and data received from OnMessage by the proxy shows that they differ :/
Interestingly, data sent by the proxy (pion) and received by the client (go-webrtc) appears fine. So maybe pion/webrtc isn't ordering data correctly upon receipt?
A closer inspection shows that a big chunk of data (~298KB) went missing on the proxy side, but other than that the bytes seem to be in order. This shouldn't happen if the channel is reliable. I've confirmed the channel type is set to reliable and ordered, so this could be a bug.
Great work on this, cohosh. Missing data does seem to point to a bug in the pions implementation. backkem should be Cc'ed on this ticket so they'll get the report.
My first guess is that the bug lies somewhere inside pion/sctp. As I understand it, a reliable data channel is SCTP inside DTLS. A working SCTP implementation should make it impossible to drop a chunk of data.
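For context, the data channel stack looks roughly like this (this is the standard WebRTC layering; mapping each layer to a pion package is my reading, not something stated in this thread):
{{{
application data (Snowflake traffic)
  └─ SCTP    – reliability, ordering, message framing  (pion/sctp)
      └─ DTLS – encryption                              (pion/dtls)
          └─ ICE/UDP – connectivity and NAT traversal
}}}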
Found the issue. The reassembly queue is returning an io.ErrShortBuffer error. It seems the dataChannelBufferSize constant is too small for the data that the client is sending: datachannel.go#L16
The reassembly queue works by concatenating all of the fragments for an SCTP Stream Sequence Number (SSN) and trying to read them into the provided buffer all at once. If the buffer is too short, an io.ErrShortBuffer is returned from reassemblyQueue.read here, but the function calling it (Stream.ReadSCTP) doesn't return an error, and the data for that sequence number is simply lost here. Note in particular that the error returned from the reassemblyQueue read is being overwritten.
There are a few bugs here (see the toy sketch after this list):
If the buffer is too small, we should split up the reads into multiple subsequent reads.
The error returned by the reassembly queue read should be checked.
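To make the intended behavior concrete, here is a toy, self-contained sketch (not pion's actual code) of a reassembly read that reports io.ErrShortBuffer without discarding the queued message, so the caller can retry with a larger buffer instead of losing data:
{{{
package main

import (
	"errors"
	"fmt"
	"io"
)

// queue is a toy stand-in for pion's reassemblyQueue: it holds one
// reassembled message per stream sequence number.
type queue struct{ msgs [][]byte }

// read copies the next message into p. Unlike the buggy path described
// above, a too-small buffer returns io.ErrShortBuffer WITHOUT dequeuing
// the message, so nothing is silently lost.
func (q *queue) read(p []byte) (int, error) {
	if len(q.msgs) == 0 {
		return 0, errors.New("no data")
	}
	msg := q.msgs[0]
	if len(p) < len(msg) {
		return 0, io.ErrShortBuffer // message stays queued for a retry
	}
	q.msgs = q.msgs[1:]
	return copy(p, msg), nil
}

func main() {
	q := &queue{msgs: [][]byte{make([]byte, 32*1024)}}
	buf := make([]byte, 16*1024) // too small, like dataChannelBufferSize
	if _, err := q.read(buf); errors.Is(err, io.ErrShortBuffer) {
		buf = make([]byte, 32*1024) // grow and retry instead of dropping
		n, err := q.read(buf)
		fmt.Println(n, err) // 32768 <nil>
	}
}
}}}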
I can write up a patch for this, now that I have a good handle on what's going on.
Great! One concern I have is whether there are specific features in the pion implementation that differ from the native implementation, which would make it easy to block.
Hi! I am Sean DuBois (one of the Pion WebRTC devs). We should have zero differences, and if any do pop up I will try my best to fix them :)
We run our test suite against both Pion and the browser implementations. We use the same Go code in both cases (but compile it to WASM for the browser).
Thanks again for the fixes cohosh! Go is amazing, it makes things so easy to fix.
> Great! One concern I have is whether there are specific features in the pion implementation that differ from the native implementation, which would make it easy to block.
I am not too worried about that at this point, because what little research we did using libwebrtc ([[doc/Snowflake/Fingerprinting]]) showed that even with the native library, Snowflake did not match other applications. I don't think swapping one library for another costs much at this point, in that respect.
I would be surprised if this is the case--unless pion has paid extraordinary attention to matching externally visible protocol implementation details, which goes farther than interoperability. What about the order of ciphersuites in the DTLS handshake, or the metadata inside STUN messages? One of the things we found is that there's no single "WebRTC" fingerprint, nor even a single "Chrome WebRTC" fingerprint--it depends on the specific application. That said, I'm glad that you are involved, and I am hopeful that pion will be easier to adapt if and when needed.
And Snowflake clients are now bootstrapping to 100% :)
A next step is perhaps to run and monitor a pion-based proxy-go alongside the existing libwebrtc-based ones? We can check for crashes and see if there are anomalies with regard to number of clients handled, for example.
Sounds good. Do you want to do a code review first?
I've also started the process of switching over to pion/webrtc in the client.
> Great! One concern I have is whether there are specific features in the pion implementation that differ from the native implementation, which would make it easy to block.
> I am not too worried about that at this point, because what little research we did using libwebrtc ([[doc/Snowflake/Fingerprinting]]) showed that even with the native library, Snowflake did not match other applications. I don't think swapping one library for another costs much at this point, in that respect.
There's also something to be said for the trade-off in complexity/adaptability, which I believe motivated the move from a headless Firefox helper to uTLS in meek: https://trac.torproject.org/projects/tor/ticket/29077
If pion/webrtc makes it so we can build snowflake for Windows and Android more easily and if they are amenable to us proposing changes based on observed blocking, then this seems like a good path forward to me.
> Great! One concern I have is whether there are specific features in the pion implementation that differ from the native implementation, which would make it easy to block.
> I am not too worried about that at this point, because what little research we did using libwebrtc ([[doc/Snowflake/Fingerprinting]]) showed that even with the native library, Snowflake did not match other applications. I don't think swapping one library for another costs much at this point, in that respect.
> I would be surprised if this is the case--unless pion has paid extraordinary attention to matching externally visible protocol implementation details, which goes farther than interoperability. What about the order of ciphersuites in the DTLS handshake, or the metadata inside STUN messages? One of the things we found is that there's no single "WebRTC" fingerprint, nor even a single "Chrome WebRTC" fingerprint--it depends on the specific application. That said, I'm glad that you are involved, and I am hopeful that pion will be easier to adapt if and when needed.
Oh yes, you are 100% right about that. I haven't paid any attention to that; I have only been concerned with interoperability.
This hasn't been a concern for me, but I would love to make this work for you. Maybe we can write some sort of test suite that does ICE/DTLS/SCTP and compares libwebrtc/Pion nightly and brings down the drift.
Would it also be helpful to 'randomize' Pion? We could add features that help make Snowflake more resistant to fingerprinting. I am not up to date on concepts/needs around censorship circumvention.
> If pion/webrtc makes it so we can build snowflake for Windows and Android more easily and if they are amenable to us proposing changes based on observed blocking, then this seems like a good path forward to me.
I would love to take any/all changes you propose! We have a few active contributors, and everyone just needs one sign-off to merge master.
I am really excited to get your involvement/oversight. The quality of Tor's work is so high that I think it will end up making Pion a lot better :) Pion doesn't make any money and isn't a corporate project, so I only do this after work hours, but I try to move it quickly.
Hi guys, sorry for the delay. Apparently, I didn't have an email address configured.
Really cool to see this moving forward. We've had users run pion/webrtc on all sorts of platforms, including Windows and Android. Since the entire stack is in pure Go with little to no dependencies, it should be highly portable. We even act as a wrapper around the JavaScript WebRTC implementation when compiling to the JS/WASM build target.
Some comments:
It may be worth considering the use of go modules to ensure you get all the correct dependencies when building pion/webrtc.
As mentioned in the original ticket and the SCTP PR, you can 'detach' a data channel to get access to the underlying idiomatic API based on the io.ReadWriteCloser.
Thanks! I replied there, but I'm copying the answer here to keep everyone in the loop. While this does solve the problem of how to handle an io.ErrShortBuffer, it doesn't fix the bug in SCTP. I think we'd still prefer to have access to the OnMessage callback anyway instead of implementing our own readLoop, but depending on how you decide to handle the io.ErrShortBuffer return in your readLoop, we may need to go that route.
Most of the changes were small, but there were some tricky differences to work around (see the sketch after this list):
OnNegotiationNeeded is no longer supported, but putting the code from that callback into a goroutine and calling it immediately after creation of the PeerConnection seems to work just fine.
By default, something called the "trickle method" is set to false, which means the OnICEGatheringStateChange callback never gets called (see icegatherer.go#L262 vs icegatherer.go#L143). We get around this by using a SettingEngine here, but it was a bit confusing. The default API should probably have isTrickle set to true by default.
OnICECandidate returns a nil candidate when gathering is complete. This isn't really a problem, but I got a silent failure when I didn't check for it. It would be nice if this behaviour were documented in the GoDocs.
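A minimal, self-contained sketch of all three workarounds together, using pion/webrtc v2-era names (the exact signatures are my recollection and may differ between releases):
{{{
package main

import (
	"log"

	"github.com/pion/webrtc/v2"
)

func main() {
	// Enable trickle ICE via a SettingEngine so that OnICECandidate /
	// OnICEGatheringStateChange actually fire (second bullet above).
	s := webrtc.SettingEngine{}
	s.SetTrickle(true)
	api := webrtc.NewAPI(webrtc.WithSettingEngine(s))

	pc, err := api.NewPeerConnection(webrtc.Configuration{})
	if err != nil {
		log.Fatal(err)
	}

	pc.OnICECandidate(func(c *webrtc.ICECandidate) {
		if c == nil {
			// nil signals that gathering is complete; forgetting this
			// check caused the silent failure mentioned above.
			log.Println("ICE gathering complete")
			return
		}
		log.Printf("candidate: %+v", c)
	})

	// No OnNegotiationNeeded: run the negotiation logic once, in a
	// goroutine, right after the PeerConnection is created.
	go func() {
		if _, err := pc.CreateDataChannel("snowflake", nil); err != nil {
			log.Fatal(err)
		}
		offer, err := pc.CreateOffer(nil)
		if err != nil {
			log.Fatal(err)
		}
		if err := pc.SetLocalDescription(offer); err != nil {
			log.Fatal(err)
		}
		// ...hand the offer to the signaling code here...
	}()

	select {} // keep this toy example alive
}
}}}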
This is great, thanks. I looked over the changes and didn't spot any problems. Here there's still a reference to OnNegotiationNeeded.
I just modified my PRs to pion/webrtc after listening to their feedback. They linked an interesting article (https://lgrahl.de/articles/demystifying-webrtc-dc-size-limit.html) about message size limits in different implementations of WebRTC (which was the root of our problem). Looks like no implementation handles this particularly well.
I wanted to quickly reiterate that this is one of the reasons for the existence of our (Pion WebRTC) detach API. It provides you with an io.ReadWriter. With this pattern the upper-layer protocol (your protocol) supplies the buffer for reading/writing. This means that, since you likely know the appropriate buffer size for your use-case, you can allocate it accordingly. There is actually a long-running issue to expose this in the WebRTC API. More modern web APIs are modeled in a similar way, e.g. incoming-stream and outgoing-stream. To ensure compatibility with other WebRTC implementations, it is of course still recommended to respect the buffer-size limits mentioned in Lennart's blog post.
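A minimal sketch of the detach pattern (again pion v2-era names; treat the exact API shape as an assumption):
{{{
package main

import (
	"log"

	"github.com/pion/webrtc/v2"
)

func main() {
	// DetachDataChannels must be enabled before Detach() may be called.
	s := webrtc.SettingEngine{}
	s.DetachDataChannels()
	api := webrtc.NewAPI(webrtc.WithSettingEngine(s))

	pc, err := api.NewPeerConnection(webrtc.Configuration{})
	if err != nil {
		log.Fatal(err)
	}
	dc, err := pc.CreateDataChannel("snowflake", nil)
	if err != nil {
		log.Fatal(err)
	}
	dc.OnOpen(func() {
		// Detach returns an io.ReadWriteCloser-style object; from here
		// on, the application owns the read loop and the buffer size.
		raw, err := dc.Detach()
		if err != nil {
			log.Fatal(err)
		}
		buf := make([]byte, 64*1024) // sized by the application, not the library
		for {
			n, err := raw.Read(buf)
			if err != nil {
				return
			}
			log.Printf("read %d bytes", n)
		}
	})
	select {}
}
}}}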
Thanks, the size limit change in my pull request actually affects reads only. I still think you want to make that change, since your write limit (64KiB) is strictly larger than your read limit (16KiB), meaning that peers that both use your implementation will frequently drop data from messages that are too large using the normal Send/OnMessage API. Respecting the limits in the linked article would mean either increasing your read limit to match the write limit of 64KiB or decreasing your write limit to match the read limit of 16KiB (https://github.com/pion/webrtc/issues/718). Either would help the problem on this end.
Also a note that I modified my pull request from the original to not increase the buffer on an io.ErrShortBuffer, as explained here
I'm going to deploy a proxy-go instance built with pion/webrtc and try running a client with pion/webrtc and see how it goes.
I've started a new proxy-go instance on snowflake.bamsoftware.com named pion-proxy which runs a pion library version of the proxy-go instance located in /usr/local/bin/pion-proxy-go. Logs are available in /home/snowflake-proxy/pion-proxy.d
Just to give an update on this, building Tor Browser with this pion library is a bit painful right now. Our reproducible build system (rbm) doesn't work nicely with modules and, after a conversation with boklm, it's preferable to create a separate project for each go lib dependency. This means a total of 13 pion libraries plus an additional 14+ dependencies that these libraries have. There might be more; I stopped going down the rabbit hole after a while. I don't think creating 30-ish projects just to build this is a viable or sustainable option.
There's an open ticket for integrating go modules into rbm (#28325 (moved)); it would be nice to know how quick or difficult that task would be. Otherwise we could maybe hack together something with custom build commands. It will be a pain to keep track of versions/commits for all of the dependencies.
I'm going to try brute-force packaging all the dependency projects. The go mod graph command outputs a tree of dependencies. I'm going to use that to try and automate the creation of most of the dependency projects, probably followed by some manual editing.
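For reference, go mod graph prints one dependency edge per line as a "parent child" pair (the output below is illustrative, not snowflake's actual graph):
{{{
$ go mod graph
example.com/app github.com/pion/webrtc/v2@v2.0.0
github.com/pion/webrtc/v2@v2.0.0 github.com/pion/dtls@v1.0.0
github.com/pion/webrtc/v2@v2.0.0 github.com/pion/sctp@v1.0.0
}}}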
One idea that I didn't try yet but that could maybe help with this would be to create a generic go-module project, in order to be able to list all go dependencies in input_files without having to create a separate project for each. For example a project requiring goxnet and goxsys would include this in its input_files:
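(Illustrative sketch only; the exact rbm field names here are my guess, not a tested config.)
{{{
input_files:
  - project: go-module
    name: goxnet
    var:
      go_module_name: goxnet
  - project: go-module
    name: goxsys
    var:
      go_module_name: goxsys
}}}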
In order to avoid cloning all modules in the same git_clones directory, projects/go-module/config would need to define git_clone_dir to something like:
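(Again a guess at the exact syntax, using rbm's template notation.)
{{{
# projects/go-module/config (hypothetical)
git_clone_dir: git_clones/go-module-[% c("var/go_module_name") %]
}}}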
The script doesn't do everything by itself. Its raw output, basically a transcription of go mod graph, is here. I added fixup commits here (adding missing dependencies, enumerating multiple modules within one repo, removing duplicates) and here (expanding version numbers into hashes).
I discovered by accident that one of the dependencies, golang.org/x/sync, was not actually necessary, so I removed it. It's possible that there are more that can be removed. I'm not sure if this is a maintainer neglecting to run go mod tidy to remove an unused dependency, or what.
Overall, my impression so far is that this is not the way we want to continue doing things. The problem I foresee is maintenance across upgrades: the pion-webrtc module upgrades one of its dependencies, which causes a cascade of updated version requirements down the dependency tree. boklm's suggestion from comment:42 would prevent a proliferation of rbm projects, but we'll still want something to semi-automatically handle upgrades for us. We could of course use go itself--but that's a discussion for #28325 (moved).
Everything builds, and from the command line I can run snowflake-client -h and see that it produces output, but unfortunately it doesn't bootstrap for me. But then again, neither does 3cc240625c from cohosh's pion branch from comment:28. So whatever is going wrong for me, is possibly not related to the rbm build.
This is what I see in the snowflake-client log. After this, there's no more output for at least several minutes (that's as long as I waited).
{{{
2019/08/29 01:43:47 Rendezvous using Broker at: https://snowflake-broker.bamsoftware.com/
2019/08/29 01:43:47 WebRTC: Collecting a new Snowflake. Currently at [0/3]
2019/08/29 01:43:47 snowflake-UQ9COqlX3fZ5JMmA connecting...
2019/08/29 01:43:47 Started SOCKS listener.
2019/08/29 01:43:47 SOCKS listening...
2019/08/29 01:43:47 WebRTC: PeerConnection created.
2019/08/29 01:43:47 WebRTC: DataChannel created.
2019/08/29 01:43:47 WebRTC: Created offer
2019/08/29 01:43:47 WebRTC: Set local description
2019/08/29 01:43:48 SOCKS accepted: {[scrubbed] map[]}
}}}
Noting that I can reproduce this issue seemingly 100% of the time, I'll investigate whether it's due to recent changes in any of the pion libraries, since bootstrapping used to work.
On running it, I got a popup from Windows Firewall asking if I wanted to allow snowflake-client.exe to do something (bind a UDP socket, if I had to guess). It runs, but does not bootstrap, producing a log similar to the one in comment:43.