Opened 7 months ago

Last modified 31 hours ago

#28942 accepted enhancement

Evaluate pion WebRTC

Reported by: backkem Owned by: cohosh
Priority: Medium Milestone:
Component: Circumvention/Snowflake Version:
Severity: Normal Keywords: anti-censorship-roadmap-august
Cc: ahf, dcf, arlolra, cohosh, backkem Actual Points:
Parent ID: Points: 5
Reviewer: Sponsor: Sponsor28-must

Description

We've made a pure Go WebRTC port over at pions/webrtc. This may provide a viable alternative to libwebrtc.

Disadvantages

  • We've not done much work on security yet. However, we definitely intend to work on this since hardening the security will be a requirement for many other use-cases.
  • Not entirely feature-complete, but this seems less important for your use-case.
  • We don't support TURN yet. We currently plan to build this for our next release.

Advantages

  • It is fully go-gettable, fixing the horrible build process. In addition, it should run everywhere Go runs.
  • We've tested our data channel implementation against multiple targets, including Chrome, Firefox, and NodeJS, and we aim to automate these tests in the future.
  • It will give you more freedom to make changes to the WebRTC stack and even allow experimentation, e.g. to reduce fingerprinting.
  • It may solve/invalidate some of your other problems, including #19026, #19315, #19569, #22718 and #25483.
  • We're working on exposing an idiomatic API based on the io.ReadWriteCloser.
  • We have an active community and development team. We're more than happy to fix any problems that may arise. We're also open to prioritizing any features you consider blocking. Lastly, we have great PR response times.

I'm also interested in collaborating on NAT testing as mentioned in #25595.

Child Tickets

Change History (37)

comment:1 Changed 6 months ago by gaba

Sponsor: Sponsor19

comment:2 Changed 6 weeks ago by gaba

Keywords: ex-sponsor-19 added

Adding the keyword to mark everything that didn't fit into the time for sponsor 19.

comment:3 Changed 6 weeks ago by phw

Sponsor: Sponsor19 → Sponsor28-can

Moving from Sponsor 19 to Sponsor 28.

comment:4 Changed 5 weeks ago by cohosh

Owner: set to cohosh
Status: new → assigned

Taking a look at this now. The API is very similar to what we're using and the code has been actively maintained this whole time.

Although the API isn't a direct match, it looks like relatively little work to get it hooked up to what we have. I'm going to work on getting it building first, and then take a look at the code.

If this works, it will be a lot easier to build and maintain.

comment:5 Changed 5 weeks ago by cohosh

Cc: cohosh added

comment:6 Changed 5 weeks ago by cohosh

I got proxy-go building with pion/webrtc. The changes necessary were fairly small and can be seen here. Datachannel creation and teardown appear to be working as expected and some data is flowing through.

However, I'm having trouble bootstrapping the Tor client past 50%. Here's a log of the pion port:

Jun 14 22:44:25.188 [notice] Tor 0.2.9.16 (git-9ef571339967c1e5) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.1.0j and Zlib 1.2.8.
Jun 14 22:44:25.188 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Jun 14 22:44:25.188 [notice] Read configuration file "/go/bin/torrc-5".
Jun 14 22:44:25.190 [warn] Path for DataDirectory (datadir5) is relative and will resolve to /go/bin/datadir5. Is this what you wanted?
Jun 14 22:44:25.191 [notice] Opening Socks listener on 127.0.0.1:9055
Jun 14 22:44:25.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Jun 14 22:44:25.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Jun 14 22:44:25.000 [notice] Bootstrapped 0%: Starting
Jun 14 22:44:25.000 [notice] Delaying directory fetches: No running bridges
Jun 14 22:44:27.000 [notice] Bootstrapped 5%: Connecting to directory server
Jun 14 22:44:27.000 [notice] Bootstrapped 10%: Finishing handshake with directory server
Jun 14 22:44:30.000 [notice] Learned fingerprint 2B280B23E1107BB62ABFC40DDCC8824814F80A72 for bridge 0.0.3.0:1 (with transport 'snowflake').
Jun 14 22:44:30.000 [notice] Bootstrapped 15%: Establishing an encrypted directory connection
Jun 14 22:44:30.000 [notice] Bootstrapped 20%: Asking for networkstatus consensus
Jun 14 22:44:30.000 [notice] new bridge descriptor 'flakey' (fresh): $2B280B23E1107BB62ABFC40DDCC8824814F80A72~flakey at 0.0.3.0
Jun 14 22:44:30.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jun 14 22:44:32.000 [notice] Bootstrapped 25%: Loading networkstatus consensus
Jun 14 22:44:34.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jun 14 22:44:34.000 [notice] Bootstrapped 40%: Loading authority key certs
Jun 14 22:44:34.000 [notice] Bootstrapped 45%: Asking for relay descriptors
Jun 14 22:44:34.000 [notice] I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 0/6533, and can only build 0% of likely paths. (We have 0% of guards bw, 0% of midpoint bw, and 0% of exit bw = 0% of path bw.)
Jun 14 22:44:34.000 [notice] Bootstrapped 50%: Loading relay descriptors
Jun 14 22:45:07.000 [notice] Delaying directory fetches: No running bridges

compared to a log of using go-webrtc:

Jun 14 22:48:51.025 [notice] Tor 0.2.9.16 (git-9ef571339967c1e5) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.1.0j and Zlib 1.2.8.
Jun 14 22:48:51.025 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Jun 14 22:48:51.025 [notice] Read configuration file "/go/bin/torrc-6".
Jun 14 22:48:51.026 [warn] Path for DataDirectory (datadir6) is relative and will resolve to /go/bin/datadir6. Is this what you wanted?
Jun 14 22:48:51.027 [notice] Opening Socks listener on 127.0.0.1:9056
Jun 14 22:48:51.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Jun 14 22:48:51.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Jun 14 22:48:51.000 [notice] Bootstrapped 0%: Starting
Jun 14 22:48:51.000 [notice] Delaying directory fetches: No running bridges
Jun 14 22:48:53.000 [notice] Bootstrapped 5%: Connecting to directory server
Jun 14 22:48:53.000 [notice] Bootstrapped 10%: Finishing handshake with directory server
Jun 14 22:48:53.000 [notice] Learned fingerprint 2B280B23E1107BB62ABFC40DDCC8824814F80A72 for bridge 0.0.3.0:1 (with transport 'snowflake').
Jun 14 22:48:53.000 [notice] Bootstrapped 15%: Establishing an encrypted directory connection
Jun 14 22:48:53.000 [notice] Bootstrapped 20%: Asking for networkstatus consensus
Jun 14 22:48:53.000 [notice] new bridge descriptor 'flakey' (fresh): $2B280B23E1107BB62ABFC40DDCC8824814F80A72~flakey at 0.0.3.0
Jun 14 22:48:53.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jun 14 22:48:54.000 [notice] Bootstrapped 25%: Loading networkstatus consensus
Jun 14 22:48:58.000 [notice] I learned some more directory information, but not enough to build a circuit: We have no usable consensus.
Jun 14 22:48:58.000 [notice] Bootstrapped 40%: Loading authority key certs
Jun 14 22:48:58.000 [notice] Bootstrapped 45%: Asking for relay descriptors
Jun 14 22:48:58.000 [notice] I learned some more directory information, but not enough to build a circuit: We need more microdescriptors: we have 0/6533, and can only build 0% of likely paths. (We have 0% of guards bw, 0% of midpoint bw, and 0% of exit bw = 0% of path bw.)
Jun 14 22:48:58.000 [notice] Bootstrapped 50%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 57%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 65%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 71%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 78%: Loading relay descriptors
Jun 14 22:49:01.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Jun 14 22:49:02.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Jun 14 22:49:02.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Jun 14 22:49:02.000 [notice] Bootstrapped 100%: Done

I've reproduced this several times with no luck getting past 50%. Going to take a look at whether the messages are getting through as expected.

comment:7 Changed 5 weeks ago by cmm323

Have you looked at the snowflake logs? I forgot where they are, but snowflake produces its own logs, probably in the data dir.

Last edited 5 weeks ago by cmm323

comment:8 Changed 4 weeks ago by cohosh

Looks like something's up with reading from the data channel. I put a version of BytesSyncLogger at the proxy-go instance and at the client but changed it to count totals and not reset the inbound and outbound counts. I'm finding that the proxy-go instance (which runs pion webrtc) is not receiving all of the messages that the client (running go-webrtc) sends:

At the proxy:

2019/06/18 13:58:34 Traffic Bytes (in|out): 40279 | 650650 -- (34 OnMessages, 97 Sends)
2019/06/18 13:58:34 OnClose channel
ortc ERROR: 2019/06/18 13:58:34 Failed to read from data channel sending reset packet in non-established state: state=Closed

At the client:

2019/06/18 13:58:24 Traffic Bytes (in|out): 650650 | 335191 -- (97 OnMessages, 43 Sends)
2019/06/18 13:58:53 Traffic Bytes (in|out): 650650 | 339361 -- (97 OnMessages, 44 Sends)
2019/06/18 13:58:53 WebRTC: No messages received for 30 seconds -- closing stale connection.
2019/06/18 13:58:53 WebRTC: closing DataChannel

I'm going to dig into the pion/webrtc code to see if anything obvious comes up. As far as I can tell the datachannels being created are reliable just as before so maybe there's something weird with the signaling code they use to notify the datachannel that data has been read.

The client has 9 unreceived sends as of 13:58:24 and the proxy-go instance errors out 10 full seconds later.

Also note that the error message about the closed channel state occurs on the proxy-go side before the client closes their end of the datachannel. From 13:58:34 on, the client believes the channel is still open and is trying to read data.

Last edited 4 weeks ago by cohosh

comment:9 in reply to:  7 Changed 4 weeks ago by cohosh

Replying to cmm323:

Have you looked at the snowflake logs? I forgot where they are, but snowflake produces its own logs, probably in the data dir.

Sorry I just saw your question. Snowflake clients have logging turned off by default: https://gitweb.torproject.org/pluggable-transports/snowflake.git/tree/client/snowflake.go#n72
but you can set a log path using the -log option.

I'm actually using pion/webrtc at the proxy-go instance (which doesn't speak to little-t-tor directly and doesn't have a datadir). For this I'm again using the -log option to set my own logs. As you can see above, I'm using both to compare :)

If you're interested in contributing to snowflake, you can check out the READMEs in each of the directories for more information, but you might have to look at the source code in some cases to see what the behaviour is.

Last edited 4 weeks ago by cohosh

comment:10 Changed 4 weeks ago by cohosh

Upon further investigation, it looks like the Snowflake bridge is actually closing the connection to the proxy at that 13:58:34 timestamp. That leads me to believe that the webrtc library is somehow misordering or mangling data, causing the bridge to error out the connection.

I checked the snowflake server logs and see these messages:

2019/06/18 13:58:34 error copying ORPort to WebSocket
2019/06/18 13:58:34 error copying WebSocket to ORPort

comment:11 Changed 4 weeks ago by cohosh

A byte-by-byte comparison of data sent by the client and data received from OnMessage by the proxy shows that they differ :/

Interestingly, data sent by the proxy (pion) and received by the client (go-webrtc) appears fine. So maybe pion/webrtc isn't ordering data correctly upon receipt?

Last edited 4 weeks ago by cohosh

comment:12 Changed 4 weeks ago by cohosh

A closer inspection shows that a big chunk of data (~298KB) went missing on the proxy side, but other than that the bytes seem to be in order. This shouldn't happen if the channel is reliable. I've confirmed the channel type is set to reliable and ordered, so this could be a bug.

comment:13 Changed 4 weeks ago by dcf

Great work on this, cohosh. Missing data does seem to point to a bug in the pions implementation. backkem should be Cc'ed on this ticket so they'll get the report.

My first guess is that the bug lies somewhere inside pion/sctp. As I understand it, a reliable data channel is SCTP inside DTLS. A working SCTP should make it impossible to drop a chunk of data.

comment:14 Changed 4 weeks ago by cohosh

Cc: backkem added

Found the issue. The reassembly queue is returning an io.ErrShortBuffer error. It seems the dataChannelBufferSize constant is too small for the data that the client is sending: datachannel.go#L16

The reassembly queue works by concatenating all of the fragments for an SCTP Stream Sequence Number (SSN) and trying to read them into the provided buffer all at once. If the buffer is too short, an io.ErrShortBuffer is returned from reassemblyQueue.read here, but the function calling it (Stream.ReadSCTP) doesn't return an error, and the data for that sequence number is simply lost here. Note in particular that the error returned from the reassemblyQueue read is being overwritten.

There are a few bugs here:

  1. If the buffer is too small, we should split up the reads into multiple subsequent reads.
  2. The error for the reassembly queue should be checked.

I can write up a patch for this, now that I have a good handle on what's going on.

comment:15 Changed 4 weeks ago by cohosh

Looking at the webrtc specification it seems to be a good idea to preserve user message boundaries when passing data to OnMessage().

It also looks like the pion webrtc implementation is set up to check for io.ErrShortBuffer errors: datachannel.go#L292, but it isn't handled.

I think I'll go about this by writing two patches:

  • a patch for pion/sctp that correctly forwards the error message
  • a patch for pion/webrtc that calls ReadDataChannel again with a larger buffer

comment:16 in reply to:  15 ; Changed 4 weeks ago by cohosh

Replying to cohosh:

Looking at the webrtc specification it seems to be a good idea to preserve user message boundaries when passing data to OnMessage().

It also looks like the pion webrtc implementation is set up to check for io.ErrShortBuffer errors: datachannel.go#L292, but it isn't handled.

I think I'll go about this by writing two patches:

  • a patch for pion/sctp that correctly forwards the error message

Fixed with https://github.com/pion/sctp/pull/51

  • a patch for pion/webrtc that calls ReadDataChannel again with a larger buffer

Fixed with https://github.com/pion/webrtc/pull/719

And Snowflake clients are now bootstrapping to 100% :)

comment:17 Changed 4 weeks ago by cmm323

Great! One concern I have is that there may be specific features in the pion implementation that differ from the native implementation, which would make it easy to block.

comment:18 in reply to:  17 Changed 4 weeks ago by Sean-Der

Replying to cmm323:

Great! One concern I have is that there may be specific features in the pion implementation that differ from the native implementation, which would make it easy to block.

Hi! I am Sean DuBois (one of the Pion WebRTC devs). We *should* have zero differences, and if any do pop up I will try my best to fix them :)

We run our test suite against both Pion and the browser implementation. We use the same Go code in both cases (but compile it to WASM for the browser).

Thanks again for the fixes cohosh! Go is amazing, it makes things so easy to fix.

comment:19 in reply to:  17 ; Changed 4 weeks ago by dcf

Replying to cmm323:

Great! One concern I have is that there may be specific features in the pion implementation that differ from the native implementation, which would make it easy to block.

I am not too worried about that at this point, because what little research we did using libwebrtc (doc/Snowflake/Fingerprinting) showed that even with the native library, Snowflake did not match other applications. I don't think swapping one library for another costs much at this point, in that respect.

Replying to Sean-Der:

We *should* have zero differences

I would be surprised if this is the case--unless pion has paid extraordinary attention to matching externally visible protocol implementation details, which goes farther than interoperability. What about the order of ciphersuites in the DTLS handshake, or the metadata inside STUN messages? One of the things we found is that there's no single "WebRTC" fingerprint, nor even a single "Chrome WebRTC" fingerprint--it depends on the specific application. That said, I'm glad that you are involved, and I am hopeful that pion will be easier to adapt if and when needed.

comment:20 in reply to:  16 ; Changed 4 weeks ago by dcf

Replying to cohosh:

And Snowflake clients are now bootstrapping to 100% :)

A next step is perhaps to run and monitor a pion-based proxy-go alongside the existing libwebrtc-based ones? We can check for crashes and see if there are anomalies with regard to number of clients handled, for example.

comment:21 in reply to:  20 ; Changed 4 weeks ago by cohosh

Replying to dcf:

Replying to cohosh:

And Snowflake clients are now bootstrapping to 100% :)

A next step is perhaps to run and monitor a pion-based proxy-go alongside the existing libwebrtc-based ones? We can check for crashes and see if there are anomalies with regard to number of clients handled, for example.

Sounds good. Do you want to do a code review first?

I've also started the process of switching over to pion/webrtc in the client.

comment:22 in reply to:  19 ; Changed 4 weeks ago by cohosh

Replying to dcf:

Replying to cmm323:

Great! One concern I have is that there may be specific features in the pion implementation that differ from the native implementation, which would make it easy to block.

I am not too worried about that at this point, because what little research we did using libwebrtc (doc/Snowflake/Fingerprinting) showed that even with the native library, Snowflake did not match other applications. I don't think swapping one library for another costs much at this point, in that respect.

There's also something to be said for the trade-off in complexity/adaptability, which I believe motivated the move from a headless Firefox helper to uTLS in meek: https://trac.torproject.org/projects/tor/ticket/29077

If pion/webrtc makes it so we can build snowflake for Windows and Android more easily and if they are amenable to us proposing changes based on observed blocking, then this seems like a good path forward to me.

comment:23 in reply to:  19 Changed 4 weeks ago by Sean-Der

Replying to dcf:

Replying to cmm323:

Great! One concern I have is that there may be specific features in the pion implementation that differ from the native implementation, which would make it easy to block.

I am not too worried about that at this point, because what little research we did using libwebrtc (doc/Snowflake/Fingerprinting) showed that even with the native library, Snowflake did not match other applications. I don't think swapping one library for another costs much at this point, in that respect.

Replying to Sean-Der:

We *should* have zero differences

I would be surprised if this is the case--unless pion has paid extraordinary attention to matching externally visible protocol implementation details, which goes farther than interoperability. What about the order of ciphersuites in the DTLS handshake, or the metadata inside STUN messages? One of the things we found is that there's no single "WebRTC" fingerprint, nor even a single "Chrome WebRTC" fingerprint--it depends on the specific application. That said, I'm glad that you are involved, and I am hopeful that pion will be easier to adapt if and when needed.

Oh yes you are 100% right about that. I haven't paid any attention to that, I have only been concerned about interoperability.

This hasn't been a concern for me, but I would love to make this work for you. Maybe we can write some sort of test suite that does ICE/DTLS/SCTP and compares libwebrtc/Pion nightly and brings down the drift.

Would it also be helpful to 'randomize' Pion? We could add features that help make Snowflake more resistant to fingerprinting. I am not up to date on concepts/needs around censorship circumvention.

comment:24 in reply to:  22 Changed 4 weeks ago by Sean-Der

Replying to cohosh:

If pion/webrtc makes it so we can build snowflake for Windows and Android more easily and if they are amenable to us proposing changes based on observed blocking, then this seems like a good path forward to me.

I would love to take any/all changes you propose! We have a few active contributors, and everyone just needs one sign-off to merge to master.

I am really excited to get your involvement/oversight. The quality of Tor's work is so high that I think it will end up making Pion a lot better :) Pion doesn't make any money and isn't a corporate project, so I only do this after work hours, but I try to move quickly.

comment:25 Changed 4 weeks ago by backkem

Hi guys, sorry for the delay. Apparently, I didn't have an email address configured.

Really cool to see this moving forward. We've had users run pion/webrtc on all sorts of platforms, including Windows and Android. Since the entire stack is in pure Go with little to no dependencies, it should be highly portable. We even act as a wrapper around the JavaScript WebRTC implementation when compiling to the JS/WASM build target.

Some comments:

comment:26 Changed 3 weeks ago by cohosh

Status: assigned → accepted

Moving tickets I'm currently working on to "accepted"

comment:27 in reply to:  25 Changed 3 weeks ago by cohosh

Replying to backkem:

  • As mentioned in the original ticket and the SCTP PR, you can 'detach' a data channel to get access to the underlying idiomatic API based on the io.ReadWriteCloser.

Thanks! I replied there but I'm copying the answer here to keep everyone in the same loop. While this does solve the problem of how to handle an io.ErrShortBuffer, it doesn't fix the bug in SCTP. I think we'd still prefer to have access to the OnMessage callback anyway instead of implementing our own readLoop, but depending on how you decide to handle the io.ErrShortBuffer return in your readLoop we may need to go that route.

Thanks! This is really interesting, we'll take a look. We have a few tickets about alternative signaling as well: #25985 and #25985

comment:28 Changed 3 weeks ago by cohosh

Finished porting the client to pion/webrtc: https://github.com/cohosh/snowflake/commit/6bbb9a4b820f34aef4d45c14acff72374307da5e

Most of the changes were small, but there were some tricky differences to work around:

  • OnNegotiationNeeded is no longer supported, but putting the code from that callback into a goroutine and calling it immediately after creation of the PeerConnection seems to work just fine.
  • By default, something called the "trickle method" is set to false, which means the OnICEGatheringStateChange callback never gets called (see icegatherer.go#L262 vs icegatherer.go#L143). We get around this by using a SettingEngine here, but it was a bit confusing. The default API should probably have isTrickle set to true by default.
  • OnICECandidate returns a nil candidate when gathering is complete. This isn't really a problem, but I got a silent failure when I didn't check for it. It would be nice if this behaviour were documented in the GoDocs.

comment:29 in reply to:  21 Changed 2 weeks ago by dcf

Replying to cohosh:

Replying to dcf:

Replying to cohosh:

And Snowflake clients are now bootstrapping to 100% :)

A next step is perhaps to run and monitor a pion-based proxy-go alongside the existing libwebrtc-based ones? We can check for crashes and see if there are anomalies with regard to number of clients handled, for example.

Sounds good. Do you want to do a code review first?

The proxy-go changes look good to me.

comment:30 in reply to:  28 Changed 2 weeks ago by dcf

Replying to cohosh:

Finished porting the client to pion/webrtc: https://github.com/cohosh/snowflake/commit/6bbb9a4b820f34aef4d45c14acff72374307da5e

Most of the changes were small, but there were some tricky differences to work around:

  • OnNegotiationNeeded is no longer supported, but putting the code from that callback into a goroutine and calling it immediately after creation of the PeerConnection seems to work just fine.
  • By default, something called the "trickle method" is set to false, which means the OnICEGatheringStateChange callback never gets called (see icegatherer.go#L262 vs icegatherer.go#L143). We get around this by using a SettingEngine here, but it was a bit confusing. The default API should probably have isTrickle set to true by default.
  • OnICECandidate returns a nil candidate when gathering is complete. This isn't really a problem, but I got a silent failure when I didn't check for it. It would be nice if this behaviour were documented in the GoDocs.

This is great, thanks. I looked over the changes and didn't spot any problems. Here there's still a reference to OnNegotiationNeeded.

comment:31 Changed 2 weeks ago by cohosh

I just modified my PRs to pion/webrtc after listening to their feedback. They linked an interesting article https://lgrahl.de/articles/demystifying-webrtc-dc-size-limit.html about message size limits in different implementations of WebRTC (which was the root of our problem). Looks like no implementations handle this particularly well.

comment:32 Changed 2 weeks ago by backkem

I wanted to quickly reiterate that this is one of the reasons for the existence of our (Pion WebRTC) detach API. It provides you with an io.ReadWriter. With this pattern the upper-layer protocol (your protocol) can supply the buffer for reading/writing. This means, since you likely know what the appropriate buffer size is for your use-case, you can allocate it accordingly. There is actually a long-running issue to expose this in the WebRTC API. More modern web APIs are modeled in a similar way, e.g. incoming-stream & outgoing-stream. To ensure compatibility with other WebRTC implementations it is of course still recommended to respect the buffer-size limits mentioned in Lennart's blog post.

comment:33 Changed 2 weeks ago by cohosh

Hi @backkem,

Thanks, the size limit change in my pull request actually affects reads only. I still think you want to make that change, since your peers have a strictly larger write limit (64KiB) than read limit (16KiB), meaning that peers that both use your implementation will frequently drop data from messages that are too large when using the normal Send/OnMessage API. Respecting the limits in the linked article would mean either increasing your read limit to match the write limit of 64KiB or decreasing your write limit to match the read limit of 16KiB (see https://github.com/pion/webrtc/issues/718). Either would help the problem on this end.

Also note that I modified my pull request from the original to not increase the buffer on an io.ErrShortBuffer, as explained here

Last edited 2 weeks ago by cohosh

comment:34 Changed 7 days ago by cohosh

As an update to this, the pion/webrtc and pion/sctp pull requests have been approved and merged to master.

I'm going to deploy a proxy-go instance built with pion/webrtc and try running a client with pion/webrtc and see how it goes.

comment:35 in reply to:  34 Changed 3 days ago by cohosh

Replying to cohosh:

I'm going to deploy a proxy-go instance built with pion/webrtc and try running a client with pion/webrtc and see how it goes.

I've started a new proxy-go instance on snowflake.bamsoftware.com named pion-proxy which runs a pion library version of the proxy-go instance located in /usr/local/bin/pion-proxy-go. Logs are available in /home/snowflake-proxy/pion-proxy.d

comment:36 Changed 35 hours ago by gaba

Keywords: anti-censorship-roadmap-august added; ex-sponsor-19 removed
Points: 5

comment:37 Changed 31 hours ago by gaba

Sponsor: Sponsor28-can → Sponsor28-must