Opened 6 days ago

Last modified 5 days ago

#28726 new defect

Loosen restrictions on message sizes in WebSocket server

Reported by: dcf
Owned by:
Priority: Medium
Milestone:
Component: Obfuscation/Snowflake
Version:
Severity: Normal
Keywords:
Cc: ahf, dcf, arlolra
Actual Points:
Parent ID:
Points:
Reviewer:
Sponsor:


ahf couldn't bootstrap beyond 25% when running his own client, broker, and WebSocket server (i.e., not using the public infrastructure). I asked him to try relaxing the message size limit in server.go:

-const maxMessageSize = 64 * 1024
+const maxMessageSize = 10 * 1024 * 1024

This enabled him to bootstrap to 100% at least once, but "it still doesn't work most of the times i bootstrap from a clean tor instance."

What I suspect is happening is that the browser proxy is sending WebSocket messages larger than 64 KB, which is causing the WebSocket server to error and tear down the connection. How much larger than 64 KB, I don't know. The underlying websocket package returns an error like

"frame payload length of %d exceeds maximum of %d"

but we currently throw away that error message as a precaution until we've audited error logs to ensure that IP addresses don't appear.
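For context, this is roughly the kind of guard that produces such an error. The sketch below is illustrative, not the actual server.go or websocket package code; the frameHeader type and field name are assumptions:

```go
package main

import "fmt"

const maxMessageSize = 64 * 1024

// frameHeader is an illustrative stand-in for a parsed WebSocket
// frame header; the real package's type may differ.
type frameHeader struct {
	payloadLength int64
}

// checkFrameSize mirrors the kind of check that yields the
// "frame payload length ... exceeds maximum" error.
func checkFrameSize(h frameHeader) error {
	if h.payloadLength > maxMessageSize {
		return fmt.Errorf("frame payload length of %d exceeds maximum of %d",
			h.payloadLength, maxMessageSize)
	}
	return nil
}

func main() {
	// A frame larger than the limit fails; one at the limit passes.
	fmt.Println(checkFrameSize(frameHeader{payloadLength: 100 * 1024}))
	fmt.Println(checkFrameSize(frameHeader{payloadLength: 64 * 1024}))
}
```

When a check like this fails, the server has little choice but to tear down the connection, since the client has no way to learn that part of its stream was dropped.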

As an alternative to allowing larger messages at the server, we could try to ensure that proxies don't produce such over-large messages in the first place. In the browser proxy's onClientToRelayMessage, we could break recv into 64 KB chunks before pushing them onto @c2rSchedule. In proxy-go, I suspect we are succeeding by accident: the code uses io.Copy, which by default copies through a 32 KB internal buffer, comfortably under the 64 KB limit.
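The chunking idea can be sketched in Go (the browser proxy itself is CoffeeScript, so this is only an illustration of the split; the function name is hypothetical):

```go
package main

import "fmt"

const chunkSize = 64 * 1024

// chunk splits buf into slices of at most size bytes, so that no
// single WebSocket message exceeds the server's limit. The returned
// slices alias buf rather than copying it.
func chunk(buf []byte, size int) [][]byte {
	var out [][]byte
	for len(buf) > size {
		out = append(out, buf[:size])
		buf = buf[size:]
	}
	if len(buf) > 0 {
		out = append(out, buf)
	}
	return out
}

func main() {
	recv := make([]byte, 150*1024) // a hypothetical over-large message
	for _, c := range chunk(recv, chunkSize) {
		// each piece would be pushed onto the send queue separately
		fmt.Println(len(c))
	}
}
```

Each piece then goes onto the send queue as its own message, so the server never sees a frame over the limit.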

Taking a longer view, there's no good reason for a message size limit to exist at all. It stems from a time when I was naive in Go and didn't know how to write an implementation that didn't buffer the entirety of each message in memory. The reason I wrote my own WebSocket implementation was that the other package that existed at the time had no limits on message size at all, and you could trivially DoS it (out of memory) by sending a 1 TB message. (It looks like this got fixed in 2016.) A good solution would be to rewrite our WebSocket library to provide a streaming interface without message buffering, or to investigate whether other WebSocket libraries can do that.

Child Tickets

Change History (1)

comment:1 Changed 5 days ago by dcf

I ran a browser proxy for a day and a half, with a patch to keep track of the size of WebSocket messages it was sending. I only got 5 or 6 sessions, but I didn't see any sends bigger than 32 KB. And messages that big only happened once the session was pretty well established, not at the beginning during bootstrapping. (Which makes sense, because the client doesn't upload much during bootstrapping.)

So while we could probably benefit from raising the limit a little, it doesn't seem so constraining that it would cause bootstrapping errors most of the time. Maybe a faster or slower network would have different buffering behavior and give different results, though.

new max message size 3656
new max message size 5227
new max message size 4077
new max message size 7283
new max message size 11909
new max message size 23940
new max message size 25418
new max message size 32768

Here is the patch I used (applied on top of #28732 patches):

  • proxy/

    class ProxyPair
       flush_timeout_id: null
       onCleanup:   null
       id:          null
    +  max_message_size: 0

     ###
     Constructs a ProxyPair where:

    class ProxyPair
     onClientToRelayMessage: (msg) =>
       if DEBUG
         log 'WebRTC --> websocket data: ' + + ' bytes'
    +  if > @max_message_size
    +    @max_message_size =
    +    log 'new max message size ' + @max_message_size
       @c2rSchedule.push
       @flush()