Trac: Sponsor: N/A to Sponsor19; Summary: New design for client -- proxy protocol to New design for client -- proxy protocol for Snowflake; Priority: Very High to Medium
At the moment we're focusing on the proxy--broker protocol, but it is somewhat related. I'll assign it to myself since we'll cover it together with #29207 (moved).
I'm going to ask for an incremental review on this one, mostly just to get another pair of eyes on what I've done and my next steps before I sink more time into possibly going in the wrong direction: https://github.com/cohosh/snowflake/compare/sequencing
What I've done:
* Added a ReadWriteCloser that takes data to be sent, adds a header before passing it on to the Write function of the WebRTC connection, and removes the header in a Read on the other side. This is implemented as a common package because it's used by both the client and server.
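To make this concrete, here is a rough sketch of what the sending side could look like. Field widths, byte order, and the s.seq/s.ack counters are illustrative assumptions, not the actual code on the branch; it assumes encoding/binary for serialization.
{{{
// Sketch of the sending side: prepend a seq/ack/length header to each
// chunk before handing it to the underlying WebRTC data channel.
// The 4+4+4-byte layout and big-endian byte order are assumptions here.
func (s *SnowflakeReadWriter) Write(b []byte) (int, error) {
	header := make([]byte, 12)
	binary.BigEndian.PutUint32(header[0:4], s.seq)           // sequence number
	binary.BigEndian.PutUint32(header[4:8], s.ack)           // acknowledgement
	binary.BigEndian.PutUint32(header[8:12], uint32(len(b))) // payload length
	if _, err := s.Conn.Write(append(header, b...)); err != nil {
		return 0, err
	}
	s.seq += uint32(len(b))
	return len(b), nil
}
}}}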
Next steps:
* Implement a timeout (maybe making the ReadWriteCloser a net.Conn and implementing SetReadDeadline and a fixed-size window to stop sending packets before others have been acknowledged)
* Make it backwards compatible. I'm thinking of doing this by having an extra field in the header that should be set to all zeros. If it's not zeros, we just forward the data as is. If it is zeros, we extract the header.
* Make the tests better. Right now I'm just reading and writing from a buffer; we should test with network connections.
> * Make it backwards compatible. I'm thinking of doing this by having an extra field in the header that should be set to all zeros. If it's not zeros, we just forward the data as is. If it is zeros, we extract the header.
Actually, another way of handling backwards compatibility is to make a new handler for the new version, or to add an HTTP header with some kind of versioning information.
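To illustrate the zero-field idea, a minimal sketch of the check (the field's offset, width, and these names are assumptions, not a defined format):
{{{
// Hypothetical constants, for illustration only.
const (
	versionFieldOffset = 12 // assumed: the extra field follows seq/ack/length
	versionFieldLen    = 4
)

// isLegacyTraffic reports whether the data should be forwarded untouched.
// All zeros in the extra field means "new protocol": extract the header.
// Anything else is assumed to be a legacy client's raw traffic.
func isLegacyTraffic(header []byte) bool {
	if len(header) < versionFieldOffset+versionFieldLen {
		return true
	}
	for _, b := range header[versionFieldOffset : versionFieldOffset+versionFieldLen] {
		if b != 0 {
			return true // not all zeros: forward the data as is
		}
	}
	return false // all zeros: strip the snowflake header first
}
}}}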
> I'm going to ask for an incremental review on this one, mostly just to get another pair of eyes on what I've done and my next steps before I sink more time into possibly going in the wrong direction: https://github.com/cohosh/snowflake/compare/sequencing
High-level summary: the branch splits the data stream into frames. Each frame is preceded by a 12-byte header consisting of seq, ack, and length fields. The receiver buffers an entire frame in memory, then accepts it if its sequence number exactly matches the expected sequence number, or drops it otherwise.
Overall the design looks good.
{{{
type snowflakeHeader struct {
	seq    int
	ack    int
	length int // length of the accompanying data (including header length)
}
}}}
The fields would be better as uint32, because the size of int may vary (though I think you handled it correctly by always specifying 32 bits in serialization functions). length may be better as uint16; that's plenty for one call to Write and it avoids the risk that an implementation will try to buffer 4 GB in memory.
IMO length should be exclusive of the header size. Otherwise you have to define what it means if length < 12. (I believe the current code doesn't work in this case because header.length - snowflakeHeaderLen underflows.) The packets sent by sendAck should have length = 0.
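A sketch of what that suggestion might look like (the byte order and the marshal name are assumptions; it assumes encoding/binary). With a uint16 length the header would shrink from 12 to 10 bytes:
{{{
type snowflakeHeader struct {
	seq    uint32
	ack    uint32
	length uint16 // payload length only, excluding the header
}

// marshal encodes the header as 4+4+2 = 10 bytes.
// Big-endian byte order is an assumption here.
func (h *snowflakeHeader) marshal() []byte {
	b := make([]byte, 10)
	binary.BigEndian.PutUint32(b[0:4], h.seq)
	binary.BigEndian.PutUint32(b[4:8], h.ack)
	binary.BigEndian.PutUint16(b[8:10], h.length)
	return b
}
}}}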
This should have b[:length] in place of b:
{{{
s.buffer = append(s.buffer, b...)
}}}
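With that change the line would read:
{{{
s.buffer = append(s.buffer, b[:length]...)
}}}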
This order of operations (Read then copy) will lose any data remaining in s.out at EOF. It would be better to do the copy first, and only when s.out is empty do a Read and possibly return an error.
{{{
length, err := s.Conn.Read(b)
if err != nil {
	return length, err
}
s.buffer = append(s.buffer, b...)
n := copy(b, s.out)
s.out = s.out[n:]
if n == len(b) {
	return n, err
}
}}}
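One way to apply that ordering while keeping the same buffering approach (a minimal sketch; fill is a hypothetical helper that reads one frame from s.Conn, strips the header, and appends the payload to s.out):
{{{
func (s *SnowflakeReadWriter) Read(b []byte) (int, error) {
	// Serve already-buffered payload first so nothing is lost at EOF.
	for len(s.out) == 0 {
		// fill is hypothetical: read one frame and de-header it into s.out.
		if err := s.fill(); err != nil {
			return 0, err
		}
	}
	n := copy(b, s.out)
	s.out = s.out[n:]
	return n, nil
}
}}}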
I would prefer to start with an implementation of Read that is simpler, if less efficient. I am picturing something like this (untested):
{{{
func NewSnowflakeReadWriter(sConn io.ReadWriteCloser) *SnowflakeReadWriter {
	pr, pw := io.Pipe()
	s := &SnowflakeReadWriter{
		Conn: sConn,
		pr:   pr,
	}
	go s.readLoop(pw)
	return s
}

func (s *SnowflakeReadWriter) readLoop(pw *io.PipeWriter) {
	var err error
	for err == nil {
		// strip headers and write data into the pipe
		var header snowflakeHeader
		err = readHeader(s.Conn, &header)
		if err != nil {
			break
		}
		if header.seq == s.ack {
			_, err = io.CopyN(pw, s.Conn, int64(header.length))
		} else {
			_, err = io.CopyN(ioutil.Discard, s.Conn, int64(header.length))
		}
	}
	pw.CloseWithError(err)
}

func (s *SnowflakeReadWriter) Read(b []byte) (int, error) {
	// read de-headered data from the pipe
	return s.pr.Read(b)
}
}}}
> * Implement a timeout (maybe making the ReadWriteCloser a net.Conn and implementing SetReadDeadline and a fixed-size window to stop sending packets before others have been acknowledged)

I agree that ultimately `SnowflakeReadWriter` should be a `net.Conn`, though I don't see why the read deadline should affect the sending of packets.

> * Make the tests better. Right now I'm just reading and writing from a buffer; we should test with network connections.

[net.Pipe](https://golang.org/pkg/net/#Pipe) can be good for this.
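For instance, a test along these lines (hypothetical names; it assumes the NewSnowflakeReadWriter constructor sketched above and a Write that adds headers; imports io, net, and testing) could exercise the framing over an in-memory connection:
{{{
func TestSnowflakeReadWriterPipe(t *testing.T) {
	c1, c2 := net.Pipe()
	client := NewSnowflakeReadWriter(c1)
	server := NewSnowflakeReadWriter(c2)

	go func() {
		// Error ignored for brevity in this sketch.
		client.Write([]byte("hello"))
	}()

	buf := make([]byte, 5)
	if _, err := io.ReadFull(server, buf); err != nil {
		t.Fatal(err)
	}
	if string(buf) != "hello" {
		t.Fatalf("unexpected payload %q", buf)
	}
}
}}}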
Trac: Status: needs_review to needs_revision; Summary: New design for client -- proxy protocol for Snowflake to New design for client -- server protocol for Snowflake; Description:

Right now we just send traffic. We need to add some control messages for communication between the client and proxy.

to

Right now we just send traffic. We could add some control messages or another layer for reliability/sequencing.
> I'm going to ask for an incremental review on this one, mostly just to get another pair of eyes on what I've done and my next steps before I sink more time into possibly going in the wrong direction: https://github.com/cohosh/snowflake/compare/sequencing
> The fields would be better as uint32, because the size of int may vary (though I think you handled it correctly by always specifying 32 bits in serialization functions). length may be better as uint16; that's plenty for one call to Write and it avoids the risk that an implementation will try to buffer 4 GB in memory.
Fixed in c102894
> IMO length should be exclusive of the header size. Otherwise you have to define what it means if length < 12. (I believe the current code doesn't work in this case because header.length - snowflakeHeaderLen underflows.) The packets sent by sendAck should have length = 0.
> I'm not sure about the implementation of Read:
> I would prefer to start with an implementation of Read that is simpler, if less efficient. I am picturing something like this [...]
I like this implementation a lot more. Fixed in b6fb987 and 9c1b12f
I'm going to start on implementing fixed-size send windows and cleaning up the unit tests a bit.
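For what it's worth, one common shape for a fixed-size send window is to track unacknowledged bytes and block the sender until acks free up space. A rough sketch with illustrative names (assumes the sync package), not the planned implementation:
{{{
// window limits the number of bytes in flight (sent but unacknowledged).
type window struct {
	mu       sync.Mutex
	cond     *sync.Cond
	unacked  uint32 // bytes sent but not yet acknowledged
	capacity uint32 // maximum bytes allowed in flight
}

func newWindow(capacity uint32) *window {
	w := &window{capacity: capacity}
	w.cond = sync.NewCond(&w.mu)
	return w
}

// reserve blocks until n more bytes fit in the window, then records them.
// The sender would call this before writing a frame.
func (w *window) reserve(n uint32) {
	w.mu.Lock()
	defer w.mu.Unlock()
	for w.unacked+n > w.capacity {
		w.cond.Wait()
	}
	w.unacked += n
}

// release frees n bytes when an acknowledgement arrives.
func (w *window) release(n uint32) {
	w.mu.Lock()
	defer w.mu.Unlock()
	w.unacked -= n
	w.cond.Broadcast()
}
}}}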