As originally noted at comment:7:ticket:33211, the quic-go turbotunnel client sometimes uses 100+% CPU for a few minutes before returning to normal operation. It is specific to the quic-go implementation; it doesn't happen with the kcp-go implementation or with the non-turbotunnel client.
As best I can figure, the cause has something to do with timers created under (*session).maybeResetTimer.
session.maybeResetTimer() and session.run() were using slightly different definitions of when a keep-alive PING should be sent. Under certain conditions, this would make us repeatedly set a timer for the keep-alive, but on timer expiration no keep-alive would be sent.
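The failure mode described above can be sketched generically: the code that arms the timer and the code that handles its expiration use slightly different predicates, so the timer fires but nothing happens. The following is a hypothetical toy, not quic-go's actual code; the type and predicate names are invented for illustration.

```go
package main

import "fmt"

// conn is a toy stand-in for a QUIC session with keep-alives.
type conn struct {
	keepAliveEnabled bool
	handshakeDone    bool
}

// shouldArmKeepAliveTimer is the (hypothetical) condition used when
// setting the timer: keep-alives merely have to be enabled.
func (c *conn) shouldArmKeepAliveTimer() bool {
	return c.keepAliveEnabled
}

// shouldSendKeepAlive is the (hypothetical) condition used when the
// timer expires: it additionally requires the handshake to be done.
// Whenever the two predicates disagree, the timer fires, no PING is
// sent, and the timer is simply re-armed.
func (c *conn) shouldSendKeepAlive() bool {
	return c.keepAliveEnabled && c.handshakeDone
}

func main() {
	c := &conn{keepAliveEnabled: true, handshakeDone: false}
	fmt.Println("timer armed:", c.shouldArmKeepAliveTimer()) // true
	fmt.Println("ping sent:  ", c.shouldSendKeepAlive())     // false
}
```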
A later bugfix commit was also necessary:
The firstAckElicitingPacketAfterIdleSentTime condition was inverted in a recent PR, maybe just a typo. This was causing only one ping to be sent during periods of no activity. The ack from the first keepalive ping causes firstAckElicitingPacketAfterIdleSentTime to be set to zero. If there is no further activity, it will remain zero and prevent further keepalive pings.
If the current time is more than 50% of IdleTimeout past idleTimeoutStartTime, then this line computes a deadline in the past. A deadline in the past makes s.timer always immediately selectable, which makes session.run call right back into maybeResetTimer and set another deadline in the past. This continues until some external event, such as an arriving packet, resets idleTimeoutStartTime.
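The spin itself is easy to reproduce in isolation. The sketch below (my own minimal example, not quic-go code) arms a timer against a deadline that is already in the past: the timer channel is immediately selectable, nothing advances the deadline, and the loop burns CPU exactly as described.

```go
package main

import (
	"fmt"
	"time"
)

// spinCount runs a timer loop against a deadline that is already in
// the past for maxSpins iterations, and returns how many times the
// timer fired without the receive ever blocking.
func spinCount(maxSpins int) int {
	deadline := time.Now().Add(-time.Second) // a deadline in the past
	timer := time.NewTimer(time.Until(deadline))
	spins := 0
	for spins < maxSpins {
		<-timer.C // fires immediately: the duration is negative
		spins++
		// The buggy path sends no keep-alive here, so nothing moves
		// the deadline forward; the timer is re-armed in the past.
		timer.Reset(time.Until(deadline))
	}
	return spins
}

func main() {
	fmt.Println("timer fired", spinCount(100000), "times without ever blocking")
}
```

The 100000 iterations complete almost instantly, which is why the real bug shows up as sustained 100+% CPU rather than a hang.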
We can resolve the problem by using a more recent commit of quic-go. But commit 079279b, the one we need, is not in any published release yet (v0.14.4, released just 4 days ago, is the newest as of this writing). So we would have to pin the dependency to a specific commit rather than a tag.
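Pinning to a commit is supported by the Go modules tooling: go get accepts a commit hash as a version query and records a generated pseudo-version in go.mod. Something along these lines, assuming the github.com/lucas-clemente/quic-go import path:

```shell
# Resolve the commit to a pseudo-version and update go.mod/go.sum.
go get github.com/lucas-clemente/quic-go@079279b
```

The downside is that go.mod then carries an opaque pseudo-version until the fix lands in a tagged release we can switch back to.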
The instability of quic-go is making me like it less for Turbo Tunnel purposes. This isn't the first time I've found a bug in the most recent tagged release that had already been fixed in master (i.e., not fixed in any version that go get would fetch for you with Go modules enabled). The other time was GH#2172.