Some relays have been getting this warning for the past few days. It's unclear what's causing it.
From briefly reading the code, it seems like someone is trying to establish a rendezvous point (RP) on the relay after having already asked the relay to be an RP on the same circuit. I'm not sure whether a user can actually succeed in establishing multiple RPs on one circuit. That is, it's still unclear whether this is a Tor client bug or perhaps a DoS attacker.
We should probably look into this message a bit more.
I am experiencing the same thing. It happens every couple of minutes now. It first appeared on Apr 20, really infrequently (once a day or so). My relay is on v0.2.6.7 and was started on Apr 19, three hours before this message first appeared.
This happens every minute these days on moria5. Here's a hint:
May 21 12:06:06.383 [info] rend_mid_establish_rendezvous(): Received an ESTABLISH_RENDEZVOUS request on circuit 1773468333
May 21 12:06:06.383 [info] rend_mid_establish_rendezvous(): Established rendezvous point on circuit 1773468333 for cookie F24FC724
May 21 12:06:06.486 [info] rend_mid_establish_rendezvous(): Received an ESTABLISH_RENDEZVOUS request on circuit 1773468333
May 21 12:06:06.486 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
It's unclear yet whether it's a bug in our own Tor client or somebody else's intentionally misbehaving Tor client. It sure seems to happen a lot.
I just got this as well. I am running the relay named VickMackey. It has only been up about a week, and my platform is Tor 0.2.6.9 on Linux. The log entry is as follows.
Jun 21 08:29:11.000 [warn] Tried to establish rendezvous on non-OR circuit with purpose Acting as rendevous (pending)
Looking at the code, clients send ESTABLISH_RENDEZVOUS cells in rend_client_send_establish_rendezvous() through rend_client_rendcirc_has_opened(), which is called only in circuit_has_opened(). Hence, at first glance there doesn't seem to be any relaunch mechanism that would establish two rendezvous points on the same circuit.
Next step is to check if this can happen through cannibalization or something. Otherwise, it does seem like a misbehaving Tor client.
I don't see anything in the cannibalization process that could cause a client to choose a circuit with the purpose CIRCUIT_PURPOSE_C_ESTABLISH_REND. circuit_find_to_cannibalize() only returns general-purpose circuits.
I dug a bit here, and what I think could possibly be happening (the code is quite difficult to follow, so somebody else should confirm) is this: in circuit_get_open_circ_or_launch(), toward the end, when we open a new circuit with circuit_launch_by_extend_info(), if the circuit returned by that function happens to be open (CIRCUIT_STATE_OPEN), we call circuit_has_opened(). Also, in circuit_send_next_onion_skin(), when the state of the circuit's cpath is CPATH_STATE_OPEN, we again call circuit_has_opened().
So, what could be happening (NOT 100% sure) is that we ask for a new circuit and it opens super fast: circuit_send_next_onion_skin() sets the circuit state to open and calls circuit_has_opened(), and then circuit_has_opened() is called again in circuit_get_open_circ_or_launch() because the circuit we got there is open. That could explain why this error doesn't occur for every ESTABLISH_RENDEZVOUS cell but only some of them in the network: it's a clear race condition, if I'm right.
Someone with better knowledge of circuit building should look at that in more depth though because I'm a bit unsure of the fix (if any...).
Does anyone want to try to "confirm" my comment:8 ? :)
If it sounds plausible, the fix is not complicated, but someone needs to follow the code and somehow confirm that this could actually happen. Deferring to 030 because I don't think this is critical for 029 at all. Feel free to roll back if you think otherwise.
Trac: Milestone: Tor: 0.2.9.x-final to Tor: 0.3.0.x-final
I recently got this message three times (on 16 December 2016 and 18 December 2016) on 0.2.9.6-rc, so it's still happening.
Branch bug15618_030-testing in dgoulet's repo includes a patch that will help us confirm or refute the theory described in comment:8 . I have compiled that branch and hardcoded the relay running it as the RP, but I can't properly test it at this time because of #21056 (moved), so we'll have to wait a little more; I'll report back with results.
The problem is the message:
[warn] BLAM. Circuit has_opened() called 3 times.
is logged on the client instance when trying to open more than one concurrent connection to the same onion service with the same (hardcoded) RP but different circuits, because requests are sent with SOCKS username/password isolation. Nothing unusual on the relay instance: no warnings at all.
With the log at debug level on the client instance, I was able to see that the hardcoded RP is, in the majority of circuits, hop number 4, which suggests cannibalization, because the RP counted from the client side should be hop number 3, right?