Jul 20 00:45:16.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '193.23.244.244:80'. Please correct.
Jul 20 00:45:16.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '171.25.193.9:443'. Please correct.
Jul 20 00:45:16.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '194.109.206.212:80'. Please correct.
Jul 20 00:45:16.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '128.31.0.34:9131'. Please correct.
Jul 20 00:45:16.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '154.35.32.5:80'. Please correct.
Jul 20 00:45:16.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '76.73.17.194:9030'. Please correct.
Jul 20 00:45:16.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '208.83.223.34:443'. Please correct.
Trac: Username: tmpname0901
Hi tmpname! Sorry for the confusion here. We had some trouble from a bunch of relays on this /16, so we blocked it all for now. I'm hoping that soon we'll be able to implement more precise blocks to keep those relays out while allowing the others back in. Thanks for your patience.
On a side note, this was a bit of a miss on our part. When flagging these relays, we noted that we should file a ticket so we wouldn't forget to narrow the policy later, but then we forgot. We're presently reorganizing how we handle this space, so it should go more smoothly in the future. Thanks for the reminder ticket! :)
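(For context, a rough illustration, not the authorities' actual configuration, of what such a block can look like in a directory authority's torrc via the AuthDirReject option; the addresses below are placeholders, not the real range involved here.)
# Hypothetical excerpt from a directory authority's torrc; addresses are placeholders.
# Blocking a whole /16 rejects descriptors from every relay in that range:
AuthDirReject 10.1.0.0/16
# A narrower pattern would keep out only the problem hosts:
# AuthDirReject 10.1.2.0/24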
Node fingerprint is:
Firestorm 3FB4E77571D1770D8451E09E93109B8584D07362
Log is:
Sep 28 13:20:45.000 [notice] Now checking whether ORPort 50.7.76.163:443 and DirPort 50.7.76.163:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Sep 28 13:20:46.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent.
Sep 28 13:20:47.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.
Sep 28 13:20:52.000 [notice] Performing bandwidth self-test...done.
Sep 28 13:21:44.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '194.109.206.212:80'. Please correct.
Sep 28 13:21:44.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '193.23.244.244:80'. Please correct.
Just to keep this updated, the issue does still appear to be occurring:
Oct 14 09:52:20.000 [notice] Bootstrapped 100%: Done.
Oct 14 09:52:20.000 [notice] Now checking whether ORPort 50.7.76.163:443 and DirPort 50.7.76.163:9030 are reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Oct 14 09:52:21.000 [notice] Self-testing indicates your DirPort is reachable from the outside. Excellent.
Oct 14 09:52:22.000 [notice] Self-testing indicates your ORPort is reachable from the outside. Excellent. Publishing server descriptor.
Oct 14 09:52:28.000 [notice] Performing bandwidth self-test...done.
Oct 14 09:53:13.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '194.109.206.212:80'. Please correct.
Oct 14 09:53:13.000 [warn] http status 400 ("Authdir is rejecting routers in this range.") response from dirserver '193.23.244.244:80'. Please correct.
So, a few days back my VM got screwed up and I had to bring it back up from scratch, and now I don't see this issue anymore. The IP address hasn't changed, so I'd guess the issue is solved (for my IP at least).
(Perhaps unrelated:) my VM has been running as a relay for more than 3 days now, but I don't see it listed on Atlas yet.
> So, a few days back my VM got screwed up and I had to bring it back up from scratch, and now I don't see this issue anymore. The IP address hasn't changed, so I'd guess the issue is solved (for my IP at least).
Nope, my bad. Just checked the logs earlier today and the issue still persists :(
I don't seem to see the 'authdir is rejecting..' message anymore (at least for the past week). Could someone comment on whether the log below looks OK, or if there's something to worry about?
● tor.service - Anonymizing overlay network for TCP
Loaded: loaded (/usr/lib/systemd/system/tor.service; enabled)
Active: active (running) since Sat 2015-04-18 22:43:37 UTC; 5 days ago
Process: 29617 ExecStop=/bin/kill -INT ${MAINPID} (code=exited, status=0/SUCCESS)
Main PID: 29620 (tor)
CGroup: /system.slice/tor.service
└─29620 /usr/bin/tor --runasdaemon 0 --defaults-torrc /usr/share/tor/defaults-torrc -f /etc/tor/torrc
Apr 23 22:43:39 nagato.amegakure.ch Tor[29620]: TLS write overhead: 7%
Apr 24 04:43:39 nagato.amegakure.ch Tor[29620]: Heartbeat: Tor's uptime is 5 days 6:00 hours, with 0 circuits open. I've sent 2.12 MB and received 57.73 MB.
Apr 24 04:43:39 nagato.amegakure.ch Tor[29620]: Average packaged cell fullness: 43.814%
Apr 24 04:43:39 nagato.amegakure.ch Tor[29620]: TLS write overhead: 7%
Apr 24 10:43:39 nagato.amegakure.ch Tor[29620]: Heartbeat: Tor's uptime is 5 days 12:00 hours, with 0 circuits open. I've sent 2.22 MB and received 60.56 MB.
Apr 24 10:43:39 nagato.amegakure.ch Tor[29620]: Average packaged cell fullness: 43.667%
Apr 24 10:43:39 nagato.amegakure.ch Tor[29620]: TLS write overhead: 7%
Apr 24 16:43:39 nagato.amegakure.ch Tor[29620]: Heartbeat: Tor's uptime is 5 days 18:00 hours, with 0 circuits open. I've sent 2.32 MB and received 63.42 MB.
Apr 24 16:43:39 nagato.amegakure.ch Tor[29620]: Average packaged cell fullness: 43.537%
Apr 24 16:43:39 nagato.amegakure.ch Tor[29620]: TLS write overhead: 7%
To be clear, a few directory authorities rejecting the relay is not enough to keep it out of the consensus. What I think is happening here is that a couple of directory authorities are ignoring our requests to remove those reject lines from their config. Not the end of the world.
Still, it's a hassle for the relay operators, I agree.
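(To make the "not enough to keep it out" point concrete, a back-of-the-envelope sketch in Python, assuming the simplified "listed by a majority of voting authorities" rule and the nine authorities mentioned later in this ticket; the real consensus rules have more detail than this.)
# Rough majority check; numbers and rule simplified for illustration.
total_authorities = 9
rejecting = 2                           # e.g. the two authorities in the logs above
listing = total_authorities - rejecting
needed = total_authorities // 2 + 1     # majority threshold: 5 of 9
print(listing >= needed)                # True: the relay can still make the consensus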
I think the real answer is to have much more thorough scripts for tracking which authorities are rejecting which relays.
Sebastian, atagar, Micah, Linus, did we make any useful concrete plans about the above 'more thorough scripts' in Berlin?
Trac: Sponsor: N/A to N/A; Severity: N/A to Normal; Summary: "Authdir is rejecting routers in this range." to "Some directory authorities reject IP ranges long after we ask them to stop"; Cc: Sebastian to Sebastian, ln5, micah
The DirAuth meeting in Berlin (wiki:org/meetings/2015SummerDevMeeting/DirectoryAuthorityOperators) discussed many things, and there has been some progress on some of them. However, we didn't actually discuss scripts for tracking which auths are rejecting which relays. There has been some movement in shoring up some of the foundational pieces so that we can do this, but there is still some work to do. Reviewing what these things are makes me realize that many of these aren't in trac, and need to be.
standardize the format of bad.conf: dgoulet came up with a format that is machine-parseable (#17299 (moved)) and a script to generate those entries (#12261 (moved))
the plan was that once we got that format standardized, we would start to clean out old entries (#18164 (moved))
we also wanted to push more people to use the dirauth bits. ln5 now does this, so that's one step forward
we need to find a way to determine vote divergence so we can find out who is and who is not voting on things that everyone else agrees on. DocTor is the perfect framework for monitoring everything, so we need to somehow link those things (or parse each DirAuth vote to find divergences; a rough sketch of that approach follows this list. I don't know if Reject lines are published in the votes. I know Invalid and BadExit are there, but I don't think Reject is; perhaps that is because publishing it would leak that information?) - I created #18165 (moved) for this
have a frank discussion with dirauth operators about their role and their impact, and come up with some standard requirements; otherwise some of their "power" will be reduced (https://trac.torproject.org/projects/tor/ticket/16558)
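As a strawman for the "parse each DirAuth vote" idea above, here is a rough Python sketch, not DocTor and not an existing tool, that fetches each authority's own current vote from its DirPort (the dir-spec status-vote URL, which is worth double-checking before relying on it) and reports which relays each authority leaves out compared to its peers. The addresses are copied from the logs earlier in this ticket; note that a relay missing from one vote is not proof of a Reject line, it can also just be a reachability hiccup.
import urllib.request

AUTHORITIES = {
    "dizum": "194.109.206.212:80",
    "dannenberg": "193.23.244.244:80",
    "moria1": "128.31.0.34:9131",
}

def vote_identities(addr):
    # Fetch the authority's own vote over plain HTTP from its DirPort.
    url = "http://%s/tor/status-vote/current/authority" % addr
    with urllib.request.urlopen(url, timeout=30) as resp:
        text = resp.read().decode("utf-8", "replace")
    # Each relay in a vote starts with an "r " line; field 3 is its identity.
    return {line.split()[2] for line in text.splitlines() if line.startswith("r ")}

votes = {name: vote_identities(addr) for name, addr in AUTHORITIES.items()}
union = set.union(*votes.values())
for name, identities in sorted(votes.items()):
    missing = union - identities
    print("%s omits %d relays that some other authority voted for" % (name, len(missing)))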
Meanwhile, ln5 and I also started voting BadExit and are running exitmap confirmations. I started running confirmations for the hsdir work that dgoulet and donnacha are doing, and more of us have been paying attention to bad-relays and reacting when things happen (instead of only arma). So things are improving, but they are not nearly there yet.
Moving to the DirAuth subcomponent now that we have one.
I think the cause of this particular ticket was directory authority operators who are responsive to our "help, please blacklist this attacking relay!" mails, but then either don't act when we send the "ok, you can stop blacklisting those now" follow-up, or we don't send the follow-up at all.
To name names, dizum was the problem child in this particular ticket.
Micah, do you have a sense of whether the directory authority operators who would otherwise be in this situation are now successfully using the dirauth-conf repo?
Trac: Reviewer: N/A to N/A; Component: Core Tor/Tor to Core Tor/DirAuth
(In an attempt to do some cleanup for this component)
dizum is still not using dirauth-conf.git, so the only way Alex can apply the relay rules (badexit, reject, ...) is by applying the diff he receives by email when a commit occurs.
That being said, the other 8 dirauths are super responsive about this these days, and even dizum is by email. I don't think it's worth the effort today to put time into a script that tracks which dirauth is not applying rules.
If you disagree, no problem, but I would propose opening a ticket for that very specific task: writing a script/service that regularly parses the votes for this, and CC'ing the network health team.