Custom Query (24747 matches)

Results (7 - 9 of 24747)

Ticket Resolution Summary Owner Reporter
#32664 fixed hs-v3: Segfault in hs_circ_service_get_established_intro_circ() dgoulet
Description

Reported by atagar on IRC:

<+atagar> Looks like stem's jenkins runs are presently failing due to tor segfaults: https://paste.debian.net/plain/1119133

The report:

...
Dec 02 17:03:09.077 [notice] Configured to measure directory request statistics, but no GeoIP database found. Please specify a GeoIP database using the GeoIPFile option.
Dec 02 17:03:09.085 [warn] Controller gave us config lines that didn't validate: Unknown option 'bombay'.  Failing.
Dec 02 17:03:09.085 [warn] Tor is currently configured as a relay and a hidden service. That's not very secure: you should probably run your hidden service in a separate Tor process, at least -- see https://trac.torproject.org/8742
Dec 02 17:03:09.088 [notice] Configured to measure directory request statistics, but no GeoIP database found. Please specify a GeoIP database using the GeoIPFile option.
Dec 02 17:03:09.225 [warn] Failed to find node for hop #1 of our path. Discarding this circuit.

============================================================ T= 1575306189
Tor 0.4.3.0-alpha-dev died: Caught signal 11
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(+0x21ceb5)[0x56360d945eb5]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(hs_circ_service_get_established_intro_circ+0x27)[0x56360d8327e7]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(hs_circ_service_get_established_intro_circ+0x27)[0x56360d8327e7]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(hs_service_run_scheduled_events+0x14ff)[0x56360d849ccf]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(+0x72961)[0x56360d79b961]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(+0x762f3)[0x56360d79f2f3]
/usr/lib/x86_64-linux-gnu/libevent-2.0.so.5(event_base_loop+0x6a0)[0x7f6688b925a0]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(do_main_loop+0xe5)[0x56360d79e565]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(tor_run_main+0x122d)[0x56360d78ba2d]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(tor_main+0x3a)[0x56360d7891da]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(main+0x19)[0x56360d788d59]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)[0x7f66873e72e1]
/srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor(_start+0x2a)[0x56360d788daa]
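
To make more of a trace like this, the offsets can be resolved with addr2line against the same binary. A minimal sketch, assuming the tor binary at that Jenkins path is the exact build that crashed and was compiled with debug info:

# The first frame already gives its file offset (+0x21ceb5). For the named frames,
# subtract the load base implied by that frame (0x56360d945eb5 - 0x21ceb5 = 0x56360d729000)
# from the bracketed runtime address, e.g. 0x56360d8327e7 - 0x56360d729000 = 0x1097e7.
# -f prints function names, -C demangles, -e names the binary that produced the trace.
addr2line -f -C -e /srv/jenkins-workspace/workspace/stem-tor-ci/RESULT/tor 0x21ceb5 0x1097e7
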
#32660 fixed onionoo-backend is killing the ganeti cluster metrics-team anarcat
Description

Hello!

Today I noticed that, since last Friday (UTC) morning, there have been pretty big spikes on the internal network between the Ganeti nodes, every hour. In Grafana it looks like this:

We can clearly see a correlation between the two nodes' traffic, in reverse. This was confirmed using iftop and tcpdump on the nodes during a surge.
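
(For reference, that sort of confirmation can be reproduced with a capture filter on the replication link; the interface name and port below are assumptions, not values taken from this report.)

# hypothetical: replace eth1 and 7788 (a common DRBD replication port) with the
# actual inter-node interface and the port from the DRBD resource configuration
tcpdump -ni eth1 -c 200 'tcp port 7788'   # is the hourly burst DRBD replication traffic?
iftop -i eth1 -f 'port 7788'              # live per-peer bandwidth on that link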

It seems this is due to onionoo-backend-01 blasting the disk and CPU for some reason. These are the disk I/O graphs for that host, which correlate pretty cleanly with the graphs above:

This was confirmed by an inspection of DRBD, the mechanism that synchronizes the disks across the network. It seems there's a huge surge of "writes" on the network every hour, lasting anywhere between 20 and 30 minutes. This was (somewhat) confirmed by running:

watch -n 0.1 -d cat /proc/drbd

on the nodes. Device IDs 4, 13, and 17 trigger a lot of changes in DRBD. 13 and 17 are the web nodes, so that's expected (probably log writes?), but device ID 4 is onionoo-backend, which is what led me to the big traffic graph.
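
One rough way to quantify which DRBD minor is doing the writing is to diff the per-device dw: (KiB written) counters between two samples of /proc/drbd. A minimal sketch, assuming the DRBD 8.x layout where each " N: cs:..." header line is followed by a line of counters:

# sample the dw: (disk write, KiB) counter per DRBD minor, wait five minutes,
# sample again, and print the deltas; assumes the stock DRBD 8.x /proc/drbd format
drbd_dw() {
  awk '/^ *[0-9]+:/ { dev = $1; sub(/:$/, "", dev) }
       /dw:/ { for (i = 1; i <= NF; i++) if ($i ~ /^dw:/) { sub(/^dw:/, "", $i); print dev, $i } }' /proc/drbd
}
drbd_dw > /tmp/dw.1; sleep 300; drbd_dw > /tmp/dw.2
paste /tmp/dw.1 /tmp/dw.2 | awk '{ printf "minor %s wrote %d KiB in 5 minutes\n", $1, $4 - $2 }'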

Could someone from metrics investigate?

Can I just turn off this machine altogether, considering it's basically trying to murder the cluster every hour? :)

#32659 fixed Remove IPv6 address of dgoulet's default bridge tbb-team phw
Description

The default bridge CDF2E852BF539B82BD10E27E9115A31734E378C2 has both an IPv4 and an IPv6 address but isn't reachable over IPv6. David says that he migrated to a new host a while ago, and this new host doesn't have IPv6. Let's remove the IPv6 address of this bridge. I'll push patches in a minute.
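
(For what it's worth, a reachability claim like that is quick to re-check from a host that has both address families; the addresses and port below are placeholders, not the bridge's real ones.)

# hypothetical probe of the bridge's ORPort; substitute the real IPv4/IPv6
# addresses and port from the bridge line for CDF2E852BF539B82BD10E27E9115A31734E378C2
nc -vz -w 5 192.0.2.10 443        # IPv4: should connect if the bridge is up
nc -vz -w 5 2001:db8::10 443      # IPv6: expected to fail if the new host has no IPv6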
