We have a bit of a tendency to forget to test IPv6 solutions properly and in a structured way. We should make sure that IPv6 is working properly with Snowflake.
Back in 2017, I inquired about IPv6 addresses. The reply was that IPv6 is only supported in one of the Greenhost data centers, namely Amsterdam:
> ...instances on our Amsterdam location we can give you an ipv6 prefix. Other locations don't have ipv6 available yet.
The bridge is in the Amsterdam location, so I activated IPv6 for it back then. But the broker is in the Hong Kong location. I sent another support request this week to ask whether anything had changed, but IPv6 is still not available in Hong Kong:
> Unfortunately there are no ipv6 block available yet for our Hong Kong customers.
My proposed solution is to migrate the broker to the Amsterdam data center.
1. Provision a new VM in Amsterdam.
2. Set it up just as the current broker and rsync past logs to it (see the sketch after this list).
3. Change the snowflake-broker.bamsoftware.com DNS record to point to the new broker.
   a. Restart our proxy-go instances. Web badge and WebExtension instances should restart automatically.
4. Run the two brokers in parallel for a while.
5. Shut down the Hong Kong broker.
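For step 2, a minimal sketch of the log sync, run from the new Amsterdam VM (the log path is an assumption about where the broker writes its logs; adjust to the real layout):

```
# Run from the new Amsterdam VM: pull past logs from the current broker.
# /home/snowflake-broker/ is an assumed location, not confirmed.
rsync -av snowflake-broker.bamsoftware.com:/home/snowflake-broker/snowflake.log* .
rsync -av snowflake-broker.bamsoftware.com:/home/snowflake-broker/metrics.log* .
```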
If all goes well, this plan means no required downtime. The downside I see is that during step 4, there will be two separate sets of logs (snowflake.log and metrics.log) being kept. We will need to either merge them or ignore one copy during the transition.
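If we do end up merging, a timestamp merge should work, since each log line starts with a sortable `YYYY/MM/DD HH:MM:SS` prefix (a sketch; the file paths are placeholders):

```
# Merge the two already-sorted snowflake.log files by their leading timestamps.
sort -m old-host/snowflake.log new-host/snowflake.log > snowflake.log.merged
```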
> My proposed solution is to migrate the broker to the Amsterdam data center.
I've set up a new host in the Amsterdam data center and documented the installation instructions at [[org/teams/AntiCensorshipTeam/SnowflakeBrokerInstallationGuide]]. I copied over usernames, passwords, and ~/.ssh/authorized_keys files, but not the contents of home directories or logs. You should be able to SSH and sudo as before.
One problem: the IPv6 doesn't work :) I did the same thing I did last time with the bridge, but I can't send or receive IPv6 on the network interface. I'm going to contact support to see if I'm doing something wrong.
I got the IPv6 situation sorted out. (Just needed a different prefix.) Here's the information. After we've switched to this new host, I'll update the SSH fingerprints in [[org/teams/AntiCensorshipTeam/SnowflakeBrokerSurvivalGuide]].
Now the question is what to do about handling the migration. We can talk about this at the next meeting on Thursday. All that's needed is to point the following DNS names to the new IPv4 and IPv6 addresses:
snowflake-broker.bamsoftware.com
snowflake-broker.freehaven.net (currently a CNAME for snowflake-broker.bamsoftware.com)
snowflake-broker.torproject.net
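Once the records are changed, a quick way to confirm each name resolves to the new host's addresses (just a convenience check, not part of the plan itself):

```
# Check that each name now resolves to the new broker's IPv4 and IPv6 addresses.
for name in snowflake-broker.bamsoftware.com snowflake-broker.freehaven.net snowflake-broker.torproject.net; do
  echo "$name"
  dig +short A    "$name"
  dig +short AAAA "$name"
done
```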
About logs, I'm thinking we just let the log files accumulate in parallel on the old and new hosts. Then, after we've made the switch, we check the old logs for sanitization and publish them. Logs we publish in the future from the new broker will partially overlap in time with those from the old, but that should be no problem.
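As a rough pre-publication pass (this assumes "sanitized" means no literal client addresses remain in the files; it is only a crude flag-for-review, not a real scrubber):

```
# Flag any lines that still contain a literal IPv4 address before publishing.
# A crude check; IPv6 literals would need a separate pattern.
grep -En '([0-9]{1,3}\.){3}[0-9]{1,3}' snowflake.log metrics.log
```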
I've copied over the contents of people's home directories.
Trac: Summary: What is the IPv6 story with Snowflake → Provide an IPv6 address for the Snowflake broker; Status: needs_information → needs_review
> Now the question is what to do about handling the migration. We can talk about this at the next meeting on Thursday.
Today we decided to start by pointing the snowflake-broker.torproject.net DNS, which is currently unused, at the new broker, so we can test it ourselves.
> Today we decided to start by pointing the snowflake-broker.torproject.net DNS, which is currently unused, at the new broker, so we can test it ourselves.
snowflake-broker.torproject.net is now set up for us. With the following proxy-go command and torrc, I was able (over an IPv6 connection to the broker) to connect to myself and bootstrap to 100%.
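(The exact command and torrc aren't reproduced here; the following is a hedged reconstruction based on the Snowflake client/proxy-go documentation of this period, so the flags and the bridge line are assumptions.)

```
# Standalone proxy, registering with the new broker:
./proxy-go -broker https://snowflake-broker.torproject.net/

# Client torrc pointing the snowflake transport at the same broker:
cat > torrc <<'EOF'
UseBridges 1
ClientTransportPlugin snowflake exec ./client -url https://snowflake-broker.torproject.net/
Bridge snowflake 0.0.3.0:1
EOF
tor -f torrc
```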
```
2019/10/17 19:37:32 http: TLS handshake error from [scrubbed]: 403 urn:acme:error:unauthorized: Account creation on ACMEv1 is disabled. Please upgrade your ACME client to a version that supports ACMEv2 / RFC 8555. See https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430 for details.
2019/10/17 19:37:37 http: TLS handshake error from [scrubbed]: acme/autocert: missing certificate
2019/10/17 19:37:41 http: TLS handshake error from [scrubbed]: acme/autocert: missing certificate
```
Okay I think we can go ahead and finish switching hosts now.
> About logs, I'm thinking we just let the log files happen in parallel on the old and new hosts. Then after we've made the switch, we check the old logs for sanitization and publish them. Logs we publish in the future from the new broker will partially temporally overlap those from the old, but that should be no problem.
The metrics logs will be the largest problem (see #322131). I propose this for the switch:
1. Stop the broker process on both the new and old hosts.
2. Copy all metrics log files from the old host to the new host.
3. Start the broker on the new host.
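A sketch of those three steps, run from the new host (the runit service name, the old host's name, and the log path are all assumptions):

```
# 1. Stop the broker on both hosts so no new metrics are written mid-copy.
ssh old-broker 'sudo sv stop snowflake-broker'
sudo sv stop snowflake-broker

# 2. Copy every metrics log file from the old host to the new one.
rsync -av old-broker:/home/snowflake-broker/metrics.log* /home/snowflake-broker/

# 3. Start the broker on the new host only.
sudo sv start snowflake-broker
```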
We're going to lose partial metrics for the collection period that overlaps with the switch, but that actually happens every time we restart the broker, since metrics for the current period (which is one day) are stored in memory until the period ends, at which point they are written to a file.
So maybe the better question to ask here is: is that okay, and if not, how do we solve it more generally?
After the DNS changes propagate, I need to restart snowflake-proxy-restartless. If I'm not mistaken, all other proxies will restart themselves and update on their own.
> After the DNS changes propagate, I need to restart snowflake-proxy-restartless.
I just did `sv restart snowflake-proxy-restartless`. I'm planning to let the others just restart themselves naturally. One of them must have already done so, because https://snowflake-broker.torproject.net/debug is currently (2019-11-14 22:10:00) showing 2 standalone proxies:
```
current snowflakes available: 7
standalone proxies: 2
browser proxies: 5
```
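A quick way to check the same counts from a shell (just a convenience, using the /debug endpoint shown above):

```
# Fetch current proxy counts from the broker's debug endpoint.
curl -s https://snowflake-broker.torproject.net/debug
```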