Hello,
I have finished deploying a test of OnionPerf on OTF Cloud.
It can be accessed at: https://199.119.112.144/ (certificate is self signed atm)
Now that we can look at the generated files, we can start thinking and planning about how we want to consume them.
I think this is a good moment to decide whether we want to include an option to generate a different format within OnionPerf, or whether we want to write a script to parse the files into a format we can use.
I am also finishing installing the tpo instance, as I need some libraries installed there first. I will let you know once this is completed by updating this ticket.
I could try to make OnionPerf measurements available for CollecTor, or maybe you prefer me to do something else instead. I honestly haven't touched Java in a very long time ;)
hiro, I'm having a difficult time reading your results. Can you put them on GitHub Pages or export some images and attach them here? And can you add explanations of what your results show and why you consider them close enough to Torperf data that we can switch over? That would be awesome!
Regarding the CollecTor integration, I started hacking on that yesterday evening. No need for you to touch Java just for this. :)
I have started this board: https://github.com/hiromipaw/onionperf-notebook/blob/master/board.md
I will be collecting graphs for all the measurements provided by onionperf. At the moment I only have the 50KiB download time but I will be adding more measurements before our meeting tomorrow.
My idea up to this point has been that if I do not find anything completely unreasonable when comparing with Torperf, I assume the measurements (and my analysis) can be considered accurate.
Looks good to me, thanks! I'd say let's go ahead and include these measurements on Metrics, and if we later find out that there are measurement issues, we can still take broken measurements out. But if nobody looks at the data, nobody will spot any issues.
So, seems like the missing piece is the CollecTor integration. I'll continue working on that.
Oh, and can you look into getting (Let's Encrypt) certificates, maybe using op-$cc.$something as domain names? Otherwise we'll first have to look into ways to tell CollecTor which (self-signed) certificate to expect on each OnionPerf instance, which might be possible but which is probably not trivial. Thanks!
Hi Karsten,
I have already renamed the op-us and op-nd instances, and renamed the files too. What's left is to rename the hostname fields in the already generated logs; doing that at the moment.
Regarding the logs, I am checking why these are generated but not copied over; maybe there is a mismatch in directories.
Regarding op-tpo I am working to have the webserver running.
Also installing Let's Encrypt right now :)
For OTF Cloud we have a new location available: Hong Kong. Are we interested in setting this up?
Regarding the .tpf files not showing up in the directory: logs are generated, but files are not "packed" into .tpf at midnight. I am trying to investigate why this is happening.
Any idea what's up with those? Could it be that this instance is configured to make more requests than it should? Can you maybe paste the configuration here?
Please review my task-21272 branch with a commit that downloads .tpf files from OnionPerf instances. Still quite rough around the edges, but passed an initial test by successfully downloading .tpf files from https://op-us.onionperf.torproject.net/.
From reading the git diffs, I have a few questions and suggestions:
Does this code reflect the transition period of processing both
torperf and onionperf files? Is that the reason for introducing
the empty property with the meaning of 'nothing to download'?
Surely tests will be added for the new functionality of Configuration's
getStringArray and getStringArrayArray methods?
I assume current code that uses these methods relies on receiving a non-empty
array or double array. It needs to be verified that we don't run
into trouble in the existing code using these methods.
All storing of files should be done by the persist-mechanism, i.e.,
a o.t.c.persist.TpfPersistence class needs to be introduced (this would
shorten TorperfDownloader quite a bit and prevent another
re-implementation of this functionality).
And, this is a prerequisite for also syncing these files with
CollecTor's sync-mechanism, which was left out because we knew
a Torperf replacement is coming.
It might be useful to put the Configuration's getUrlArray method
to work. The old Torperf code wasn't modernized, b/c we knew
it will be replaced. Now, it might be better to have
OnionPerfHosts as Url array property. This way the url-property
checking in the downloader class is not necessary anymore as Configuration
already provides it.
Why is the host-nick-name needed and could not be replaced by the hostname
itself? (For torperf it is used for further configuration.) Or, is this
one of the rough edges and not yet available?
If so, maybe have a different configuration approach instead of the hybrid-String-Url array?
It would be great to make the configuration simpler to read
and edit and keep almost all property-checking code in the
Configuration class. For example, an url-array and a string-array-array
property, the latter for the configuration similar to torperf.
The only check left for the TorperfDownloader would be to match
length of these two array properties, but there might be other
good approaches.
From reading the git diffs, I have a few questions and suggestions:
Wow, quick review there. Thanks!
Does this code reflect the transition period of processing both
torperf and onionperf files? Is that the reason for introducing
the empty property with the meaning of 'nothing to download'?
Huh, good point. What I had in mind was that we could use this code to 0) run the same configuration as we do now, 1) start collecting OnionPerf files, and soon after 2) stop collecting Torperf files. But we could achieve the same by deploying this code when we do 1) and deploying another patch for 2). I can undo that change.
Surely tests will be added for the new functionality of Configuration's
getStringArray and getStringArrayArray methods?
I assume current code that uses these methods relies on receiving a non-empty
array or double array. It needs to be verified that we don't run
into trouble in the existing code using these methods.
Right. Let me undo this part of the change.
All storing of files should be done by the persist-mechanism, i.e.,
a o.t.c.persist.TpfPersistence class needs to be introduced (this would
shorten TorperfDownloader quite a bit and prevent another
re-implementation of this functionality).
And, this is a prerequisite for also syncing these files with
CollecTor's sync-mechanism, which was left out because we knew
a Torperf replacement is coming.
Hmm, is this something you might be able to do? I'd really want us to deploy this code before Amsterdam, so that we can discuss next steps there. Otherwise, how about we defer this part until after Amsterdam?
It might be useful to put the Configuration's getUrlArray method
to work. The old Torperf code wasn't modernized, b/c we knew
it will be replaced. Now, it might be better to have
OnionPerfHosts as Url array property. This way the url-property
checking in the downloader class is not necessary anymore as Configuration
already provides it.
Why is the host-nick-name needed and could not be replaced by the hostname
itself? (For torperf it is used for further configuration.) Or, is this
one of the rough edges and not yet available?
If so, maybe have a different configuration approach instead of the hybrid-String-Url array?
It would be great to make the configuration simpler to read
and edit and keep almost all property-checking code in the
Configuration class. For example, an url-array and a string-array-array
property, the latter for the configuration similar to torperf.
The only check left for the TorperfDownloader would be to match
length of these two array properties, but there might be other
good approaches.
Well, my idea was that we should use the configured source name to make sure that an OnionPerf host doesn't send us measurement results for other sources, either accidentally or on purpose. I'm not sure how we could use the hostname there if we want to stick to the current naming schema of .tpf files. But it could be that I overlooked something.
However, if we want to keep this, what we could do is add a new method Configuration.getStringUrlMap() that returns a (sorted) map of Strings to URLs. Again, would you want to write such a thing?
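As a rough illustration of what such a method could look like (the `=>` separator syntax and the class name are assumptions for this sketch, not CollecTor's actual conventions):

```java
import java.net.MalformedURLException;
import java.net.URL;
import java.util.SortedMap;
import java.util.TreeMap;

public class StringUrlMapSketch {

  /** Parses a property like
   * "op-nl=>https://op-nl.example.org/,op-us=>https://op-us.example.org/"
   * into a sorted map from source name to base URL. Illustrative only. */
  static SortedMap<String, URL> getStringUrlMap(String propertyValue) {
    SortedMap<String, URL> result = new TreeMap<>();
    for (String entry : propertyValue.split(",")) {
      String[] parts = entry.split("=>");
      if (parts.length != 2) {
        throw new IllegalArgumentException("Malformed entry: " + entry);
      }
      try {
        result.put(parts[0].trim(), new URL(parts[1].trim()));
      } catch (MalformedURLException e) {
        throw new IllegalArgumentException("Bad URL in entry: " + entry, e);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // TreeMap keeps sources sorted by name regardless of property order.
    System.out.println(getStringUrlMap(
        "op-us=>https://op-us.example.org/,op-nl=>https://op-nl.example.org/"));
  }
}
```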
Only two short questions about naming scheme and host url:
Currently the beginning of the host url is reflected in the file name.
Why can't we just use the beginning of the host url, i.e., the op-$cc part
for checking the filenames?
Is the goal now to just download all available measurements regarding file size?
Only two short questions about naming scheme and host url:
Currently the beginning of the host url is reflected in the file name.
Why can't we just use the beginning of the host url, i.e., the op-$cc part
for checking the filenames?
Sure, we could do that. And we could extend that if we need to. Works for me.
Is the goal now to just download all available measurements regarding file size?
Hmm, I guess the main reason for specifying file sizes explicitly in TorperfFilesLines was that we didn't parse a directory listing, so we had to know which file sizes exist. But here we do know which file sizes are measured, so the only reason for specifying those would be to exclude sizes we don't recognize. Do we have to do that? I'd say as long as the file sizes stated in the .tpf file name matches the FILESIZE value contained in that file, we'll take what we get.
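The consistency check described here could look roughly like this (class and method names are made up for illustration; the `$source-$filesize-$date.tpf` name scheme is taken from the file names discussed in this ticket):

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TpfFileSizeCheck {

  /** Matches $source-$filesize-$date.tpf, e.g. op-us-51200-2017-03-12.tpf. */
  private static final Pattern NAME_PATTERN =
      Pattern.compile("^(.+)-(\\d+)-(\\d{4}-\\d{2}-\\d{2})\\.tpf$");

  /** Returns true if every FILESIZE= entry found in the measurement lines
   * matches the file size encoded in the file name. */
  static boolean fileSizeMatches(String fileName, List<String> tpfLines) {
    Matcher matcher = NAME_PATTERN.matcher(fileName);
    if (!matcher.matches()) {
      return false;
    }
    String sizeFromName = matcher.group(2);
    for (String line : tpfLines) {
      for (String keyValue : line.split(" ")) {
        if (keyValue.startsWith("FILESIZE=")
            && !keyValue.substring("FILESIZE=".length()).equals(sizeFromName)) {
          return false;
        }
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(fileSizeMatches("op-us-51200-2017-03-12.tpf",
        Arrays.asList("SOURCE=op-us FILESIZE=51200")));   // true
    System.out.println(fileSizeMatches("op-us-51200-2017-03-12.tpf",
        Arrays.asList("SOURCE=op-us FILESIZE=1048576"))); // false
  }
}
```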
Only two short questions about naming scheme and host url:
Currently the beginning of the host url is reflected in the file name.
Why can't we just use the beginning of the host url, i.e., the op-$cc part
for checking the filenames?
Sure, we could do that. And we could extend that if we need to. Works for me.
So, the way is paved for using getUrlArray, isn't it?
Is the goal now to just download all available measurements regarding file size?
Hmm, I guess the main reason for specifying file sizes explicitly in TorperfFilesLines was that we didn't parse a directory listing, so we had to know which file sizes exist. But here we do know which file sizes are measured, so the only reason for specifying those would be to exclude sizes we don't recognize. Do we have to do that? I'd say as long as the file sizes stated in the .tpf file name matches the FILESIZE value contained in that file, we'll take what we get.
I think it's fine to reduce the configuration.
The verification of filename vs. FILESIZE value should be done by the descriptor, if we feel the need to introduce it.
Hi Karsten,
I just wanted to point out that there is no configuration option in OnionPerf. The measurements are started by the onionperf process. Something we could do is investigate how to provide a small config. I could dig through the code and see what Rob thinks. What do you think?
hiro, I just looked a bit more at the data and found a few differences that we can work with. For example, OnionPerf randomly picks a file size (with different weights, though) every five minutes whereas Torperf had three separate schedules for each file size, which is okay. Also, OnionPerf alternates between direct download and download via onion address, which is different from Torperf, but which is okay, too.
However, there's one difference that we should fix: the direct downloads go to port 8080, not to port 80. The issue is that the set of exits permitting port 8080 may be different from the set permitting port 80, so this might impact measurement results. As far as I can see there are two ways to do this: either run OnionPerf on port 80 (which may have security implications), or configure a firewall rule to forward port 80 to 8080. I assume we'll want to do the latter. In that case we'll have to use OnionPerf's options --tgen-listen-port and --tgen-connect-port to tell it where to listen (8080) and where to connect (80).
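A sketch of the firewall-rule option, assuming a Linux host with iptables (the exact rule depends on the instance's network setup, and the onionperf invocation is illustrative; the two --tgen-* options are the ones named above):

```shell
# Redirect incoming TCP connections on port 80 to the TGen listener on 8080.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

# Tell OnionPerf to listen locally on 8080 while clients connect to port 80.
onionperf measure --tgen-listen-port 8080 --tgen-connect-port 80
```

Note that the iptables rule only persists until reboot unless saved with the distribution's iptables-save mechanism.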
Replying to [comment:23 karsten] and what's left from [comment:18]:
iwakeh, please find my updated task-21272 branch with some tweaks as discussed above.
Thanks for these changes!
Couldn't downloadFromOnionPerfHost do some of the filename checking before
calling downloadAndParseOnionPerfTpfFile?
About the older code:
Could we just also make the change for #20514 (moved) now?
In addition, there are quite a few places where try-with resources
or some of Files' methods would prevent unclosed readers/writers.
Still this older code needs even more work, sigh. Maybe, after Amsterdam?
And, regarding the descriptor parsing, don't the checks inside this loop belong in descriptor/metrics-lib itself?
(that might be a new ticket, though)
I can take a look at the Persistence topic at some point.
Replying to [comment:23 karsten] and what's left from [comment:18]:
iwakeh, please find my updated task-21272 branch with some tweaks as discussed above.
Thanks for these changes!
Couldn't downloadFromOnionPerfHost do some of the filename checking before
calling downloadAndParseOnionPerfTpfFile?
Well, that wouldn't change functionality but would be a simple refactoring, right? What's the goal there? Make methods more testable or easier to read or something else? In any case, would you want to suggest new methods, and I'll move around code? Or do you want to work on a patch?
About the older code:
Could we just also make the change for #20514 (moved) now?
In addition, there are quite a few places where try-with resources
or some of Files' methods would prevent unclosed readers/writers.
Still this older code needs even more work, sigh. Maybe, after Amsterdam?
No need to spend time on this. Let's just remove the Torperf code in a few weeks when we're certain that the OnionPerfs can take over.
And, regarding the descriptor parsing, don't the checks inside this loop belong in descriptor/metrics-lib itself?
(that might be a new ticket, though)
Well, if we moved that code to metrics-lib, users wouldn't be able to read Torperf results from anything else than the originally named .tpf file. We usually avoid dependencies on file names if we can. This case is a bit different, because we're archiving .tpf files, and we should be certain that they contain what they say. I'd say that's specific to the CollecTor case though and cannot be generalized in metrics-lib.
Note that we could have picked a different approach by parsing descriptors from files and appending them to new files with file names taken from descriptor contents. The result might be the exact same output, or it could be a file with fewer descriptors or descriptors in a different order, etc. But I felt it's easier to verify .tpf file contents and either take them or leave them.
I can take a look at the Persistence topic at some point.
Sounds good! The last paragraph above might be relevant here. Hope this works with the persistence classes.
Replying to [comment:23 karsten] and what's left from [comment:18]:
iwakeh, please find my updated task-21272 branch with some tweaks as discussed above.
Thanks for these changes!
Couldn't downloadFromOnionPerfHost do some of the filename checking before
calling downloadAndParseOnionPerfTpfFile?
Well, that wouldn't change functionality but would be a simple refactoring, right? What's the goal there? Make methods more testable or easier to read or something else? In any case, would you want to suggest new methods, and I'll move around code? Or do you want to work on a patch?
No, I intended to avoid the superfluous parsing of the URL for each file and avoid download when the filename doesn't make sense.
No refactoring.
About the older code:
Could we just also make the change for #20514 (moved) now?
In addition, there are quite a few places where try-with resources
or some of Files' methods would prevent unclosed readers/writers.
Still this older code needs even more work, sigh. Maybe, after Amsterdam?
No need to spend time on this. Let's just remove the Torperf code in a few weeks when we're certain that the OnionPerfs can take over.
That's fine, too.
And, regarding the descriptor parsing, don't the checks inside this loop belong in descriptor/metrics-lib itself?
(that might be a new ticket, though)
Well, if we moved that code to metrics-lib, users wouldn't be able to read Torperf results from anything else than the originally named .tpf file. We usually avoid dependencies on file names if we can. This case is a bit different, because we're archiving .tpf files, and we should be certain that they contain what they say. I'd say that's specific to the CollecTor case though and cannot be generalized in metrics-lib.
There could be a boolean parameter for strict checking?
But, I didn't mean to hijack this ticket. I can just write that down on a list on my desk ;-)
Note that we could have picked a different approach by parsing descriptors from files and appending them to new files with file names taken from descriptor contents. The result might be the exact same output, or it could be a file with fewer descriptors or descriptors in a different order, etc. But I felt it's easier to verify .tpf file contents and either take them or leave them.
Hmm, ok, good to know.
I can take a look at the Persistence topic at some point.
Sounds good! The last paragraph above might be relevant here. Hope this works with the persistence classes.
Couldn't downloadFromOnionPerfHost do some of the filename checking before
calling downloadAndParseOnionPerfTpfFile?
Well, that wouldn't change functionality but would be a simple refactoring, right? What's the goal there? Make methods more testable or easier to read or something else? In any case, would you want to suggest new methods, and I'll move around code? Or do you want to work on a patch?
No, I intended to avoid the superfluous parsing of the URL for each file and avoid download when the filename doesn't make sense.
No refactoring.
Hmm, I'm a bit lost as to what you mean here. We only download if the filename makes sense. But this discussion has become very theoretical. Want to provide a patch? :D
Well, if we moved that code to metrics-lib, users wouldn't be able to read Torperf results from anything else than the originally named .tpf file. We usually avoid dependencies on file names if we can. This case is a bit different, because we're archiving .tpf files, and we should be certain that they contain what they say. I'd say that's specific to the CollecTor case though and cannot be generalized in metrics-lib.
There could be a boolean parameter for strict checking?
But, I didn't mean to hijack this ticket. I can just write that down on a list on my desk ;-)
Sounds like a new ticket. But I'm not yet sold on the idea. If it's something that only CollecTor needs as provider of .tpf files and none of the consumers, then we shouldn't make it part of metrics-lib. Of course, if there's a plausible use case for consumers, I'm happy to reconsider. But, new (metrics-lib) ticket?
hiro, I just looked a bit more at the data and found a few differences that we can work with. For example, OnionPerf randomly picks a file size (with different weights, though) every five minutes whereas Torperf had three separate schedules for each file size, which is okay. Also, OnionPerf alternates between direct download and download via onion address, which is different from Torperf, but which is okay, too.
However, there's one difference that we should fix: the direct downloads go to port 8080, not to port 80. The issue is that the set of exits permitting port 8080 may be different from the set permitting port 80, so this might impact measurement results. As far as I can see there are two ways to do this: either run OnionPerf on port 80 (which may have security implications), or configure a firewall rule to forward port 80 to 8080. I assume we'll want to do the latter. In that case we'll have to use OnionPerf's options --tgen-listen-port and --tgen-connect-port to tell it where to listen (8080) and where to connect (80).
Would you want to try making that change?
Hi Karsten sure I can do that.
In the meantime I have fixed the log rotation issue on op-nl. Hopefully our tpo instance should also be ready. I will let you know if everything is working correctly on https://onionperf.torproject.org
Hi Karsten,
I have set up a proxy forward from 8080 to 80 for op-nl. I'll check if everything is ok and eventually set it up also for op-us and op.tpo.
More soon.
hiro, I looked at the data and have two questions:
It seems that the .tpf files produced by OnionPerf don't show whether connections were made to port 80 (and forwarded by firewall rules) or were made to port 8080. But we should only use data from measurements to port 80, or we'll compare two different measurements. Can you delete older measurements from before you changed that?
The OnionPerf source name (papillare) and DNS name (onionperf) of https://onionperf.torproject.org/papillare-5242880-2017-03-12.tpf don't match, which is something we implicitly require on CollecTor (which you couldn't know). Ideally, we'd stick with the naming scheme of the other instances and rename both to op-de. That's probably easiest to understand for users who will see these source names on Tor Metrics and would rightfully ask what a papillare is. Can you change the name of both OnionPerf instance and in the DNS entry?
Hi Karsten,
Sure, I will delete all the old measurements starting from the day before yesterday. Regarding papillare, the host and DNS names were the bits that I was missing. Will have that changed. :)
-Silvia
Update:
I have changed the OnionPerf source name on papillare to onionperf. I was chatting with the TPA team and they would prefer not having to change the DNS to op-de. If we can stick with onionperf as service name and subdomain we are good, otherwise I'll bump them again.
Hopefully the ports are correct now.
Will finish deleting old files when we check the new results tomorrow, to compare.
iwakeh, patch looks good, pushed to my task-21272 branch. Thanks! Do you want to change anything else, or can I squash and merge?
So far, this seems ready for merge. Tests pass and the download works.
The persistence topic from comment:18 and before, and also the later sync-addition has a separate ticket #21759 (moved).
Let's move any further things related to CollecTor java code to #21760 (moved), which already lists the open final move to onionperf and the tests etc.
Thanks for looking again, and thanks for opening those new tickets. I just squashed and merged to master, together with a change log entry.
I noticed the empty file, too. It's yet unclear whether it will go away tonight. If not, we should handle that case less loudly.
I'll keep this ticket open until deployment is complete, with data being collected by CollecTor. But let's wait at least until tomorrow to see what the three op-xx instances produce.
I took another look at the data and compared it to Torperf data. Please find the attached graph. I have two remaining questions before adding the new data to CollecTor:
It looks like op-nl is quite a bit faster than the other OnionPerf and Torperf instances. One measurement only took 80 milliseconds from making the request until receiving the last byte. Is this realistic? Or does OnionPerf take any shortcuts that the other instances don't take?
The IP address of op-hk gets resolved to the Netherlands, though the measurements look very different from the op-nl host. Can we confirm that the host is really located in Hong Kong and that it's just the IP-to-country resolution that is behind?
If we have answers to these questions, I'd say let's make the data available.
I took another look at the data and compared it to Torperf data. Please find the attached graph. I have two remaining questions before adding the new data to CollecTor:
It looks like op-nl is quite a bit faster than the other OnionPerf and Torperf instances. One measurement only took 80 milliseconds from making the request until receiving the last byte. Is this realistic? Or does OnionPerf take any shortcuts that the other instances don't take?
Looking at the relays chosen for that path, it looks like the full path is:
Netherlands(client)-->France(guard)-->Germany(middle)-->France(exit)-->Netherlands(server)
This means the latency on each of those links is about 10 milliseconds. That seems feasible to me, given how close those countries are and that the machines are probably well connected to the backbone. The server can send all ~100 cells at once and it will likely travel through Tor in one piece, so I don't think that would cause any or much delay.
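A quick sanity check on those numbers: the path above has four links, and both the request and the response traverse each of them, so 80 ms total implies roughly 10 ms per link; a 50 KiB download in ~512-byte Tor cells comes to about 100 cells. A minimal sketch of the arithmetic:

```java
public class LatencySketch {

  /** Per-link latency if each of the given links is traversed twice
   * (request out, response back). */
  static double perLinkLatencyMs(double totalMs, int links) {
    return totalMs / (2 * links);
  }

  /** Number of ~cellBytes-sized Tor cells needed for a download. */
  static int cellCount(int fileBytes, int cellBytes) {
    return fileBytes / cellBytes;
  }

  public static void main(String[] args) {
    // client->guard->middle->exit->server = 4 links, 80 ms total.
    System.out.println(perLinkLatencyMs(80.0, 4) + " ms per link");  // 10.0
    // 50 KiB download in ~512-byte cells.
    System.out.println(cellCount(50 * 1024, 512) + " cells");        // 100
  }
}
```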
Since most of the Tor relay bandwidth is in Europe, more of the circuits of a client in Europe would stay on the continent compared to clients in other regions. I'm not surprised if an OnionPerf client in Europe trends faster than clients in other countries on average. I wonder if either, but not both, of the two Torperf nodes is in Europe, as we could potentially use the data they collect to test my hypothesis.
Also, there is more information about that circuit in particular and the download process in general in the json.xz files that are also dumped to the data directory. I added some notes to the elements there quite some time ago to help us understand what is available:
Finally, I have been running my OnionPerf instance since April 2016, and would like to contribute to Tor metrics. Is that possible? I think it would be great if you could import the data that I have been collecting if it's valid (someone should double check that it's valid first, and I'm happy to do so). If there is something about my setup that you would prefer that I change before accepting data from my instance, please let me know. Also, if you want all instances to be run by TPI, let me know that too so I can stop paying for the VPS.
Quick and only partial answer: I think it would be great to add your data to CollecTor and Tor Metrics. Here's one thing that we'd need to change though:
We currently require that the subdomain of the OnionPerf source ("onionperf" in your case) must match the OnionPerf source name ("phantomtrain" in your case). The reason is that we want to avoid a case where one instance produces data with the source name of another instance, which would mess up things quickly. You could probably work around this by adding yet another subdomain in front of your current domain. But let's first discuss which name you should pick.
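That matching rule could be checked mechanically along these lines (class and method names are illustrative, not CollecTor's actual code):

```java
public class SourceNameCheck {

  /** Returns true if the first DNS label of the URL's host matches the
   * OnionPerf source name, e.g. https://op-us.onionperf.torproject.net/
   * matches source "op-us". Illustrative string parsing only. */
  static boolean subdomainMatchesSource(String baseUrl, String sourceName) {
    String host = baseUrl.replaceFirst("^[a-z]+://", "");  // strip scheme
    host = host.split("/")[0];                             // strip any path
    return host.split("\\.")[0].equals(sourceName);
  }

  public static void main(String[] args) {
    System.out.println(subdomainMatchesSource(
        "https://op-us.onionperf.torproject.net/", "op-us"));   // true
    System.out.println(subdomainMatchesSource(
        "https://onionperf.torproject.org/", "phantomtrain"));  // false
  }
}
```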
Another requirement is that we pick source names in a way that Tor Metrics users are not confused by seeing them listed on https://metrics.torproject.org/torperf.html. It was not very smart to pick "torperf" as one name there, because all three are Torperf instances, and the names "moria" and "siv" don't have much meaning, either. That's why we picked "op-us" for "OnionPerf instance located in the U.S." etc. for the current instances. Obviously, that scheme does not scale either, with your instance being hosted in the U.S., too. But "onionperf" is just too generic. "phantomtrain" might work, because it's just a name. What do you think? Still time to do it right this time. :)
Thanks! (Will reply to the other parts later today.)
I took another look at the data and compared it to Torperf data. Please find the attached graph. I have two remaining questions before adding the new data to CollecTor:
It looks like op-nl is quite a bit faster than the other OnionPerf and Torperf instances. One measurement only took 80 milliseconds from making the request until receiving the last byte. Is this realistic? Or does OnionPerf take any shortcuts that the other instances don't take?
The IP address of op-hk gets resolved to the Netherlands, though the measurements look very different from the op-nl host. Can we confirm that the host is really located in Hong Kong and that it's just the IP-to-country resolution that is behind?
If we have answers to these questions, I'd say let's make the data available.
Hi Karsten, I can have a chat with the people at Greenhost and find out about your questions. I was wondering that too, actually.
robgjansen, thanks for the response on those very fast measurements. Sounds plausible to me. I'll add op-us and op-nl to CollecTor tomorrow afternoon. And I'll add op-hk as soon as we're confident where it's located, and your instance as soon as we have resolved the naming part. Thanks!
hiro, it would be great if you could chat with Greenhost people about the location. Just keep it running, and once we know better where it is we can add its data to CollecTor. Thanks!
For naming, it seems like the current scheme works fine for now. Mine would be 'op-ca' since it appears that mine is located in Canada:
https://geoiptool.com/en/?ip=167.114.171.3
But yeah, you may run into collisions if you plan to run more instances. Some other schemes I can think of are:
IP address: 'op-ca-167.114.171.3'
AS number: 'op-ca-as1234'
Simple increment: 'op-ca1', 'op-ca2', etc.
I like tying it to an IP or AS number, because it is more precise. The name is longer though, and if you move the instance, you would need to rotate the name (which may be a good thing anyway since it moved network locations).
I would have no problem updating the names in the data hosted by my instance before you import it.
I looked through my data a bit. I started out running my instance in a different network location (216.17.99.183) before moving it to a new VPS at 167.114.171.3. It appears that I was running the TGen server listening on 216.17.99.183:6666 before, and now it is listening on 167.114.171.3:8080. So, connections from Tor will request these ports.
I think you are trying to get your instances to run a TGen server at port 80, right? Is that a hard requirement? If so, my existing data is useless. I wrote a quick script (attached in case it's useful) to help us understand how much exit bandwidth by consensus weight allows exit to various ports:
167.114.171.3:80 is allowed by 83.1011269711 percent of exit bandwidth
167.114.171.3:443 is allowed by 85.7649427029 percent of exit bandwidth
167.114.171.3:6666 is allowed by 78.7485340684 percent of exit bandwidth
167.114.171.3:8080 is allowed by 77.0591360016 percent of exit bandwidth
167.114.171.3:8443 is allowed by 76.3254015495 percent of exit bandwidth
216.17.99.183:80 is allowed by 83.1011269711 percent of exit bandwidth
216.17.99.183:443 is allowed by 85.7649427029 percent of exit bandwidth
216.17.99.183:6666 is allowed by 78.7485340684 percent of exit bandwidth
216.17.99.183:8080 is allowed by 77.0591360016 percent of exit bandwidth
216.17.99.183:8443 is allowed by 76.3254015495 percent of exit bandwidth
I think that the difference between port 6666 or 8080 and 80 is minor.
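For reference, the percentages in the output above boil down to a bandwidth-weighted fraction of exits whose policy permits a given address:port; a minimal sketch of that computation (not Rob's actual script, which is attached to the ticket):

```java
public class ExitBandwidthShare {

  /** Percentage of total exit bandwidth whose policy allows the target.
   * bandwidths[i] is the consensus weight of exit i; allowed[i] says
   * whether its exit policy permits the target address:port. */
  static double allowedPercent(long[] bandwidths, boolean[] allowed) {
    long total = 0;
    long permitting = 0;
    for (int i = 0; i < bandwidths.length; i++) {
      total += bandwidths[i];
      if (allowed[i]) {
        permitting += bandwidths[i];
      }
    }
    return total == 0 ? 0.0 : 100.0 * permitting / total;
  }

  public static void main(String[] args) {
    // Two exits of equal weight, one permitting the port: 50 percent.
    System.out.println(allowedPercent(
        new long[] {50, 50}, new boolean[] {true, false}));  // 50.0
  }
}
```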
Let me know how to proceed. If you want to use my existing data, perhaps I should use a different name for each of the data sets that were gathered in different locations? It looks like 216.17.99.183 is in California, so that would be another 'op-us-XXX' name.