The new Tor compression code in 0.3.1 assumes that all the compression functions are bound at runtime.
For example, tor_lzma_method_supported() returns 1 when HAVE_LZMA is defined, but that doesn't mean that lzma_version_string() has actually been bound to a non-NULL address in the binary.
This is more likely to happen when tor is used as a shared library rather than linked as an executable (shadow, iOS), and when using weak, lazy symbol binding.
This might not be an issue we can solve unless we check for all the symbols being NULL at runtime. Maybe the responsibility for proper linking is on people who are compiling tor with weak, lazy symbol binding.
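To make the failure mode concrete, here is a minimal sketch (not the actual tor code; the helper name is made up) of the kind of runtime guard being discussed, assuming liblzma's lzma_version_string() ends up weakly, lazily bound:

```c
/* Minimal sketch, not the actual tor code: with weak, lazy binding,
 * HAVE_LZMA only tells us the headers were present at compile time.
 * Whether the function was actually bound has to be checked at runtime
 * by looking at the symbol's address. */
#include <stddef.h>
#ifdef HAVE_LZMA
#include <lzma.h>
#endif

static int
lzma_really_available(void)
{
#ifdef HAVE_LZMA
  /* An unbound weak symbol has address NULL; a strongly linked one never does. */
  return &lzma_version_string != NULL;
#else
  return 0;
#endif
}
```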
This bug was discovered by Rob Jansen when running shadow.
I think this is going to be an issue for all external dependencies that we use via pkg-config. The old TOR_SEARCH_LIBRARY() macro is probably unaffected, since it tries to build a small test file.
I will try to do the same as TOR_SEARCH_LIBRARY() does when a library is found: ensure that we can link against the .so and that the headers are present.
If this is a hard issue to fix, we might just say "don't do that then" -- we don't build Tor with weak/lazy symbol binding ourselves, and while we don't tell people not to do it, we could just say that it is incumbent on people who change our build system to keep it working with their changes.
Thoughts?
Documenting this issue seems like a sensible solution, but we should check if this affects iOS and Tor.framework before deciding. If it does affect iOS, and we want to support iOS as a first-class platform, maybe linking would be a good check to add.
I'd also phrase it as "pass unusual [linker] options to our build system", as "change our build system" could be unclear.
You mentioned during the mobile session in Montreal that you were going to look into adding support for LZMA and Zstandard for OnionBrowser.
Do you think this is going to be an issue for you on iOS?
Actually, my current guess is that this isn't an issue for iOS at all because the dependencies (including tor) get built as static libraries and then they're all linked into the app's single executable anyway.
I'm hoping to take a stab at building with liblzma (in Tor.framework / Onion Browser) over the next week or so. So maybe I'll have more specific feedback for y'all soon.
Note that the patch adds the NULL function checks for the zlib backend: we might want to consider adding a tor_assert(0) in the pathological case there, since zlib is still required to be able to use Tor.
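For illustration, a rough sketch of that suggestion (the helper name is made up, and this is an assumption about what the check could look like, not a quote from the patch; tor_assert() is Tor's assertion macro from its utility code):

```c
/* Rough sketch, not the actual patch: zlib is mandatory, so an unbound
 * zlib symbol means the build itself is broken and we should assert
 * rather than limp along. */
#include <zlib.h>

static const char *
get_zlib_version_or_die(void)
{
  /* This address check is only meaningful under weak, lazy binding;
   * with normal strong linking it can never be NULL. */
  if (&zlibVersion == NULL) {
    /* tor_assert() comes from Tor's utility/assert macros. */
    tor_assert(0);
  }
  return zlibVersion();
}
```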
I'm marking this needs_review, but these patches do not fix everything Teor mentioned above. We still need to add checks to configure and document the weak linking issue.
One thing to keep in mind: I'm not fully sure the patch I've proposed works under the given conditions. I assume that doing &function yields NULL if the function is unavailable.
Looks okay at first glance, but: do we really want to go down this road? Will we wind up doing this for every shared library we use in Tor, including libevent and openssl?
Also, zlib is mandatory: we can't keep going if zlib isn't going to work.
I don't think doing this is a particularly good solution. I added the changes to the zlib code in case we move away from Tor's TOR_SEARCH_LIBRARY() to something where this issue isn't detected.
The other option is to add to configure the same checks that TOR_SEARCH_LIBRARY() performs, so that our zstd/lzma detection works in the same way.
This is of course still an issue for OpenSSL, Libevent, and other external dependencies we may have.
Would you prefer a configure check to this or should we do something entirely different?
I think Nick is right, checking for NULL functions could rapidly get out of control.
(I don't think we should check for mandatory libraries. But then I asked myself: why bother to check at all?)
I suggest we check that the zstd and lzma library functions are non-NULL at configure time, and fail if they are NULL.
That way, builders get instant feedback if their linker flags or install are broken.
People who design custom build processes will have to make sure they work themselves.
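For what it's worth, the test program such a configure-time check could try to compile and link might look roughly like this (a sketch only; the autoconf plumbing around it, e.g. an AC_LINK_IFELSE wrapper, is omitted and assumed):

```c
/* Sketch of a configure-time link test: if liblzma or libzstd is missing,
 * or the linker flags are broken, this fails to link and configure can
 * error out instead of producing a binary with unbound symbols. */
#include <stdio.h>
#include <lzma.h>
#include <zstd.h>

int
main(void)
{
  /* Referencing real symbols forces the linker to resolve them. */
  printf("lzma %s, zstd %u\n", lzma_version_string(), ZSTD_versionNumber());
  return 0;
}
```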
How exactly does one "enable weak symbols"? As far as I can tell, on both Mac and Linux, this process involves inserting __attribute__((weak)) or #pragma weak in the actual source files that reference the symbol in question. Even in the case of shared libraries, AIUI, if executable a NEEDS library b.so, and a has a weak symbol f with no default, and b.so has an undefined symbol f, then a will fail to start.
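For reference, this is roughly what that weak-reference pattern looks like in source with GCC/Clang on ELF (a standalone sketch, not tor code): if nothing ends up defining the symbol, the weak reference stays unbound and its address compares equal to NULL at runtime.

```c
/* Standalone sketch of a weak reference: the declaration itself carries
 * __attribute__((weak)), so the link can succeed even if nothing defines
 * lzma_version_string, and the address can be tested at runtime. */
#include <stdio.h>

extern const char *lzma_version_string(void) __attribute__((weak));

int
main(void)
{
  if (&lzma_version_string != NULL)
    printf("liblzma bound: %s\n", lzma_version_string());
  else
    printf("lzma_version_string is not bound\n");
  return 0;
}
```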