buffers.c calls monotime_coarse_get_stamp() to store a timestamp per chunk, marking when the chunk was allocated. This is odd layering for a simple byte container.
nickm previously pointed out the shakiness of the layering here.
The sole consumer of this timestamp is circuits_handle_oom(), which uses it to sort a list to prioritize which circuits and connections to close first. (The circuit_t struct field .age_tmp also only exists for the OOM handler's benefit.)
The sole place where producing this timestamp matters (i.e., for the OOM handler) is in mainloop/connection.c, when writing to conn->outbuf in connection_write_to_buf_impl_().
All other users of buf_t have no interest in creating or reading this timestamp. Should it be replaced with connection_t-specific code, or removed entirely with no replacement?
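For orientation, here is a condensed paraphrase of that arrangement; it is not the literal buffers.c source, and the struct layout is simplified:

```c
#include <stdint.h>
#include <stddef.h>

/* Condensed paraphrase of the current arrangement; NOT the literal
 * buffers.c source.  Assume monotime_coarse_get_stamp() is Tor's cheap
 * coarse clock returning a uint32_t stamp. */
uint32_t monotime_coarse_get_stamp(void);

typedef struct chunk_t {
  struct chunk_t *next;
  uint32_t inserted_time;   /* stamp taken when the chunk was allocated */
  /* ... chunk data follows ... */
} chunk_t;

/* Every chunk stamps itself at allocation time, whether or not any caller
 * will ever look at the stamp. */
static void
chunk_stamp(chunk_t *ch)
{
  ch->inserted_time = monotime_coarse_get_stamp();
}

/* The only reader is the OOM handler: it asks each connection's outbuf for
 * the age of its head (oldest) chunk and sorts circuits/connections on it. */
static uint32_t
head_chunk_age(const chunk_t *head, uint32_t now_stamp)
{
  return head ? now_stamp - head->inserted_time : 0;
}
```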
Replace the per-chunk timestamp with a single timestamp stored in struct connection_t, updated to record the last point in time at which outbuf was empty? The age-calculation logic would move there as well.
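A minimal sketch of that idea, with field and helper names invented for illustration (none of them exist in Tor today):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sketch: the field and helper names below are invented and
 * do not exist in Tor.  Assume monotime_coarse_get_stamp() returns a
 * coarse uint32_t stamp. */
uint32_t monotime_coarse_get_stamp(void);

typedef struct connection_t {
  /* ... existing fields elided ... */
  uint32_t outbuf_nonempty_since;  /* coarse stamp; 0 means "outbuf is empty" */
} connection_t;

/* Call after appending to conn->outbuf, e.g. from connection_write_to_buf_impl_(). */
static void
note_outbuf_write(connection_t *conn, size_t outbuf_len_before_write)
{
  if (outbuf_len_before_write == 0)
    conn->outbuf_nonempty_since = monotime_coarse_get_stamp();
}

/* Call after flushing conn->outbuf to the network. */
static void
note_outbuf_flush(connection_t *conn, size_t outbuf_len_after_flush)
{
  if (outbuf_len_after_flush == 0)
    conn->outbuf_nonempty_since = 0;
}

/* OOM handler: approximate age of the oldest unflushed data, in stamp units. */
static uint32_t
conn_outbuf_age(const connection_t *conn, uint32_t now_stamp)
{
  return conn->outbuf_nonempty_since ?
         now_stamp - conn->outbuf_nonempty_since : 0;
}
```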
Or, if not that: keep a uint32_t field per chunk, but make it an opaque tag that is normally kept at zero. Callers that care about timestamps, like connection.c, can call buf_tag_new_chunks(buf_t, uint32_t) after writing to the buffer, which assigns that uint32_t to any chunk whose tag is still zero (i.e., any newly created chunks). The OOM handler can then call buf_get_head_tag() to retrieve the number and compute the age from it.
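A rough sketch of what those two calls might look like, again with condensed stand-in layouts rather than the real buffers.c definitions:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch of the proposed tag API.  The chunk_t/buf_t layouts
 * here are condensed stand-ins, not the real definitions in buffers.c. */
typedef struct chunk_t {
  struct chunk_t *next;
  uint32_t tag;             /* opaque tag; 0 means "not yet tagged" */
  /* ... chunk data elided ... */
} chunk_t;

typedef struct buf_t {
  chunk_t *head;            /* oldest chunk */
  chunk_t *tail;            /* newest chunk */
} buf_t;

/* Assign <b>tag</b> to every chunk that is still untagged.  Note that this
 * has to walk the whole (singly linked) list to reach the untagged chunks
 * at the tail. */
void
buf_tag_new_chunks(buf_t *buf, uint32_t tag)
{
  for (chunk_t *ch = buf->head; ch; ch = ch->next) {
    if (ch->tag == 0)
      ch->tag = tag;
  }
}

/* Return the tag on the oldest (head) chunk, or 0 if the buffer is empty;
 * the OOM handler turns this back into an age. */
uint32_t
buf_get_head_tag(const buf_t *buf)
{
  return buf->head ? buf->head->tag : 0;
}
```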
Technically, buf_get_oldest_chunk_timestamp() is a misnomer anyway, since it computes the age from the timestamp and returns that age in seconds, not the timestamp itself.
> Replace the per-chunk timestamp with a single timestamp stored in struct connection_t, updated to record the last point in time at which outbuf was empty? The age-calculation logic would move there as well.
I would worry about this one -- the authors found that the methodology in their paper was effective against the attack they found, but some other methodologies were not. The algorithm you suggest here would change the behavior -- instead of killing the connections that had been stalled for the longest, it would kill connections that had been making steady progress if they had never flushed fully.
> Or, if not that: keep a uint32_t field per chunk, but make it an opaque tag that is normally kept at zero. Callers that care about timestamps, like connection.c, can call buf_tag_new_chunks(buf_t, uint32_t) after writing to the buffer, which assigns that uint32_t to any chunk whose tag is still zero (i.e., any newly created chunks). The OOM handler can then call buf_get_head_tag() to retrieve the number and compute the age from it.
Hm. So, "tag_new_chunks" sounds like it would require us to be able to walk the buffer's chunks in both directions, when currently it's only a singly-linked queue. Either that, or it would be O(N) to reach the end.
Instead, we could have something where we call "buf_set_tag_for_new_chunks" preemptively, and then have the buffer backend copy that tag into any new chunks it allocates.
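Something like this, with invented names and condensed layouts:

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the "set the tag up front" variant; names and layouts are
 * invented stand-ins, not existing buffers.c code. */
typedef struct chunk_t {
  struct chunk_t *next;
  uint32_t tag;
} chunk_t;

typedef struct buf_t {
  chunk_t *head;
  chunk_t *tail;
  uint32_t default_tag;     /* copied into every chunk allocated from now on */
} buf_t;

/* Proposed API: remember <b>tag</b> for chunks allocated later. */
void
buf_set_tag_for_new_chunks(buf_t *buf, uint32_t tag)
{
  buf->default_tag = tag;
}

/* Inside the buffer backend's chunk allocator: */
static chunk_t *
chunk_new_for_buf(buf_t *buf)
{
  chunk_t *ch = calloc(1, sizeof(*ch));
  if (!ch)
    return NULL;
  ch->tag = buf->default_tag;   /* no clock call here; the caller chose the tag */
  return ch;
}
```

A caller like connection.c would then call buf_set_tag_for_new_chunks(conn->outbuf, monotime_coarse_get_stamp()) around each write, so only connections that care about ages pay for the clock call, though they pay it on every write whether or not a new chunk is actually allocated.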
Or (maybe) better still, there could be some opaque "get_tag_for_new_chunk()" callback that only got called as needed. That way, we wouldn't be doing extra timestamp calls, and the users of the buffer API wouldn't need to worry about whether they had called buf_set_tag_... before or after all their possible read operations.
(Of course, in Tor, all the get_tag_for_new_chunk() callback would do is what the code currently does now, which makes me think that we might be just introducing abstraction for abstraction's sake)
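For concreteness, a sketch of that callback variant (invented names and layouts again):

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the callback variant; names and layouts are invented. */
uint32_t monotime_coarse_get_stamp(void);

typedef uint32_t (*chunk_tag_fn_t)(void *arg);

typedef struct chunk_t {
  struct chunk_t *next;
  uint32_t tag;
} chunk_t;

typedef struct buf_t {
  chunk_t *head;
  chunk_t *tail;
  chunk_tag_fn_t get_tag_for_new_chunk;  /* may be NULL */
  void *tag_fn_arg;
} buf_t;

/* The backend only invokes the callback when it actually allocates a chunk,
 * so no timestamp is taken unless a new chunk is really needed. */
static chunk_t *
chunk_new_for_buf(buf_t *buf)
{
  chunk_t *ch = calloc(1, sizeof(*ch));
  if (ch && buf->get_tag_for_new_chunk)
    ch->tag = buf->get_tag_for_new_chunk(buf->tag_fn_arg);
  return ch;
}

/* In Tor, the connection code's callback would just do what buffers.c does
 * today: */
static uint32_t
connection_chunk_tag_cb(void *arg)
{
  (void)arg;
  return monotime_coarse_get_stamp();
}
```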
> instead of killing the connections that had been stalled for the longest, it would kill connections that had been making steady progress if they had never flushed fully.
This is true. I was kind of hoping that the set of connections that have not stalled but also have never fully flushed would be minuscule to nonexistent in the real world. Some experimental logging might be able to demonstrate whether that's true. Or, instead of recording a timestamp when the buffer rises from zero bytes to nonzero, record when it rises above [extremely low threshold here], low enough not to be practically scalable with a parallel sniper attack (maybe sizeof(struct connection_t)).
Edit: Also, the current logic already has a similar flaw, in that it can theoretically kill some connections it shouldn't, since it only records the time at which a 4096-byte chunk of memory was allocated. The bytes that currently remain in that chunk might have been added extremely recently, if the connection has been making steady progress through the past ~4000 bytes without ever fully emptying.
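For what it's worth, a sketch of the thresholded variant; the threshold value and all names are placeholders, and "reset when we drop back below the threshold" is only one possible reading:

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the thresholded variant.  The names and the threshold value
 * are placeholders, not existing Tor code. */
uint32_t monotime_coarse_get_stamp(void);

#define OUTBUF_AGE_THRESHOLD 256   /* stand-in for "roughly sizeof(connection_t)" */

typedef struct outbuf_age_state_t {
  uint32_t above_threshold_since;  /* 0 == currently below the threshold */
} outbuf_age_state_t;

/* Call whenever the outbuf length changes (after a write or a flush). */
static void
outbuf_age_note_len(outbuf_age_state_t *st, size_t outbuf_len)
{
  if (outbuf_len > OUTBUF_AGE_THRESHOLD) {
    if (st->above_threshold_since == 0)
      st->above_threshold_since = monotime_coarse_get_stamp();
  } else {
    st->above_threshold_since = 0;   /* dropped back below: reset */
  }
}
```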
Or to more conservatively match current behavior with less abstraction burden on buf_t, there is a third option:
Store a smartlist of uintptr_t alongside the buf_t, add an API for buf_get_num_chunks(), and add/remove timestamps from the smartlist as the number of chunks increases/decreases.
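A sketch of that bookkeeping, assuming Tor's existing smartlist calls are in scope and treating buf_get_num_chunks() as the proposed new accessor:

```c
/* Sketch of option three.  buf_get_num_chunks() is the proposed (not yet
 * existing) accessor; smartlist_add/len/get/del_keeporder are Tor's usual
 * smartlist calls, assumed in scope via the containers headers, as are
 * buf_t and uintptr_t (stdint.h). */

/* Keep <b>stamps</b> in sync with the outbuf's chunk count; call after
 * every write to and flush of the outbuf. */
static void
sync_chunk_stamps(smartlist_t *stamps, const buf_t *outbuf, uint32_t now_stamp)
{
  int nchunks = buf_get_num_chunks(outbuf);   /* proposed API */

  /* New chunks were appended: record one stamp per new chunk. */
  while (smartlist_len(stamps) < nchunks)
    smartlist_add(stamps, (void *)(uintptr_t)now_stamp);

  /* Chunks were freed from the head: drop their stamps, oldest first. */
  while (smartlist_len(stamps) > nchunks)
    smartlist_del_keeporder(stamps, 0);
}

/* OOM handler: stamp on the oldest surviving chunk, or 0 if the buffer is
 * empty. */
static uint32_t
oldest_chunk_stamp(smartlist_t *stamps)
{
  return smartlist_len(stamps) ?
         (uint32_t)(uintptr_t)smartlist_get(stamps, 0) : 0;
}
```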