A Tor-based Validator Anonymity Approach (incl. comparison to Dandelion)

@Mikerah Thank you for the reply.

> I still think 500 ms is quite a lot

How much latency overhead would you consider as tolerable?

Its added latency is comparable to that of other solutions such as Dandelion.
For that latency cost, the Tor solution offers good anonymity properties and is easy to deploy.

I assume the current expected latency is lower than 500 ms.
I’d proceed with further latency testing and a test implementation if there are no apparent issues making the Tor solution infeasible.
I am waiting for more comments :).
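As a starting point for the latency testing mentioned above, here is a minimal sketch that times a plain TCP handshake. The Tor SOCKS address (127.0.0.1:9050) is an assumption (Tor's default), not part of the proposal; against the SOCKS port this only measures the hop to the local Tor client, and timing a full circuit would additionally require a SOCKS5 handshake.

```python
import socket
import time

def tcp_connect_latency(host, port, timeout=5.0):
    """Time a TCP handshake to (host, port); returns seconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

# Demo against a throwaway local listener so the sketch runs anywhere,
# even without a Tor client installed:
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

local_lat = tcp_connect_latency("127.0.0.1", port)
print(f"local TCP handshake: {local_lat * 1000:.2f} ms")

# With a Tor client running, this times the hop to its SOCKS port
# (127.0.0.1:9050 is Tor's default SocksPort, assumed here):
tor_lat = tcp_connect_latency("127.0.0.1", 9050, timeout=1.0)
print("Tor SOCKS port:", "unreachable" if tor_lat is None else f"{tor_lat * 1000:.2f} ms")

srv.close()
```

Repeating such measurements end-to-end through an actual circuit would give a concrete picture of how the overhead compares to the 500 ms estimate.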

> For most validators, the privacy gains at this level don’t outweigh what they earn from participating in systems like MEV-boost, given how cutthroat the environment is for signing/attesting blocks and sending them across the network as quickly as possible.

Agreed. But this Tor extension would not have to be used by all validators.
It would be optional, and would make linking a validator’s network parameters significantly harder.
Also, could proposer/builder separation mitigate this pressure for validators?

> Another concern with using Tor is the fact that Tor traffic is blocked in a lot of places. Setting aside the usual culprits, e.g. rogue governments, many relatively innocent actors (not sure if this is a good term) block Tor traffic for a variety of reasons. For example, universities might block Tor traffic.

Good point. But the Tor approach would be optional. If it is feasible at the validator’s location, it can be used.
I agree, we need further research to get to solutions that cover these validators as well.
But for now, Tor would be an easy-to-deploy solution.

It would be interesting to investigate what percentage of validators could not feasibly use Tor.

> Further, there are metadata concerns with using Tor as well that people here don’t take into account. If you are the only entity within a specific area using Tor, then your traffic sticks out like a sore thumb, effectively getting us back to square one.

I see guard fingerprinting as well as exit fingerprinting as potential related attacks (see OP).

Simply knowing that someone uses Tor within a given network segment enables censorship, but not yet correlation or fingerprinting (it helps enable such attacks, but is not sufficient on its own).

Imo, this does not bring us back to square one.
Yes, the Tor approach has weaknesses. But, imo, none of them make it infeasible or not worth the effort of rolling it out in a testnet;
it is a significant improvement over the status quo that is worth further investigation and/or testing.

> I have some more thoughts that I should write up as I’ve been thinking about this problem for a few years now.

That would be very helpful :).