The main bottleneck when running a Tor relay seems to be little-t-tor’s CPU performance. Thanks to AMD, 8 or more threads are now mainstream, and in the future we’ll have server CPUs with 128*2 threads. So it becomes natural to ask: when will Tor fully exploit all CPU threads? Will that work only start after the Rust rewrite (arti) matures enough?
The C implementation of Tor is unlikely to ever scale linearly with the increasing number of cores in current and future generations of CPUs. We don’t plan on spending much energy on this issue in the C implementation of Tor.
Arti, our upcoming Tor implementation in Rust, will make much better use of multiple cores/threads and is designed with modern computers in mind from the start.
As @ahf said, this is planned to come with Arti in a few years’ time.
Until then, you can pretty much max out a symmetrical 1 Gbit/s line with two modern CPU cores (Ryzen 3000/5000 series) and two separate tor instances running under one IPv4 address. While tor is not good at using multiple threads, the hardware requirements in general are fairly low. It’ll even run quite well on a Raspberry Pi nowadays. Therefore, I can’t really agree with this statement:
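For illustration, a two-instance setup like the one described above might be sketched as two separate torrc files, one per tor process. The directives used here (ORPort, DataDirectory, Nickname, SocksPort, MyFamily) are real torrc options, but the paths, ports, and nicknames are hypothetical examples:

```
# /etc/tor/instance1/torrc  -- first relay instance (hypothetical paths/names)
DataDirectory /var/lib/tor-instance1
Nickname MyRelay1
ORPort 9001
SocksPort 0          # relay only, no local client port
MyFamily <fingerprint of MyRelay2>

# /etc/tor/instance2/torrc  -- second relay instance
DataDirectory /var/lib/tor-instance2
Nickname MyRelay2
ORPort 9002
SocksPort 0
MyFamily <fingerprint of MyRelay1>
```

Each instance would then be started with its own config, e.g. `tor -f /etc/tor/instance1/torrc`. Note that relays run by the same operator are expected to declare each other in MyFamily; distributions may also ship their own multi-instance tooling, so check your platform’s conventions.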
Can you elaborate? How and why would you come to that assumption?
I currently run 100+ relays under one of the 5 largest families in the tor network, and in not one single case (running under Linux) am I CPU-limited in any way. I am limited by the network and/or by the abundance of bandwidth available for tor in general. For quite some time now, the tor network has had a large surplus of available bandwidth, which is of course a good thing.
I do think there is value in letting operators consolidate the many relay processes they have to run today, for performance reasons, on a single machine into one. Each process needs to have connections to/from the majority of the network, and we could reduce some of those connections if operators only needed to run a single instance of Tor on their gigantic machines rather than several, as they do today.