Bandwidth scanner: Consensus weight far lower than advertised bandwidth

I have recently re-keyed my two middle relays: #1 and #2.

I have CenturyLink Fiber hosting the two relays and am located in Seattle, WA.

I have noticed that since re-keying, my advertised bandwidth and consensus weight are far higher than they were pre-congestion-control (thank you!), presumably because I am no longer stuck in a ‘slow partition’. However, the consensus weight is still only about one third of the advertised bandwidth.

The discrepancy has improved over the past few days, but it still exists. Will the discrepancy eventually go away as I ramp up, or is it here to stay due to my location in Seattle, versus, say, an east-coast city like NYC or Boston (or Europe)?

By the way, if you remember my “latency spikes” post: I “fixed” it. My ISP’s ONT/modem imposed a 16,384 TCP connection limit. I had to clone it to work around the limit, but now it’s perfect.

Neel Chauhan via Tor Project Forum:

I have recently re-keyed my two middle relays: #1 and #2.

I have CenturyLink Fiber hosting the two relays and am located in Seattle, WA.

I have noticed that since re-keying, my advertised bandwidth and consensus weight are far higher than they were pre-congestion-control (thank you!), presumably because I am no longer stuck in a ‘slow partition’. However, the consensus weight is still only about one third of the advertised bandwidth.

The discrepancy has improved over the past few days, but it still exists. Will the discrepancy eventually go away as I ramp up, or is it here to stay due to my location in Seattle, versus, say, an east-coast city like NYC or Boston (or Europe)?

I don’t think this discrepancy should stay. And I have my doubts that it would go away if you moved your servers to the east coast because, interestingly, the other relays at the same ISP do not seem to suffer from what you are seeing. At least that’s what I got from taking a quick look at the available data. We are going to take a closer look, though, to rule out any sbws bugs.

Do you firewall off connections to your relays? If so, that might be a reason for the trouble you are seeing, though I’m not sure.


Another reason might be that some recent measurements were not published by the dirauths due to the ongoing DoS attack, but I don’t know.

If this issue persists, we can investigate more.

Sorry for deleting a post.

I realized the real issue is FreeBSD’s default TCP stack configuration.

I swapped my relay to openSUSE “Tumbleweed” for a day, and the consensus weight increased dramatically. That led me to conclude that FreeBSD’s default TCP stack isn’t well optimized for a Tor relay, even though the firewall stayed on OPNsense, which is based on FreeBSD.

To bandwidth-optimize a FreeBSD relay, you need to enable the BBR congestion control algorithm.

I moved my relay back to FreeBSD, and the high bandwidth-scanner values stayed once I was running a BBR-enabled kernel.
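For reference, enabling BBR on FreeBSD looks roughly like the following config fragment. This is a sketch based on my setup; it assumes a kernel built with `makeoptions WITH_EXTRA_TCP_STACKS=1` and `options TCPHPTS`, which the stock GENERIC kernel may not include, so a custom kernel build may be needed first:

```
# /boot/loader.conf -- load the BBR TCP stack module at boot
tcp_bbr_load="YES"

# /etc/sysctl.conf -- make BBR the default TCP stack for new connections
net.inet.tcp.functions_default=bbr
```

You can check which TCP stacks are available, and which one is the default, with `sysctl net.inet.tcp.functions_available`.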


Hi @neel, have you tried upgrading to FreeBSD 13.1? I’m asking because after upgrading to 13.1, my relays’ consensus weight increased considerably. Probably related:


I am running 13.1. In fact, I was running it even before I switched to openSUSE for a day.

Much of the issue could be that the default FreeBSD configuration doesn’t cope well with relays that have higher latency to other relays, such as those on the US west coast, hence BBR is needed. When a west-coast relay is measured, the traffic likely crosses between Europe and the west coast of the US, and FreeBSD’s defaults aren’t tuned to handle that much latency well, despite Tor’s congestion control.
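The back-of-the-envelope math behind this is the bandwidth-delay product: with a fixed send window, TCP throughput is capped at roughly window size divided by round-trip time. The numbers below are assumptions for illustration, not measurements of any real relay:

```shell
# Bandwidth-delay-product sketch: throughput ceiling = window / RTT.
buf_bytes=$((2 * 1024 * 1024))   # assumed socket buffer cap: 2 MiB
rtt_ms=150                       # assumed Seattle <-> Europe round-trip time
# max throughput in Mbit/s = (buf_bytes * 8 bits) / (RTT in seconds) / 1e6
echo $(( buf_bytes * 8 * 1000 / rtt_ms / 1000000 ))   # prints 111
```

Halving the RTT doubles that ceiling, which is why a relay closer to its measurement peers reports higher bandwidth; BBR (together with adequate buffer sizing) helps keep the pipe full at long RTTs without moving the relay.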

Your FreeBSD relay seems to be in Europe, which is closer to other relays, so when the bandwidth is measured you get a fairer report, since the other relay being used is close by and less affected by latency.

When I hosted relays in Europe, I had high consensus weight values because the other relay used was close by. These days I happen to stick to the US, more specifically the west coast.