Guard relay in Kubernetes

Hello, I’m trying to run a guard relay inside my Kubernetes cluster. The problem is that tor is rejecting every request during the ORPort reachability test.

The problem is not the reachability of the container itself: I have tried replacing it with an echo server and it works fine on the same port.

Is there any specific configuration needed regarding addresses or ports, or something abstracted away by the container that I need to specify in the torrc?

Thank you for your help!

Nine times out of ten, a failed reachability test for a relay on an internal network that isn’t directly facing the internet is a firewall rule issue; sometimes it can be an outbound bind address issue.

Are you using auto IPv4, or are you specifying a particular address? If auto, try specifying your relay’s inet address. If it’s on an internal subnet that has to pass through a router before reaching the internet, try setting OutboundBindAddress to the local address (see the sketch below), and on your router, forward connections to your ORPort.
Is IPv6 enabled?
Have you tried using another, non-default ORPort number, like 14300 or something, instead of just 9001?
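To illustrate the bind address point, something along these lines in the torrc (the addresses here are just placeholders to show the idea, adjust them to your setup):

Address <your_public_ip>
OutboundBindAddress 192.168.1.50
ORPort 9001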

Also, if you don’t mind, please post the contents of your torrc file, excluding any private info.

If you can, enable debug logging on your tor client. If you have RunAsDaemon 1 set, comment out that line while we debug, and stop the tor service (if it is running as a service) before enabling logging.
Once logging is enabled, just use the command tor to start it from the CLI, and the printed output should give us a clue as to why it’s acting up. It’s much easier to turn tor on and off that way, and it’s nice that it prints debug info to the terminal, which can be digested session by session instead of as one monolithic log file.

Once this is done, just run tor and copy-paste the output here. Feel free to remove any sensitive info you don’t want the internet to see.
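Roughly, a minimal sketch of what that looks like, assuming the torrc lives at /etc/tor/torrc:

# in the torrc: raise verbosity and keep the output on stdout
Log info stdout
# RunAsDaemon 1   <- keep this commented out while debugging

and then start tor in the foreground with:

tor -f /etc/tor/torrc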

Thank you for your answer!

To begin with, here is the torrc I’m using, embedded in a k8s configmap:

---
apiVersion: v1
kind: ConfigMap
data:
  torrc: |-
    Log notice stdout

    SOCKSPort 0
    # Address 10.233.66.66
    ORPort 0.0.0.0:9001 IPv4Only
    DirPort 9030
    User root
    DataDirectory /var/lib/tor
    Nickname prrlvrRelay
    ExitRelay 0
metadata:
  name: torrc

For the address, should I put the address of the k8s node running the container (i.e. the IP I’m NAT-forwarding to) or the cluster’s internal IP?

I am not running tor as a daemon; it is simply launched by the container with the command tor -f /etc/tor/torrc (the configmap is mounted as /etc/tor/torrc).

I have tried with other ports, but I end up getting the same results.
For now, I am only setting up IPv4.

The output (tell me if we need to set another log level):

Aug 15 17:52:22.598 [notice] Tor 0.4.7.8 running on Linux with Libevent 2.1.12-stable, OpenSSL 1.1.1q, Zlib 1.2.12, Liblzma 5.2.5, Libzstd 1.5.2 and Unknown N/A as libc.
Aug 15 17:52:22.598 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://support.torproject.org/faq/staying-anonymous/
Aug 15 17:52:22.598 [notice] Read configuration file "/etc/tor/torrc".
Aug 15 17:52:22.599 [notice] Based on detected system memory, MaxMemInQueues is set to 3984 MB. You can override this by setting MaxMemInQueues by hand.
Aug 15 17:52:22.600 [notice] Opening OR listener on 0.0.0.0:9001
Aug 15 17:52:22.600 [notice] Opened OR listener connection (ready) on 0.0.0.0:9001
Aug 15 17:52:22.600 [notice] Opening Directory listener on 0.0.0.0:9030
Aug 15 17:52:22.600 [notice] Opened Directory listener connection (ready) on 0.0.0.0:9030
Aug 15 17:52:23.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Aug 15 17:52:23.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Aug 15 17:52:23.000 [notice] Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Aug 15 17:52:23.000 [warn] You are running Tor as root. You don't need to, and you probably shouldn't.
Aug 15 17:52:23.000 [notice] Your Tor server's identity key  fingerprint is 'prrlvrRelay 4C9EDFDD2172F01375206F61816BD392C55A6FCA'
Aug 15 17:52:23.000 [notice] Your Tor server's identity key ed25519 fingerprint is 'prrlvrRelay xq07kstMT71VpV9VhSXWcvI2aZDsHLgTgNe8GhZcubU'
Aug 15 17:52:23.000 [notice] Bootstrapped 0% (starting): Starting
Aug 15 17:52:23.000 [notice] Starting with guard context "default"
Aug 15 17:52:30.000 [notice] Bootstrapped 5% (conn): Connecting to a relay
Aug 15 17:52:30.000 [notice] Unable to find IPv4 address for ORPort 9001. You might want to specify IPv6Only to it or set an explicit address or set Address.
Aug 15 17:52:30.000 [notice] Bootstrapped 10% (conn_done): Connected to a relay
Aug 15 17:52:30.000 [notice] Bootstrapped 14% (handshake): Handshaking with a relay
Aug 15 17:52:30.000 [notice] Bootstrapped 15% (handshake_done): Handshake with a relay done
Aug 15 17:52:30.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
Aug 15 17:52:30.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
Aug 15 17:52:30.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
Aug 15 17:52:31.000 [notice] External address seen and suggested by a directory authority: 77.207.207.191
Aug 15 17:52:31.000 [notice] Bootstrapped 100% (done): Done

I haven’t redacted the fingerprints because I will start the server from scratch once it works.

I hadn’t noticed the line "Unable to find IPv4 address for ORPort 9001" before; the problem probably comes from this, but I don’t understand it…

I think you are missing some conf there:

# We tell tor to listen on this port but not advertise it
ORPort 9001 NoAdvertise IPv4Only
# We tell tor to advertise this port but not listen on it, because this is the port mapped by the Kubernetes node --> this is the port to open on the firewall
ORPort 32150 NoListen IPv4Only
# Put your public IP here
Address <public_ip>

Now for the k8s conf I use helm to deploy my nodes:

deployment.yml

        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          command: ["/usr/sbin/tor"]
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          volumeMounts:
            - mountPath: /etc/tor
              name: etc-volume
            - mountPath: /var/lib/tor
              name: var-volume
            - mountPath: /var/log/tor
              name: log-volume
          ports:
            - name: tor
              containerPort: 9001
              protocol: TCP
            - name: torm
              containerPort: 9036
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 9001
          readinessProbe:
            tcpSocket:
              port: 9001
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      volumes:
        - name: etc-volume
          hostPath:
            path: /etc/tor-k8s
        - name: var-volume
          hostPath:
            path: /var/lib/tor-k8s
        - name: log-volume
          hostPath:
            path: /var/log/tor-k8s
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

service.yaml

apiVersion: v1
kind: Service
metadata:
  name: {{ include "torap.fullname" . }}
  labels:
    {{- include "torap.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: 9001
      targetPort: 9001
      protocol: TCP
      nodePort: 32150
      name: tor
    - port: 9036
      targetPort: 9036
      protocol: TCP
      nodePort: 32151
      name: torm
  selector:
    {{- include "torap.selectorLabels" . | nindent 4 }}

I have hardcoded some values just to show you; hope this helps!


Hello, thank you very much!

I have translated your Helm charts to my Kustomize version (still using your hardcoded values):
deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tor-relay
spec:
  selector:
    matchLabels:
      app: tor-relay
  replicas: 1
  template:
    metadata:
      labels:
        app: tor-relay
    spec:
      containers:
      - name: tor-relay
        image: leplusorg/tor:latest
        imagePullPolicy: Always
        command: ["tor", "-f", "/etc/tor/torrc"]
        ports:
        - name: tor
          containerPort: 9001
          protocol: TCP
        - name: tor-dir
          containerPort: 9030
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: 9001
        readinessProbe:
          tcpSocket:
            port: 9001
        volumeMounts:
        - mountPath: "/var/lib/tor"
          name: tor-volume
        - name: config-volume
          mountPath: /etc/tor
      volumes:
      - name: tor-volume
        persistentVolumeClaim:
          claimName: tor-relay-pvc
      - name: config-volume
        configMap:
          name: torrc
          items:
          - key: torrc
            path: torrc

service.yml

apiVersion:  v1
kind: Service
metadata:
  name: tor-relay-svc
spec:
  type: NodePort
  selector:
    app: tor-relay
  ports:
  - name: tor
    protocol: TCP
    port: 9001
    targetPort: 9001
    nodePort: 32150
  - name: tor-dir
    protocol: TCP
    port: 9030
    targetPort: 9030
    nodePort: 32151

and the relevant torrc parts look like this now:

    Address 77.207.207.191

    ORPort 9001 NoAdvertise IPv4Only
    ORPort 32150 NoListen IPv4Only

    DirPort 9030 NoAdvertise IPv4Only
    ORPort 32151 NoListen IPv4Only

I have NAT-forwarded ports 32150-32151 from my router to the same ports on the k8s node running the pod.
But it is still not reachable, even though everything looks good to me now:

Aug 15 19:56:36.005 [notice] Tor 0.4.7.8 running on Linux with Libevent 2.1.12-stable, OpenSSL 1.1.1q, Zlib 1.2.12, Liblzma 5.2.5, Libzstd 1.5.2 and Unknown N/A as libc.
Aug 15 19:56:36.005 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://support.torproject.org/faq/staying-anonymous/
Aug 15 19:56:36.005 [notice] Read configuration file "/etc/tor/torrc".
Aug 15 19:56:36.006 [notice] Based on detected system memory, MaxMemInQueues is set to 3984 MB. You can override this by setting MaxMemInQueues by hand.
Aug 15 19:56:36.007 [notice] Opening OR listener on 0.0.0.0:9001
Aug 15 19:56:36.007 [notice] Opened OR listener connection (ready) on 0.0.0.0:9001
Aug 15 19:56:36.007 [notice] Opening Directory listener on 0.0.0.0:9030
Aug 15 19:56:36.007 [notice] Opened Directory listener connection (ready) on 0.0.0.0:9030
Aug 15 19:56:36.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Aug 15 19:56:36.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Aug 15 19:56:36.000 [notice] Configured to measure statistics. Look for the *-stats files that will first be written to the data directory in 24 hours from now.
Aug 15 19:56:36.000 [warn] You are running Tor as root. You don't need to, and you probably shouldn't.
Aug 15 19:56:36.000 [notice] Your Tor server's identity key  fingerprint is 'prrlvrRelay 4C9EDFDD2172F01375206F61816BD392C55A6FCA'
Aug 15 19:56:36.000 [notice] Your Tor server's identity key ed25519 fingerprint is 'prrlvrRelay xq07kstMT71VpV9VhSXWcvI2aZDsHLgTgNe8GhZcubU'
Aug 15 19:56:36.000 [notice] Bootstrapped 0% (starting): Starting
Aug 15 19:56:37.000 [notice] Starting with guard context "default"
Aug 15 19:56:44.000 [notice] Bootstrapped 5% (conn): Connecting to a relay
Aug 15 19:56:44.000 [notice] Bootstrapped 10% (conn_done): Connected to a relay
Aug 15 19:56:44.000 [notice] Bootstrapped 14% (handshake): Handshaking with a relay
Aug 15 19:56:44.000 [notice] Bootstrapped 15% (handshake_done): Handshake with a relay done
Aug 15 19:56:44.000 [notice] Bootstrapped 75% (enough_dirinfo): Loaded enough directory info to build circuits
Aug 15 19:56:44.000 [notice] Bootstrapped 90% (ap_handshake_done): Handshake finished with a relay to build circuits
Aug 15 19:56:44.000 [notice] Bootstrapped 95% (circuit_create): Establishing a Tor circuit
Aug 15 19:56:45.000 [notice] Bootstrapped 100% (done): Done
Aug 15 19:56:45.000 [notice] Now checking whether IPv4 ORPort 77.207.207.191:32150 is reachable... (this may take up to 20 minutes -- look for log messages indicating success)
Aug 15 20:16:43.000 [warn] Your server has not managed to confirm reachability for its ORPort(s) at 77.207.207.191:32150. Relays do not publish descriptors until their ORPort and DirPort are reachable. Please check your firewalls, ports, address, /etc/hosts file, etc.

I think I’ve spotted your problem.
Assuming the machine that’s forwarding your connections is the NAT, and the NAT is forwarding 32150 and 32151 to address 10.233.66.66:

Two things. First, you’re only forwarding ports 32150-32151 to your relay? Externally your NAT accepts and directs requests on 32150-32151, but your relay isn’t listening on those ports.
Second, your ORPort listener is listening blind on 0.0.0.0:9001. Set your ORPort to the IP address the NAT is forwarding 32150 and 32151 to, assuming that’s 10.233.66.66.

This way you’re specifying the address to listen on and the ports that are actually forwarded by the NAT.
Since you’re also accepting directory mirroring, set your torrc DirPort line to 32151.
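So something roughly like this in the torrc (just a sketch, assuming 10.233.66.66 is the address the NAT forwards those ports to):

Address <public_ip>
ORPort 10.233.66.66:32150 IPv4Only
DirPort 10.233.66.66:32151 IPv4Only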

The other way is to advertise on 32150 but accept on 9001, but that takes some forwarding trickery to work correctly, and I’m not an expert on Kubernetes either. This seems to be your issue, I believe.

You might also want to double-check that your DNS is working correctly; it’s sneaky sometimes, but it could also be an issue if the address simply isn’t resolving like it should.

Hope this helps.

I’m not sure having two ORPorts advertised works?

Maybe try removing the dir stuff and see how it goes. You can also try disabling iptables/nf_tables on your node if it is running Linux!

He isn’t advertising two: he’s advertising ORPort 32150 but listening on 9001, and the NAT isn’t redirecting connections arriving on port 32150 to destination port 9001. It’s only redirecting 32150 to 32150.

Either the NAT needs to accept external connections on the advertised port and forward them to 9001 internally on the relay, or change the relay to listen directly on the ports being forwarded by the NAT.

I am advertising port 32150 and listening on 9001, because the NAT redirects port 32150 to the node, and the service listens on 32150 with target port 9001.

For me it goes:
32150 → 32150 → 9001
NAT → K8S → container
Is there any misunderstanding?

I removed the dir part to test, but it is still not working.
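For what it’s worth, one way to check each hop independently of tor would be something like this (just a sketch; the node and pod IPs are placeholders):

# from outside the network: does the advertised port answer?
nc -vz 77.207.207.191 32150
# from inside the LAN: does the NodePort on the k8s node answer?
nc -vz <node_ip> 32150
# from the node itself: does the pod answer on the container port?
nc -vz <pod_ip> 9001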

Yes, he actually is:

ORPort 9001 NoAdvertise IPv4Only
ORPort 32150 NoListen IPv4Only
DirPort 9030 NoAdvertise IPv4Only
ORPort 32151 NoListen IPv4Only

Ah. I see.
In that case, if you want to accept connections on 32150 while tor listens on 9001, you’ll have to set up some local forwarding on the node using iptables or whatever the firewall CLI is on your machine.

I think your issue is that your node is accepting connections on 32150 while tor is listening on 9001: once a connection arrives on 32150, nothing knows what to do with it, since the tor service is only listening on 9001. If there’s no forwarding, that’s where your disconnect is. Try forwarding connection attempts on 32150 to 9001 locally on the relay.
This could potentially be done by telling tor to listen on port 9001 locally (127.0.0.2 or something) and setting up the firewall/ipfilter to take connections made externally on 32150 and redirect them to localhost port 9001. This is the forwarding trickery I mentioned earlier.
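A rough sketch of what that could look like with iptables on the node (eth0 is a placeholder for the external interface; this is only an illustration of the idea):

# redirect external TCP connections arriving on 32150 to local port 9001
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 32150 -j REDIRECT --to-ports 9001

Keep in mind that with a NodePort service, kube-proxy is already programming iptables to do an equivalent redirect, so a manual rule like this may be redundant or conflict with it.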

@prrlvr

Any updates? Still having issues or did you get it worked out?