Memory limits for Snowflake process

What if you build a 32-bit Snowflake and set the start script to restart it in case of a crash?
That way it reaches 2 GB, crashes, and restarts.

I can write a program which will track the RAM usage of the 64-bit process and restart it if needed.
Making hacks is no problem; I want a proper solution.
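Since the Snowflake proxy is written in Go, such a watchdog hack could be sketched in Go itself. This is a minimal stdlib-only sketch, assuming a Linux host with procfs; the 2 GB threshold is hypothetical, and for demonstration it checks its own PID rather than the proxy's:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// rssKB parses VmRSS (resident set size, in kB) from the text of
// /proc/<pid>/status. It returns 0 if the field is absent.
func rssKB(status string) int {
	sc := bufio.NewScanner(strings.NewReader(status))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "VmRSS:") {
			fields := strings.Fields(line)
			if len(fields) >= 2 {
				kb, _ := strconv.Atoi(fields[1])
				return kb
			}
		}
	}
	return 0
}

func main() {
	const limitKB = 2 * 1024 * 1024 // hypothetical 2 GB threshold
	pid := os.Getpid()              // in practice: the proxy's PID
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/status", pid))
	if err != nil {
		fmt.Println("procfs unavailable (not Linux?):", err)
		return
	}
	if rssKB(string(data)) > limitKB {
		fmt.Println("over limit: a real watchdog would restart the proxy here")
	} else {
		fmt.Println("within limit")
	}
}
```

A real version would poll in a loop and kill/re-exec the proxy process, which is exactly the kind of workaround being argued against here.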

I'm inclined to agree with you; hacky workarounds shouldn't be the go-to solution.

Possibly a dumb question, but I don't think it's been asked yet. Is this perhaps the result of shitty memory management as opposed to a glitch?

I don't understand what you mean.
If a program allocates more memory than needed, that's a problem with memory management.
But it's also a bug, since such incorrect management allows DoS attacks.
If you are saying that it may be the OS that manages memory wrong, then the answer is no. The OS can't allocate 8 GB if you ask for 100 MB. OS memory management problems are more subtle and usually not so easily triggered.

Apologies, not the OS. I agree that the OS's buffer wouldn't be enough to cause this.

I mean in Snowflake: is it possible that Snowflake, or some interaction between Snowflake and Tor, and how it manages memory, could be responsible?

Small clarification: the Snowflake proxy does not need the Tor binary to work at all.
It communicates with other software via network protocols:
Network ↔ Snowflake proxy ↔ Network

Which means that there are not many places where memory may be managed wrong:

  1. OS level: kernel mode and user mode libraries. (less likely)
  2. Go libraries level: WebRTC, DTLS, … (most likely)
  3. Snowflake proxy by itself. (average likelihood)

Also, the problem may lie somewhere between layers.

Thank you for that, I was under the impression the proxy ran on top of the Tor network, or communicated with the Tor gateway. Thank you for clearing that up.

I'm going to spin up a few VMs in the cloud this weekend with identical hardware and run 4 different Linux distros, and see if I can reproduce a similar issue. If I do, I'll share the data and logs so we can find the root cause of this issue and bring it to the attention of the Snowflake devs.

However, if it does lie somewhere between layers, I'm fairly confident the only way we'll catch it in the act is with a raw capture. If it's in layer 7, at least we know it's in the program and not the network; if it's somewhere below layer 4, well, that's a whole other problem we'll have to investigate. If it's not an application layer issue, it'll be a pain in the ass, I'm sure.

But I'm pretty sure we can rule out the transport layer, unless it's an issue with wrapping; but considering the program is standalone from Tor, I don't see how it could be related to TLS unless it's a crypto library issue, which has been a known issue as of late. But I don't want to speculate on whether it's the issue here, as that issue has mostly cropped up in relation to libssl1.1.1 and its variations directly related to tor and torsocks, and it didn't have anything to do with buffers as far as I know.

The Snowflake proxy communicates with the Tor network, but it does not need the Tor binary for that.
So it is somewhat isolated from problems which may come from Tor.

Sorry, typo. Was before coffee kicked in.

Just wanted to give you an update @Vort
Which is to say, no update: the 4 Linux distros I spun up didn't reproduce the issue. They were a bit wonky, but nothing unusual overall. Sorry bud. I'll keep looking into it.

I have a slight update too.
The main problem, with 8 GB of RAM used, is no longer reproducible for me either.
But several days ago I cleared the working set with RamMap, and some amount of RAM is still not used by the proxy after several days of operation.
Now the process uses 550 MB of virtual memory, but only 160 MB of it is contained in the working set.
I think it is bad no matter what the reason for such behaviour is.
It is either a memory leak or some problem which looks exactly like a memory leak.
It is possible to chase this problem instead of the "8 GB problem".
If someone is able to reproduce it, of course.
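For anyone trying to reproduce the virtual-memory vs. working-set gap described above on Linux (where the rough analogue of the Windows working set is the resident set), the two numbers can be read from procfs with stdlib-only Go. The field names `VmSize` and `VmRSS` are real `/proc/<pid>/status` fields; checking the current process is just for demonstration:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// memField extracts a kB-valued field (e.g. "VmSize" or "VmRSS")
// from the text of /proc/<pid>/status. Returns 0 if absent.
func memField(status, key string) int {
	for _, line := range strings.Split(status, "\n") {
		if strings.HasPrefix(line, key+":") {
			f := strings.Fields(line)
			if len(f) >= 2 {
				kb, _ := strconv.Atoi(f[1])
				return kb
			}
		}
	}
	return 0
}

func main() {
	data, err := os.ReadFile("/proc/self/status") // would be the proxy's PID in practice
	if err != nil {
		fmt.Println("procfs unavailable (not Linux?):", err)
		return
	}
	s := string(data)
	fmt.Printf("virtual: %d kB, resident: %d kB\n",
		memField(s, "VmSize"), memField(s, "VmRSS"))
}
```

A large persistent gap between the two (like 550 MB vs. 160 MB) would be the behaviour worth logging over several days.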