Bug #5149
closed
memory leak(s) in strongswan
Added by Chris Buechler about 9 years ago.
Updated about 9 years ago.
Description
There is a memory leak of some sort in strongswan with FreeBSD. Ticket opened by one of our users in their bug tracker.
https://wiki.strongswan.org/issues/1106
I've confirmed it leaks memory on stock FreeBSD with stock strongswan from FreeBSD ports as well. It seems like it may not be as bad as with our strongswan port, so there may be multiple issues at hand.
- Assignee set to Chris Buechler
- Assignee changed from Chris Buechler to Luiz Souza
Disabling all logging, with the following in strongswan.conf (the daemon and auth facility sections belong under charon's syslog section):
charon {
    syslog {
        daemon {
            ike_name = yes
            default = -1
        }
        # disable logging under auth so logs aren't duplicated
        auth {
            default = -1
        }
    }
}
appears to stop the memory leak. Memory RRDs from two endpoints with the same config, the only difference being one with default logging and one with no logging, are attached. Before disabling logging, they leaked equally. The one without logging isn't leaking at all thus far. Logging was disabled at around 16:30. It's been through 5 rekeying cycles now; leaving it running.
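The growth the RRD graphs show can also be quantified as a leak rate. A minimal sketch of that idea (a hypothetical helper, not something from this thread): fit a least-squares slope to periodic (timestamp, RSS) samples of the charon process and express it in MB/day.

```python
def leak_rate_mb_per_day(samples):
    """Estimate memory growth from (t_seconds, rss_kib) samples.

    Fits a least-squares line to the samples and converts the slope
    from KiB/second to MiB/day. A near-zero result over several rekey
    cycles suggests the leak is gone; a steady positive slope matches
    the linear climb described in this thread.
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_r = sum(r for _, r in samples) / n
    num = sum((t - mean_t) * (r - mean_r) for t, r in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    kib_per_sec = num / den
    return kib_per_sec * 86400 / 1024  # KiB/s -> MiB/day
```

Feeding it samples taken every few minutes (e.g. from `ps -o rss= -p <charon pid>` on FreeBSD) gives a single number to compare before and after a config change.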
If this seems to be working i can test on my box, Just need to know how i can edit the strongswan.conf without it being overwritten.
- Status changed from Confirmed to Feedback
The next snapshot build run should have strongswan compiled with --with-printf-hooks=vstr, which is the best option we've found to significantly reduce the memory leaks. Disabling logging completely gets rid of much of the issue; disabling only IKE logging gets rid of quite a bit of it, but not as much as disabling all logging. vstr looks to leave only very minimal leaks: with 400 tunnels rekeying hourly, memory has grown only several MB in 3+ days, whereas without vstr the same circumstance has leaked >200 MB.
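The printf hooks backend is selected when strongswan is configured, not at runtime. A minimal sketch of the build step, assuming a strongswan source tree with the Vstr library and headers already installed (only the --with-printf-hooks flag comes from this thread; everything else is illustrative):

```shell
# Configure strongswan to use the Vstr string library for its
# printf hooks instead of the glibc/builtin backend, then build.
# --with-printf-hooks accepts glibc, vstr, or builtin.
./configure --with-printf-hooks=vstr
make
make install
```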
2.2.5-DEVELOPMENT (amd64) built on Tue Oct 27 10:31:57 CDT 2015
Not completely definitive yet but the graph I posted here:
https://forum.pfsense.org/index.php?topic=101468.msg566173#msg566173
shows memory usage running in a rather more sane manner than it has in the past. I won't bother posting the longer-term graphs, but they all show the same pattern: a 2-day linear climb to nearly 100% "Inactive", which I believe is effectively swap utilization. The box runs like that for several days and then crashes. It's done that since around mid-April.
(VMware 5.5, 2 GB vRAM + 2 GB swap, 51 IPsec P1s)
- Status changed from Feedback to Resolved
Switching to --with-printf-hooks=vstr resolved the most significant memory leaks.