Bug #4790


Established IPsec tunnel suddenly stops transporting traffic; the firewall then refuses any rule-based traffic to anywhere

Added by Ingo-Stefan Schilling almost 7 years ago. Updated almost 7 years ago.

Status: Not a Bug



  • In general
  • Everything is IPv4 for now.
  • Local office network running pfSense in Hyper-V on a fairly potent machine with 2 provisioned cores and 4 GB of memory (non-dynamic).
    • Fixed IP on VDSL with 50 Mbit down / 10 Mbit up; this line is used only for office-to-DC connectivity, no other traffic uses it.
    • Dynamic cable-modem connection used for all other traffic.
    • LAN interface serving just AD and the DSL connection to the internal network.
  • DC where pfSense again runs on Hyper-V, but with 12 provisioned cores and 6 GB of memory (non-dynamic).
    • Fixed IPs on two interfaces, of which one is routed through the other; another interface is for LAN transport.
    • Bandwidth is limited only by the DC's connection and hence 'unlimited' ;)

IPSec Configuration

Phase 1
    Remote Gateway: 80.x.x.x
    P1 Transforms:  AES (256 bits), SHA1
    P1 Description: IPSec C6@Home

Phase 2
    Mode:           tunnel
    Local Subnet:   LAN
    Remote Subnet:  10.x.x.x/24
    P2 Protocol:    ESP
    P2 Transforms:  AES (auto), SHA1
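For reference, pfSense 2.2 drives strongSwan underneath, so the phase settings above map roughly onto an ipsec.conf connection like the following. This is a sketch only: the connection name, the placeholder local subnet, and the IKEv1/main-mode assumption are mine; the masked 80.x.x.x / 10.x.x.x values are kept as in the report.

```
conn c6-home
        keyexchange=ikev1           # mode column is blank in the report; IKEv1 assumed
        type=tunnel                 # P2 mode: tunnel
        left=%defaultroute
        leftsubnet=192.168.1.0/24   # "LAN" local subnet (placeholder)
        right=80.x.x.x              # remote gateway from the P1 table
        rightsubnet=10.x.x.x/24     # remote subnet from the P2 table
        ike=aes256-sha1!            # P1 transforms: AES (256 bits) / SHA1
        esp=aes-sha1!               # P2 transforms: ESP with AES (auto) / SHA1
        auto=start
```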
  • And of course the mirror of this configuration on the recipient side.
  • Rules are set to allow any traffic from within each net to the other.
  • Independent of the traffic, the tunnel stops transporting after a while (several minutes to several hours) and pfSense on one side or the other has to be rebooted, since starting and stopping services doesn't help (or I am starting/stopping the wrong ones, which isn't unlikely). In most cases the pfSense on the affected side has to be rebooted; however, I can't tell from the logs which one that is. I can only tell because that pfSense becomes unresponsive to anything rule-related at all :(

--> This is the really annoying part: I always have to access the DC via other tools just to reboot at least the VM.

  • CPU/memory load is in an acceptable range, below 20%, on both sides.
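Since the logs do not reveal which endpoint has stalled, one way to narrow it down is a small liveness monitor run behind each firewall, so each side's log records exactly when it stopped seeing traffic from the far side. A minimal sketch, assuming ICMP is allowed through the tunnel rules; the host address is a placeholder, since the report masks the real 10.x.x.x subnet:

```python
"""Tunnel liveness monitor: run one copy behind each firewall so that each
side's own log pinpoints the moment the tunnel stopped passing traffic."""
import subprocess
import time
from datetime import datetime
from typing import Optional, Tuple


def next_state(prev_up: bool, probe_ok: bool) -> Tuple[bool, Optional[str]]:
    """Return (new_state, message). A message is produced only on a
    transition, so the log shows exactly when the tunnel went down or
    came back, rather than a line per probe."""
    if probe_ok and not prev_up:
        return True, "tunnel UP"
    if not probe_ok and prev_up:
        return False, "tunnel DOWN"
    return prev_up, None


def probe(host: str) -> bool:
    """Send one ICMP echo request; True means the far side answered.
    (The -W timeout flag is interpreted differently on Linux vs. FreeBSD.)"""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", host],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


def monitor(host: str = "10.0.0.1", interval: float = 30.0) -> None:
    """Loop forever, logging UP/DOWN transitions with timestamps.
    'host' should be a machine inside the remote tunnel subnet
    (placeholder address here)."""
    up = True  # assume the tunnel starts out healthy
    while True:
        up, msg = next_state(up, probe(host))
        if msg:
            print(f"{datetime.now().isoformat()} {msg}", flush=True)
        time.sleep(interval)
```

Comparing the "tunnel DOWN" timestamps from both sides should show which pfSense stopped responding first, without relying on the firewall's own logs.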

I am happy to deliver logs etc. according to your needs.

Actions #1

Updated by Chris Buechler almost 7 years ago

  • Status changed from New to Feedback

I'm guessing the IPsec service is one you've restarted in the process? There is nothing a reboot does that restarting that service doesn't accomplish. Since you're on 2.2.3 using AES-CBC, disable AES-NI in that circumstance (#4791), though that should have left things completely non-functional if it were the cause.
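Both the service restart and the AES-NI workaround can be tried from the pfSense shell. A sketch for 2.2.x, where strongSwan supplies the `ipsec` starter CLI; the `aesni` kernel-module name is an assumption about how acceleration is loaded on this box:

```shell
# Check IKE / CHILD SA state before blaming the tunnel itself
ipsec statusall

# Restart only the IPsec service (the charon daemon) instead of rebooting
ipsec restart

# For the #4791 workaround: see whether AES-NI is loaded, then unload it
kldstat | grep -i aesni
kldunload aesni
```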

Actions #2

Updated by Ingo-Stefan Schilling almost 7 years ago

Thank you for your update and feedback. I found meanwhile that this at least made the IPsec tunnels work stably for the past 3 days (see my last entry and the case itself). However, I do not want to believe that this is something that affects IPsec tunnels, no? At least, that would lead me to believe there is a bigger issue in the current pfSense implementation, well beyond IPsec and Snort, and that's something I don't even want to think about.

Some updates to the initially reported incident:
  • I initially had both sides running on Hyper-V; the local (branch) office is now on plain PC hardware, while the DC pfSense is, for several reasons, still on Hyper-V (see the forum link for a few more details). The change made the IPsec tunnel and pfSense 'crashing' happen more often on the Hyper-V side, but not only there. The network cards are set to 'non-legacy' in Hyper-V. Aside from your suggestion, I'll investigate switching the NICs to legacy during the next week, because on Xen and ESXi that is a 'typical' FreeBSD problem, which brings me to my next point.
  • It happened (every second or third time) that not only was the VM unresponsive and rebooting pfSense didn't work, but the host also had to be rebooted; a problem I reported to MS, and they are (hopefully) investigating it (they got our full system)...
    • This may also relate to the poor Intel server motherboard NIC drivers, but that's a different problem which may be solved with legacy cards.
  • I did restart the IPsec service from the shell, but nothing happened, while rebooting pfSense entirely often did solve the issue, which makes me believe the Snort issue mentioned above may be an additional troublemaker. However, I am not deep enough into the architecture of FreeBSD and pfSense to analyze this further.
  • I'll also try your suggested configuration change; however, I'll wait until next week, since I'll have much more time then, and also to cross-check whether everything is now working stably.

Thank you for your support,

Ingo-Stefan Schilling

Actions #3

Updated by Chris Buechler almost 7 years ago

  • Status changed from Feedback to Not a Bug
  • Affected Version deleted (2.2.3)

That definitely sounds like you have a Snort signature set enabled that's too touchy, and it blocked the remote endpoint. We've seen a variety of IPsec-related signatures trigger false positives on perfectly normal traffic. A problem that stopping and starting the IPsec service doesn't fix is definitely not an IPsec issue.

Actions #4

Updated by Ingo-Stefan Schilling almost 7 years ago

Hi Chris,

I know; that's why, before I opened this bug, I at least tried it for two days without Snort... and the issue also happened without Snort. I then turned Snort on and observed the situation described above.

So the issue persists with or without Snort; the Snort memory thing came on top of it, but I won't say it is the issue.

A very important thing I observed after playing around last night is that I can reproduce the behavior that fast and easily only on Hyper-V. Last night I tried the same scenario with ESXi and, in parallel, with Xen and even VirtualBox; the latter two had different but somehow similar issues. To stay as close as possible to the Hyper-V settings, I restored the same configuration, except for the NIC names.

  • In all three scenarios I hadn't turned off Snort at all, since I was trying to concentrate on the IPsec issue. I was also told that the Snort filters may be the issue, as you say, but I can't find a 'catch' in the log telling me that Snort captured and blocked the tunnel.
  • ESXi was the only one that worked stably, after your configuration change, which I hereby confirm as a workaround for #4791, as suggested.
  • Xen and VirtualBox stayed stable longer, but actually 'gave up' this morning after ~6 hours of stable tunnel, which is notable; more importantly, that was without the suggested configuration change. With the change, the tunnels still seem stable as I write this.
  • On the original Hyper-V I have now turned IPsec off and left Snort in the described state, but established OpenVPN tunnels instead; these have been working stably since this morning, so already > 1h-4h, which was so far the typical IPsec lifetime. I want to stress that the Snort filters are untouched and the same as in the IPsec scenario on the same machine. I also added the same rules on the OpenVPN tunnel interface as on the IPsec tunnel interface, so nothing is 'different' in the configuration here either, apart from the tunnel interface of course.
My conclusion
  • Hyper-V (and everything except ESXi) is definitely a contributor to this issue; that's what I can say with absolute certainty.
  • I can't dig much deeper into it, due to lack of knowledge and time; however, I am absolutely sure that there is a bug here.
    • That includes not being sure whether it is Snort configuration/bug related, since I feel I have enough evidence for that as well.
  • I'll wait to turn IPsec on again until the next pfSense version comes out with the AES functionality fixed.
    • I might 'reopen' this issue then.

So thanks for your time, and let's see whether I have to bug you again on this (bow).

