Bug #10465
possible routing performance regression due to non use of ip_tryforward
Status: Closed
Description
A few years back Netgate sponsored upstream enhancements to FreeBSD that replaced ip_fastforward() with ip_tryforward(), and these subsequently appeared in pfSense 2.3.
Whilst researching FreeBSD tuning (https://bsdrp.net/documentation/examples/forwarding_performance_lab_of_a_pc_engines_apu2) I noticed the recommendation to disable ICMP redirects.
That recommendation stems from a patch applied to FreeBSD in August 2018 to fix ICMP redirects ... FreeBSD 11-STABLE patch: https://svnweb.freebsd.org/base?view=revision&revision=338343
As far as I can tell, pfSense 2.4.5 has both the IPv4 and IPv6 ICMP redirect sysctls defaulting to on, which, based on the patch above, would now appear to disable the tryforward path:
net.inet.ip.redirect: 1
net.inet6.ip6.redirect: 1
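For reference, this can be double-checked directly on the box; sysctl -d prints the description for each knob, and the grep is only there to confirm nothing local already overrides the defaults (the config file paths are an assumption for a stock pfSense/FreeBSD install):

# descriptions and current values of both redirect knobs
sysctl -d net.inet.ip.redirect net.inet6.ip6.redirect
sysctl net.inet.ip.redirect net.inet6.ip6.redirect
# confirm nothing locally overrides them already (paths assumed)
grep -i redirect /etc/sysctl.conf /boot/loader.conf.local 2>/dev/null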
The workaround is obviously trivial:
sysctl net.inet.ip.redirect=0
sysctl net.inet6.ip6.redirect=0
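To make that survive a reboot, a rough sketch for a plain FreeBSD router is below; on pfSense the equivalent would presumably be two entries under System > Advanced > System Tunables rather than editing files by hand, since /etc/sysctl.conf is regenerated:

# /etc/sysctl.conf - persist the workaround on plain FreeBSD
net.inet.ip.redirect=0
net.inet6.ip6.redirect=0
# apply without rebooting
service sysctl restart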
Like many of us I'm away from my office and so lack proper test equipment... so I did a rudimentary analysis based on CPU usage.
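For anyone with kit to hand, a rough way to reproduce the measurement (the tool choice and host placeholders are assumptions, not what was used here):

# on the LAN-side host
iperf3 -s
# on the WAN-side host, push traffic through the firewall for 60s
iperf3 -c <lan-host> -t 60
# on the firewall, in parallel, sample CPU while the test runs
vmstat 2 30 | tee vmstat-default.txt
# then disable redirects and repeat the iperf3 run
sysctl net.inet.ip.redirect=0 net.inet6.ip6.redirect=0
vmstat 2 30 | tee vmstat-noredirect.txt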
Sample vmstat 2 output from APU2 (default) - test throughput ~85Mb/s WAN->LAN
procs  memory      page                     disks    faults              cpu
r b w  avm   fre   flt re  pi  po  fr  sr  md0 ad0   in     sy    cs     us sy id
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   18750  319   38839   2 14 84
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   16177  276   33722   1 15 84
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   14841  139   31430   1 13 86
0 0 0  655M  3.4G    4  0   0   0   0   9    0   0   15865  109   33206   0 13 87
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   16218  130   33776   1 14 85
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   14252  109   30215   1 10 89
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   15339  106   32168   1 13 86
Sample vmstat 2 output from APU2 (ICMP redirects disabled) - test throughput ~85Mb/s WAN->LAN
procs  memory      page                     disks    faults              cpu
r b w  avm   fre   flt re  pi  po  fr  sr  md0 ad0   in     sy    cs     us sy id
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   18644  111   37189   0 15 84
0 0 0  655M  3.4G    2  0   0   0   0   6    0   0   16840  129   34442   1 13 87
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   17776  118   36026   1 12 87
0 0 0  655M  3.4G    4  0   0   0   0   6    0   0   16221  109   33478   0 13 86
0 0 0  655M  3.4G    6  0   0   0   0   6    0   0   17251  127   35166   1 12 87
0 0 0  655M  3.4G    2  0   0   0   0   6    0   0   18232  136   37086   1 13 86
0 0 0  655M  3.4G    5  0   0   0   0   6    0   1   18724  202   37911   1 14 85
Not entirely scientific, but it shows a modest 1-2% CPU drop. For GbE-level bandwidths the improvement is likely to be larger.
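For a slightly less eyeball-based comparison, the cpu columns of a saved vmstat run can be averaged; a rough sketch (the file name matches the capture step above, and the column positions assume the 19-column layout shown in the samples: us=$17, sy=$18, id=$19):

awk 'NF==19 && $1 ~ /^[0-9]+$/ {us+=$17; sy+=$18; id+=$19; n++}
     END {if (n) printf "samples=%d  avg us=%.1f sy=%.1f id=%.1f\n", n, us/n, sy/n, id/n}' vmstat-default.txt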
Assuming these results can be confirmed, perhaps the sysctl defaults can be changed in a future release?