Bug #13154 (closed)

pfBlocker causing excessive CPU load

Added by Michael Novotny almost 2 years ago. Updated over 1 year ago.

Status: Resolved
Priority: Normal
Assignee:
Category: pfBlockerNG
Target version: -
Start date:
Due date:
% Done: 0%
Estimated time:
Plus Target Version:
Affected Version: 2.7.0
Affected Plus Version: 22.05
Affected Architecture: All

Description

After killing that process (/usr/local/bin/php_pfb), my bandwidth & CPU usage was back to normal.
I'm running this on 22.01-RELEASE and this CPU load was not occurring prior to 3.1.0_4.

Also posted on the forums - https://forum.netgate.com/topic/171527/3-1-0-4-high-cpu-load

I couldn't find where to post a bug on pfBlocker's GitHub: https://github.com/pfsense/FreeBSD-ports/commits/devel/net/pfSense-pkg-pfBlockerNG-devel


Related issues

Related to Regression #13156: pfBlockerNG IP block stats do not work (Resolved, Marcos M)

Has duplicate Bug #13180: High CPU Utilization with pfb_filter since pfBlockerNG update to devel 3.1.0_4 (Duplicate)

Actions #1

Updated by Jim Pingle almost 2 years ago

  • Project changed from pfSense to pfSense Packages
  • Category changed from Unknown to pfBlockerNG
  • Status changed from New to Duplicate
  • Release Notes deleted (Default)

Almost certainly a duplicate of #12827 and not a unique issue.

Actions #2

Updated by Michael Novotny almost 2 years ago

I'm the OP for that ticket, too.

Actions #3

Updated by Steve Wheeler almost 2 years ago

  • Status changed from Duplicate to Confirmed
  • Affected Version set to 2.7.0
  • Affected Plus Version set to 22.05
  • Affected Architecture All added
  • Affected Architecture deleted (7100)

Still seeing this in 2.7/22.05, so it seems unlikely to be a symptom of #12827, which is mostly fixed there.

The CPU loading does not start immediately after enabling pfBlocker; it takes a few minutes, as though some resource is being exhausted or a critical threshold is reached.

last pid: 90144;  load averages:  0.15,  0.13,  0.17                                                               up 0+04:41:38  22:50:56
614 threads:   5 running, 583 sleeping, 26 waiting
CPU 0: 34.3% user,  0.0% nice, 22.0% system,  0.0% interrupt, 43.7% idle
CPU 1: 21.3% user,  0.0% nice, 25.6% system,  0.0% interrupt, 53.1% idle
Mem: 147M Active, 433M Inact, 450M Wired, 866M Free
ARC: 171M Total, 49M MFU, 115M MRU, 32K Anon, 1231K Header, 5189K Other
     107M Compressed, 255M Uncompressed, 2.39:1 Ratio
  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   11 root        155 ki31     0B    32K RUN      1 266:12  50.19% [idle{idle: cpu1}]
   11 root        155 ki31     0B    32K RUN      0 268:19  46.77% [idle{idle: cpu0}]
97159 root         52    0    60M    42M piperd   1   0:01   2.92% /usr/local/bin/php_pfb -f /usr/local/pkg/pfblockerng/pfblockerng.inc fi
70589 root         20    0   174M   135M nanslp   1   3:54   1.27% /usr/local/bin/suricata -i em0 -D -c /usr/local/etc/suricata/suricata_4
70589 root         20    0   174M   135M RUN      1   2:21   0.82% /usr/local/bin/suricata -i em0 -D -c /usr/local/etc/suricata/suricata_4
   12 root        -60    -     0B   416K WAIT     1   0:33   0.20% [intr{swi4: clock (0)}]
    0 root        -76    -     0B   608K -        1   0:27   0.16% [kernel{if_config_tqg_0}]
    0 root        -92    -     0B   608K -        0   0:22   0.14% [kernel{dummynet}]
45841 root         20    0    14M  5024K CPU0     0   0:24   0.12% top -HaSP

Actions #4

Updated by Denny Page almost 2 years ago

The issue apparently stems from the output of "pfctl -vvsr" changing in 22.05. Due to the change in output, pfBlockerNG is unable to read the list of its rules out of the table. The result is a complete scan of all rules for each log entry processed.

The patch below addresses the issue for pfBlockerNG-devel. I make no representation that this is the best way to address the issue.

*** pfblockerng.inc.org    Tue Mar 29 08:31:43 2022
--- pfblockerng.inc    Tue Jun 14 10:05:51 2022
***************
*** 4136,4142 ****
          foreach ($results as $result) {
              if (substr($result, 0, 1) == '@') {

!                 $r = explode(')', $result, 2);

                  // pfSense > v2.6 uses an 'ridentifier' string
                  if (strpos($result, 'ridentifier') != FALSE) {
--- 4136,4142 ----
          foreach ($results as $result) {
              if (substr($result, 0, 1) == '@') {

!                 $r = explode(' ', $result, 2);

                  // pfSense > v2.6 uses an 'ridentifier' string
                  if (strpos($result, 'ridentifier') != FALSE) {
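
To illustrate the failure mode, here is a minimal PHP sketch of what the delimiter change does. The sample rule line is a hypothetical approximation of the 22.05 "pfctl -vvsr" format, not a verbatim capture:

<?php
// Hypothetical 22.05-style rule line (format assumed for illustration;
// real lines are longer).
$line = '@0 block drop in log quick inet all ridentifier 1770009212';

// Old code: split on the first ')'. The 22.05 output no longer contains
// one on this line, so the whole string lands in $r[0], the '@<n>' rule
// key is never isolated, and every log entry falls back to a scan of
// all rules.
$r = explode(')', $line, 2);
print_r($r); // Array ( [0] => @0 block drop in log quick ... )

// Patched code: split on the first space, which reliably terminates the
// '@<n>' prefix in either format.
$r = explode(' ', $line, 2);
print_r($r); // Array ( [0] => @0 [1] => block drop in log quick ... )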

Actions #5

Updated by Marcos M almost 2 years ago

For reference, the Redmine ticket for that issue is here:
https://redmine.pfsense.org/issues/13156

Actions #6

Updated by Denny Page almost 2 years ago

Well... seeing that would have saved me a bunch of debugging...

Actions #7

Updated by Steve Wheeler almost 2 years ago

That one change looks to have solved the issue for me.

Testing in:

2.7.0-DEVELOPMENT (amd64)
built on Tue Jun 14 06:29:50 UTC 2022
FreeBSD 12.3-STABLE

With:
pfBlockerNG-devel 3.1.0_4

Actions #8

Updated by Marcos M almost 2 years ago

For reference, the patch to fix it is as follows:

diff --git a/net/usr/local/pkg/pfblockerng/pfblockerng.inc b/net/usr/local/pkg/pfblockerng/pfblockerng.inc
index 7fa8c1d2f8bf..2abbef30578b 100644
--- a/net/usr/local/pkg/pfblockerng/pfblockerng.inc
+++ b/net/usr/local/pkg/pfblockerng/pfblockerng.inc
@@ -4136,7 +4136,7 @@ function pfb_filterrules() {
         foreach ($results as $result) {
             if (substr($result, 0, 1) == '@') {

-                $r = explode(')', $result, 2);
+                $r = explode(' ', $result, 2);

                 // pfSense > v2.6 uses an 'ridentifier' string
                 if (strpos($result, 'ridentifier') != FALSE) {

Restart the pfb_filter service afterwards.
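
For anyone applying this by hand from the shell, a sketch; the diff file name is hypothetical, and the -p2 strip level follows from the a/net/... paths in the headers above:

# Save the diff above as /root/pfblockerng-explode.diff, then:
cd / && patch -p2 < /root/pfblockerng-explode.diff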

Actions #9

Updated by Michael Novotny over 1 year ago

The high CPU load is still occurring with this patch applied and running on 22.05, after reboots, reloading the package, etc. As stated by me and Steve, the CPU usage does not happen immediately, but when it does, the CPU goes over 7% as shown below. As a temporary fix, I have a once-a-minute cron job that runs killall php_pfb so that we can utilize near-full bandwidth over IPsec (~90 Mbps); otherwise throughput is less than half of that (a sketch of such an entry follows the top output below). This is on 7100s with over 1270 firewall rules in pfctl.

last pid: 45639;  load averages:  1.90,  1.54,  1.39  up 0+01:15:15    07:46:35
673 threads:   7 running, 624 sleeping, 42 waiting
CPU: 12.8% user,  0.2% nice, 13.9% system,  0.4% interrupt, 72.6% idle
Mem: 777M Active, 325M Inact, 928M Wired, 5831M Free
ARC: 419M Total, 168M MFU, 234M MRU, 8680K Anon, 1865K Header, 6337K Other
     172M Compressed, 497M Uncompressed, 2.88:1 Ratio
Swap: 1024M Total, 1024M Free

  PID USERNAME    PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
   11 root        155 ki31     0B    64K RUN      1  54:40  68.26% [idle{idle: cpu1}]
   11 root        155 ki31     0B    64K CPU0     0  55:47  66.06% [idle{idle: cpu0}]
   11 root        155 ki31     0B    64K RUN      3  55:24  65.97% [idle{idle: cpu3}]
   11 root        155 ki31     0B    64K CPU2     2  53:14  52.20% [idle{idle: cpu2}]
30485 root         77    0    12M  3232K CPU3     3   0:01  18.55% /sbin/pfctl -vvsr
 5504 root         52    0    71M    52M piperd   0   0:20   8.59% /usr/local/bin/php_pfb -f /usr/local/pkg/pfblockerng/pfblockerng.inc filterlog
    0 root        -92    -     0B   736K -        1   2:23   6.49% [kernel{if_io_tqg_1}]
    0 root        -92    -     0B   736K RUN      3   2:41   5.27% [kernel{if_io_tqg_3}]
    8 root        -16    -     0B    16K e6000s   0   4:28   4.79% [e6000sw tick kproc]
    0 root        -92    -     0B   736K -        0   2:14   3.56% [kernel{if_io_tqg_0}]
88009 root         52    0   150M    87M accept   0   1:54   2.49% php-fpm: pool nginx (php-fpm){php-fpm}
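
For reference, a sketch of the once-a-minute cron entry described above; the schedule, path, and /etc/crontab placement are assumptions, not the exact entry:

# /etc/crontab: kill the pfBlockerNG log parser every minute as a
# temporary workaround until the parsing fix lands.
* * * * *  root  /usr/bin/killall php_pfb
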
Actions #10

Updated by Denny Page over 1 year ago

Michael Novotny wrote in #note-9:

The high cpu is still occurring with this patch applied and running on 22.05, reboot, reloading package, etc. As stated by me and Steve, the CPU usage does not happen immediately, but when it does, the CPU goes over 7% as shown below.

Probably should confirm that the patch applied correctly. Assuming that you are running pfBlockerNG-devel, shasum output for pfblockerng.inc should be:

f8846925d60c29a4e189b554fe74c60ce6305c6b  /usr/local/pkg/pfblockerng/pfblockerng.inc

Actions #11

Updated by Michael Novotny over 1 year ago

Denny Page wrote in #note-10:

Probably should confirm that the patch applied correctly. Assuming that you are running pfBlockerNG-devel, shasum output for pfblockerng.inc should be:

f8846925d60c29a4e189b554fe74c60ce6305c6b /usr/local/pkg/pfblockerng/pfblockerng.inc

Interesting... I reinstalled pfBlocker (pfBlockerNG-devel 3.1.0_4) as I was not getting that value. The first sum was taken just after reloading. The second is with the patch, copied exactly from Marcos's note-8 above.

[22.05-RELEASE][admin@fw-hq1]/root: shasum /usr/local/pkg/pfblockerng/pfblockerng.inc
fc56abd134d3c35fe66a4736f080c545b42e827b  /usr/local/pkg/pfblockerng/pfblockerng.inc

[22.05-RELEASE][admin@fw-hq1]/root: shasum /usr/local/pkg/pfblockerng/pfblockerng.inc
7d26653c099158e752873c2843c426717dc4d9ed  /usr/local/pkg/pfblockerng/pfblockerng.inc

Actions #12

Updated by Denny Page over 1 year ago

Michael Novotny wrote in #note-11:

Interesting... I reinstalled pfBlocker (pfBlockerNG-devel 3.1.0_4) as I was not getting that value. The first sum was just after reloading. The second is with the patch, exactly copied from Marco's note 8 above.

pfBlockerNG-devel has not (yet) been updated to correct the problem. Whenever you re-install pfBlockerNG, you are overwriting any patch you have applied to it and re-introducing the problem.

Something is wrong though. The shasums you have don't match either my original or patched version:

  e3d57737a2a6ecef4e8d759ca550fa3bf9fe8232  /usr/local/pkg/pfblockerng/pfblockerng.inc.org
  f8846925d60c29a4e189b554fe74c60ce6305c6b  /usr/local/pkg/pfblockerng/pfblockerng.inc

These are from pfBlockerNG-devel, version 3.1.0_4.

I would suggest that you re-install, then carefully hand-edit pfblockerng.inc to make the one-character change. Be sure to change the close paren ')' to a single space ' '. Following that, disable and then re-enable pfBlockerNG.
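
If you'd rather script that one-character edit, here is a sketch using FreeBSD sed, assuming a stock 3.1.0_4 pfblockerng.inc (keep a backup first):

# Back up the original, then swap the explode() delimiter in place.
cp /usr/local/pkg/pfblockerng/pfblockerng.inc /usr/local/pkg/pfblockerng/pfblockerng.inc.org
sed -i '' "s/explode(')', \$result, 2)/explode(' ', \$result, 2)/" /usr/local/pkg/pfblockerng/pfblockerng.inc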

Actions #13

Updated by Marcos M over 1 year ago

Actions #14

Updated by Marcos M over 1 year ago

  • Status changed from Confirmed to Pull Request Review
  • Assignee set to Marcos M

Actions #15

Updated by Denny Page over 1 year ago

This issue is still present in the just released pfBlockerNG-devel 3.1.0_6.

Same fix as before:


*** pfblockerng.inc.org    2022-10-10 14:35:41.159055000 -0700
--- pfblockerng.inc    2022-10-10 14:34:26.122428000 -0700
*************** function pfb_filterrules() {
*** 4136,4142 ****
          foreach ($results as $result) {
              if (substr($result, 0, 1) == '@') {

!                 $r = explode(')', $result, 2);

                  // pfSense > v2.6 uses an 'ridentifier' string
                  if (strpos($result, 'ridentifier') != FALSE) {
--- 4136,4142 ----
          foreach ($results as $result) {
              if (substr($result, 0, 1) == '@') {

!                 $r = explode(' ', $result, 2);

                  // pfSense > v2.6 uses an 'ridentifier' string
                  if (strpos($result, 'ridentifier') != FALSE) {

Actions #16

Updated by OpIT GmbH over 1 year ago

I can confirm this as well. The problem still exists in 3.1.0_6.

Actions #17

Updated by Marcos M over 1 year ago

  • Status changed from Pull Request Review to Resolved

Fixes are merged and working:
3.1.0_6 for pfSense CE
3.1.0_7 for pfSense Plus

Actions #18

Updated by Marcos M about 1 year ago

  • Has duplicate Bug #13180: High CPU Utilization with pfb_filter since pfBlockerNG update to devel 3.1.0_4 added