Regression #13026: Limiters do not work
Status: Closed
% Done: 0%
Description
SETUP
/tmp/rules.limiter
(no change between versions)
pipe 1 config bw 5Mb queue 1000 droptail
sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ecn
queue 1 config pipe 1 droptail
pipe 2 config bw 50Mb queue 1000 droptail
sched 2 config pipe 2 type fq_codel target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ecn
queue 2 config pipe 2 droptail
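For reference, the same definitions can also be applied by hand with dnctl(8), which accepts the same pipe/sched/queue syntax as the generated rules file. The following is only a sketch of the first (5 Mbit/s) limiter above applied interactively from a root shell, not a description of how pfSense itself loads the file:

# Hedged sketch: manually configure the first limiter defined above
dnctl pipe 1 config bw 5Mb queue 1000 droptail
dnctl sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ecn
dnctl queue 1 config pipe 1 droptail
# Confirm the objects were created
dnctl pipe show
dnctl sched show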
grep match /tmp/rules.debug
==================================== 22.01 ====================================
match out on { vmx1 } inet from 192.0.2.0/28 to any ridentifier 1649027215 dnqueue( 1,2) label "USER_RULE"
==================================== 22.05.a.20220403.0600 ====================================
match out on { vmx1 } inet from 192.0.2.0/28 to any ridentifier 1649027215 dnqueue( 1,2) label "id:1649027215" label "USER_RULE"
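As a rough way to check, on either version, whether the match rule was loaded and whether traffic is actually being fed into the limiters, the commands below (the same ones used later in this ticket) can be run from a shell on the firewall. This is a sketch of a check, not an authoritative test procedure:

# Show loaded pf rules that reference dummynet queues or pipes
pfctl -vvsr | grep -E 'dnqueue|dnpipe'
# Show limiter state; active flows and counters indicate traffic is hitting the limiters
dnctl pipe show
dnctl queue show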
TEST
Speedtest from client device behind LAN
RESULTS
Download and Upload limiters do not limit traffic when using a floating match rule on pfSense 22.05.a.20220403.0600; limiters work on 22.01.
Download and Upload results from Diagnostics / Limiter Info: see the attached limiter_info.txt.
Updated by Jim Pingle 3 months ago
- Priority changed from High to Normal
- Target version set to 2.7.0
- Plus Target Version set to 22.05
There is ongoing work here as part of the transition to purely pf-based handling of these things. See #12579 for some detail. It's possible this is a side effect of that, or there could be something else contributing. Either way, it won't be possible to effectively test or debug this until after the other work is complete.
Updated by Jim Pingle 3 months ago
- Blocked by Bug #12579: Utilize dnctl(8) to apply limiter changes without a filter reload added
Updated by Marcos Mendoza about 2 months ago
Tested on 22.05.a.20220429.1807 with patch from #12579 applied. Same issue/results.
Updated by Steve Wheeler about 2 months ago
In the most recent 22.05 snapshot (22.05.a.20220505.1727), limiters now work through a NAT'd connection where they were failing before, but only in one direction. In this case traffic is limited Out but not In:
[22.05-DEVELOPMENT][admin@4100-2.stevew.lan]/root: iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 10.23.10.10, port 48502
[  5] local 172.21.16.206 port 5201 connected to 10.23.10.10 port 48504
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.15 MBytes  9.64 Mbits/sec
[  5]   1.00-2.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   2.00-3.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   3.00-4.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   4.00-5.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   5.00-6.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   6.00-7.00   sec  1.15 MBytes  9.64 Mbits/sec
[  5]   7.00-8.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   8.00-9.00   sec  1.15 MBytes  9.66 Mbits/sec
[  5]   9.00-10.00  sec  1.15 MBytes  9.64 Mbits/sec
[  5]  10.00-10.05  sec  63.6 KBytes  9.82 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.05  sec  11.6 MBytes  9.65 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
Accepted connection from 10.23.10.10, port 48506
[  5] local 172.21.16.206 port 5201 connected to 10.23.10.10 port 48508
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   114 MBytes   956 Mbits/sec    0   3.00 MBytes
[  5]   1.00-2.00   sec   111 MBytes   929 Mbits/sec   56   1.58 MBytes
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec    0   1.68 MBytes
[  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec    0   1.77 MBytes
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec    0   1.86 MBytes
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec    0   1.94 MBytes
[  5]   6.00-7.00   sec   112 MBytes   942 Mbits/sec    0   2.02 MBytes
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec    0   2.09 MBytes
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec    0   2.17 MBytes
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec    0   2.24 MBytes
[  5]  10.00-10.00  sec   163 KBytes   894 Mbits/sec    0   2.24 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec   56             sender
-----------------------------------------------------------
Server listening on 5201 (test #3)
-----------------------------------------------------------
[22.05-DEVELOPMENT][admin@4100-2.stevew.lan]/root: dnctl pipe show
00001:   5.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002:  10.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
[22.05-DEVELOPMENT][admin@4100-2.stevew.lan]/root: dnctl sched show
00001:   5.000 Mbit/s    0 ms burst 0
 sched 1 type WF2Q+ flags 0x0 0 buckets 0 active
00002:  10.000 Mbit/s    0 ms burst 0
 sched 2 type WF2Q+ flags 0x0 0 buckets 0 active
[22.05-DEVELOPMENT][admin@4100-2.stevew.lan]/root: pfctl -vvsr | grep dnpipe
@101 pass in quick on igc1 inet all flags S/SA keep state label "id:1651588187" label "USER_RULE: Allow all Limited" dnpipe(1, 2) ridentifier 1651588187
Updated by Marcos Mendoza about 2 months ago
Using floating match rules as originally described, limiters still do not work for me in either direction (out or in). I am no longer seeing any traffic hit the pipes/queues.
Updated by luckman212 about 2 months ago
It's being suggested in #9263 to apply the limiter on the LAN interface as a workaround. I guess that wouldn't work well in a multi-WAN environment where the WANs have different bandwidth/latency characteristics though, so I hope for a true fix.
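For illustration only, the two approaches look roughly like this in pf rule terms, reusing the interface names, addresses, and pipe/queue numbers already shown in this ticket (they are examples from this report, not recommended settings):

# Original approach: floating match rule on the WAN (vmx1) assigning traffic to the limiter queues
match out on { vmx1 } inet from 192.0.2.0/28 to any dnqueue(1, 2)
# Suggested workaround from #9263: per-interface pass rule on the LAN (igc1) with the limiters attached
pass in quick on igc1 inet all flags S/SA keep state dnpipe(1, 2)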
Updated by Jim Pingle about 2 months ago
- Status changed from New to Feedback
- Assignee set to Marcos Mendoza
This needs to be re-tested now that all the new code is in.
Updated by Jim Pingle about 2 months ago
- Release Notes changed from Default to Force Exclusion
Updated by Marcos Mendoza about 1 month ago
- Status changed from Feedback to Resolved
Tested on a BETA build with connections initiated from inside and outside the firewall. Limiters now work as expected.