Regression #13026
Limiters do not work (Closed)
Added by Marcos M over 2 years ago. Updated almost 2 years ago.
Description
SETUP
/tmp/rules.limiter
(no change between versions)
pipe 1 config bw 5Mb queue 1000 droptail
sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ecn
queue 1 config pipe 1 droptail
pipe 2 config bw 50Mb queue 1000 droptail
sched 2 config pipe 2 type fq_codel target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ecn
queue 2 config pipe 2 droptail
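These are plain dummynet commands; assuming they are handed to dnctl(8) (the tool referenced in #12579), the equivalent manual invocation would look roughly like this sketch (not necessarily the exact loader call pfSense makes):

dnctl pipe 1 config bw 5Mb queue 1000 droptail
dnctl sched 1 config pipe 1 type fq_codel target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ecn
dnctl queue 1 config pipe 1 droptail
dnctl pipe 2 config bw 50Mb queue 1000 droptail
dnctl sched 2 config pipe 2 type fq_codel target 5ms interval 100ms quantum 1514 limit 10240 flows 1024 ecn
dnctl queue 2 config pipe 2 droptail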
grep match /tmp/rules.debug
==================================== 22.01 ====================================
match out on { vmx1 } inet from 192.0.2.0/28 to any ridentifier 1649027215 dnqueue( 1,2) label "USER_RULE"
==================================== 22.05.a.20220403.0600 ====================================
match out on { vmx1 } inet from 192.0.2.0/28 to any ridentifier 1649027215 dnqueue( 1,2) label "id:1649027215" label "USER_RULE"
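To confirm the match rule actually made it into the loaded ruleset (not just rules.debug), the verbose rule listing can be grepped; this is just an example invocation using the same standard pfctl(8) options used later in this thread:

pfctl -vvsr | grep dnqueue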
TEST
Speedtest from client device behind LAN
RESULTS
Download and Upload limiters do not limit traffic when using a floating match rule on pfSense 22.05.a.20220403.0600; the same limiters work on 22.01.
Download and Upload results: Diagnostics / Limiter Info
See attached limiter_info.txt
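For a shell-level view of the same data, the dummynet state can be dumped directly; this is a sketch assuming the Limiter Info page is backed by these standard dnctl(8) show commands:

dnctl pipe show    # configured pipes and their bandwidth
dnctl sched show   # schedulers (FQ_CoDel here) attached to the pipes
dnctl queue show   # per-queue traffic and drop counters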
Files
limiter_info.txt (5.43 KB) - Marcos M, 04/03/2022 07:16 PM
msedge_VBWcKRijND.png (32.2 KB) - Jose Duarte, 07/08/2022 06:15 PM
msedge_YlLAFPAkkS.png (49.7 KB) - Jose Duarte, 07/08/2022 06:15 PM
Related issues
Updated by Jim Pingle over 2 years ago
- Priority changed from High to Normal
- Target version set to 2.7.0
- Plus Target Version set to 22.05
There is ongoing work here as part of the transition to purely pf-based handling of limiters. See #12579 for some detail. It's possible this is a side effect of that, or there could be something else contributing. Either way, it won't be possible to effectively test or debug this until after the other work is complete.
Updated by Jim Pingle over 2 years ago
- Blocked by Bug #12579: Utilize dnctl(8) to apply limiter changes without a filter reload added
Updated by Marcos M over 2 years ago
Tested on 22.05.a.20220429.1807 with the patch from #12579 applied. Same issue/results.
Updated by Steve Wheeler over 2 years ago
In the most recent 22.05 snapshot (22.05.a.20220505.1727), limiters now work through a NAT'd connection where they were failing before, but only in one direction.
Out works and In does not in this case:
[22.05-DEVELOPMENT][admin@4100-2.stevew.lan]/root: iperf3 -s
-----------------------------------------------------------
Server listening on 5201 (test #1)
-----------------------------------------------------------
Accepted connection from 10.23.10.10, port 48502
[  5] local 172.21.16.206 port 5201 connected to 10.23.10.10 port 48504
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  1.15 MBytes  9.64 Mbits/sec
[  5]   1.00-2.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   2.00-3.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   3.00-4.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   4.00-5.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   5.00-6.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   6.00-7.00   sec  1.15 MBytes  9.64 Mbits/sec
[  5]   7.00-8.00   sec  1.15 MBytes  9.65 Mbits/sec
[  5]   8.00-9.00   sec  1.15 MBytes  9.66 Mbits/sec
[  5]   9.00-10.00  sec  1.15 MBytes  9.64 Mbits/sec
[  5]  10.00-10.05  sec  63.6 KBytes  9.82 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.05  sec  11.6 MBytes  9.65 Mbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201 (test #2)
-----------------------------------------------------------
Accepted connection from 10.23.10.10, port 48506
[  5] local 172.21.16.206 port 5201 connected to 10.23.10.10 port 48508
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   114 MBytes   956 Mbits/sec    0   3.00 MBytes
[  5]   1.00-2.00   sec   111 MBytes   929 Mbits/sec   56   1.58 MBytes
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec    0   1.68 MBytes
[  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec    0   1.77 MBytes
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec    0   1.86 MBytes
[  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec    0   1.94 MBytes
[  5]   6.00-7.00   sec   112 MBytes   942 Mbits/sec    0   2.02 MBytes
[  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec    0   2.09 MBytes
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec    0   2.17 MBytes
[  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec    0   2.24 MBytes
[  5]  10.00-10.00  sec   163 KBytes   894 Mbits/sec    0   2.24 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.10 GBytes   942 Mbits/sec   56             sender
-----------------------------------------------------------
Server listening on 5201 (test #3)
-----------------------------------------------------------
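For reference, the client side of a test like this can exercise both traffic directions with standard iperf3 options (the server address below is the one from the output above):

iperf3 -c 172.21.16.206 -t 10       # client sends to the server
iperf3 -c 172.21.16.206 -t 10 -R    # reverse mode: server sends to the client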
[22.05-DEVELOPMENT][admin@4100-2.stevew.lan]/root: dnctl pipe show
00001:   5.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002:  10.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
[22.05-DEVELOPMENT][admin@4100-2.stevew.lan]/root: dnctl sched show
00001:   5.000 Mbit/s    0 ms burst 0
sched 1 type WF2Q+ flags 0x0 0 buckets 0 active
00002:  10.000 Mbit/s    0 ms burst 0
sched 2 type WF2Q+ flags 0x0 0 buckets 0 active
[22.05-DEVELOPMENT][admin@4100-2.stevew.lan]/root: pfctl -vvsr | grep dnpipe
@101 pass in quick on igc1 inet all flags S/SA keep state label "id:1651588187" label "USER_RULE: Allow all Limited" dnpipe(1, 2) ridentifier 1651588187
Updated by Marcos M over 2 years ago
Using floating match rules as originally described, limiters do not yet work for me in either out/in direction. I am no longer seeing any traffic hit the pipes/queues.
Updated by luckman212 over 2 years ago
It's being suggested in #9263 to apply the limiter on the LAN interface as a workaround. I suspect that wouldn't work well in a multi-WAN environment where the WANs have different bandwidth/latency characteristics, though, so I hope for a true fix.
Updated by Jim Pingle over 2 years ago
- Status changed from New to Feedback
- Assignee set to Marcos M
This needs to be re-tested now that all the new code is in.
Updated by Jim Pingle over 2 years ago
- Release Notes changed from Default to Force Exclusion
Updated by Marcos M over 2 years ago
- Status changed from Feedback to Resolved
Tested on BETA build with connections initiated from inside and outside the firewall. Limiters now work as expected.
Updated by Jose Duarte over 2 years ago
- File msedge_VBWcKRijND.png msedge_VBWcKRijND.png added
- File msedge_YlLAFPAkkS.png msedge_YlLAFPAkkS.png added
Not sure if this is fully related, but I'm having limiter issues on the final 22.05 release with a Netgate 6100.
2 limiters, each with one queue, default Tail Drop/WF2Q (see attachment).
Normal LAN rule towards the internet with In/Out pipes configured. When "Default" is selected as the gateway it works as intended; in a multi-WAN scenario where a specific WAN is chosen, the limiter no longer works.
Happy to share more details, but it should be easy to reproduce.
Updated by Danilo Zrenjanin about 2 years ago
- Status changed from Resolved to New
I can confirm that limiters work fine until you define a specific gateway in the rule where the limiters are applied. After defining a particular gateway in the rule, only the out pipe (downloading from the LAN perspective) limits the traffic, while the in pipe (uploading from the LAN perspective) does nothing.
22.05-RELEASE (amd64) built on Wed Jun 22 18:56:13 UTC 2022 FreeBSD 12.3-STABLE
Updated by Kristof Provost about 2 years ago
I've tested a recent CE snapshot and see correct limiting both up and down, with a gateway set on the floating rule.
Marcos, can you confirm that?
Updated by Steve Wheeler about 2 years ago
The originally described scenario works fine on current snapshots for me. That is: limiters applied via a floating outbound match rule on WAN, with or without a gateway set, and using either FQ_CoDel or WF2Q+.
What does not work as expected is applying the limiter via a pass rule on LAN with a gateway set, i.e. a policy routing rule. In that scenario the 'In' queue/limit is bypassed (unlimited) but the 'Out' queue/limit works as expected.
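For illustration, the failing case is roughly a policy-routing pass rule that carries the limiter, along the lines of the following pf rule (interface names, gateway, and subnet are hypothetical placeholders, not taken from this report):

pass in quick on igc1 route-to (igc0 198.51.100.1) inet from 192.168.1.0/24 to any keep state dnpipe(1, 2)

whereas the working case is the floating outbound match rule on WAN shown in the description.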
Updated by Kristof Provost about 2 years ago
I can replicate the issue Steve describes, but I'm not quite sure if it's a bug or somewhat surprising expected behaviour.
Essentially what happens is that we have two states:
all tcp 10.0.2.1:5201 <- 192.168.1.100:44607       ESTABLISHED:ESTABLISHED
   [2078351244 + 3221291264] wscale 6  [1276678361 + 2419064832] wscale 6
   age 00:00:03, expires in 24:00:00, 122313:60993 pkts, 183465201:3171644 bytes, rule 81
   id: a5627f6300000000 creatorid: a17f7a2e gateway: 1.0.2.1 origif: vtnet2
all tcp 1.0.2.78:50878 (192.168.1.100:44607) -> 10.0.2.1:5201       ESTABLISHED:ESTABLISHED
   [1276678361 + 2419064832] wscale 6  [2078351244 + 3221291264] wscale 6
   age 00:00:03, expires in 24:00:00, 122313:60993 pkts, 183465201:3171644 bytes, rule 77
   id: a6627f6300000000 creatorid: a17f7a2e gateway: 1.0.2.1 origif: vtnet0
The first state is created by the rule with the limiter, but because that rule also does route-to, the packet is passed through pf_test() a second time, which creates the second state. That second state is created by a rule which doesn't have the limiter associated, so when it matches, the limiter is not applied. It's that second state that ends up matching incoming packets, which is why the limiter doesn't get applied there.
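To see this on a live system, both states and the rules that created them can be inspected with standard pfctl(8) commands; the grep patterns below are only examples keyed to the port and rule numbers from the state output above:

pfctl -vvss | grep -A 3 ':5201'      # verbose states show the "rule NN", gateway, and origif fields
pfctl -vvsr | grep -E '^@(77|81) '   # look up the two rules the states reference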
Updated by Marcos M almost 2 years ago
- Status changed from New to Resolved
The original issue of limiters not working at all has been resolved. I've created a separate issue for the route-to problem: https://redmine.pfsense.org/issues/14039