Regression #13150: Captive Portal not applying per-user bandwidths
Status: Closed
% Done: 100%
Description
Enabling 'Per-user bandwidth restriction' in the captive portal and setting limits does not apply those limits to the created dummynet pipes.
For example setting:
<peruserbw></peruserbw>
<bwdefaultdn>5000</bwdefaultdn>
<bwdefaultup>1000</bwdefaultup>
Results in:
[22.05-BETA][admin@plusdev.stevew.lan]/root: pfctl -a cpzoneid_2_auth/192.168.20.10_32 -se
ether pass in quick proto 0x0800 from 3a:d2:8d:84:6e:56 l3 from 192.168.20.10 to any tag cpzoneid_2_auth dnpipe 2002
ether pass out quick proto 0x0800 to 3a:d2:8d:84:6e:56 l3 from any to 192.168.20.10 tag cpzoneid_2_auth dnpipe 2003
[22.05-BETA][admin@plusdev.stevew.lan]/root: dnctl pipe show
00001:   5.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002:  10.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
02002: unlimited         0 ms burst 0
q133074  100 sl. 0 flows (1 buckets) sched 67538 weight 0 lmax 0 pri 0 droptail
 sched 67538 type FIFO flags 0x0 16 buckets 0 active
02003: unlimited         0 ms burst 0
q133075  100 sl. 0 flows (1 buckets) sched 67539 weight 0 lmax 0 pri 0 droptail
 sched 67539 type FIFO flags 0x0 16 buckets 0 active
Bandwidth is still 'unlimited'.
Tested: 22.05.b.20220511.0600
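For reference, the expected result would be roughly the equivalent of configuring the per-client pipes by hand with dnctl (a sketch; the pipe numbers are taken from the rules above, and the Kbit/s units for the config values are an assumption):

# hypothetical manual equivalent of what the portal should apply
dnctl pipe 2002 config bw 1000Kbit/s    # per-user upload (bwdefaultup)
dnctl pipe 2003 config bw 5000Kbit/s    # per-user download (bwdefaultdn)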
Related issues
Updated by Viktor Gurov over 2 years ago
- Related to Todo #13100: Transition Captive Portal from IPFW to PF added
Updated by Viktor Gurov over 2 years ago
- Assignee set to Viktor Gurov
- Release Notes changed from Default to Force Exclusion
- Affected Version set to 2.7.0
Updated by Viktor Gurov over 2 years ago
Updated by Jim Pingle over 2 years ago
- Status changed from New to Pull Request Review
Updated by Viktor Gurov over 2 years ago
- Status changed from Pull Request Review to Feedback
Updated by Steve Wheeler over 2 years ago
- Status changed from Feedback to Confirmed
With that patch the pipes are created correctly:
[22.05-BETA][admin@plusdev.stevew.lan]/root: pfctl -a cpzoneid_2_auth/192.168.20.10_32 -se
ether pass in quick proto 0x0800 from 3a:d2:8d:84:6e:56 l3 from 192.168.20.10 to any tag cpzoneid_2_auth dnpipe 2002
ether pass out quick proto 0x0800 to 3a:d2:8d:84:6e:56 l3 from any to 192.168.20.10 tag cpzoneid_2_auth dnpipe 2003
[22.05-BETA][admin@plusdev.stevew.lan]/root: dnctl pipe show
00001:   5.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002:  10.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
02002:   2.500 Mbit/s    0 ms burst 0
q133074  100 sl. 0 flows (1 buckets) sched 67538 weight 0 lmax 0 pri 0 droptail
 sched 67538 type FIFO flags 0x0 16 buckets 0 active
02003:   5.000 Mbit/s    0 ms burst 0
q133075  100 sl. 0 flows (1 buckets) sched 67539 weight 0 lmax 0 pri 0 droptail
 sched 67539 type FIFO flags 0x0 16 buckets 0 active
However, traffic is only put into the pipe in the upload direction. A download test shows unlimited bandwidth and nothing in the pipe:
02002:   2.500 Mbit/s    0 ms burst 0
q133074  100 sl. 0 flows (1 buckets) sched 67538 weight 0 lmax 0 pri 0 droptail
 sched 67538 type FIFO flags 0x0 16 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0     3895   202540 48 2496   0
02003:   5.000 Mbit/s    0 ms burst 0
q133075  100 sl. 0 flows (1 buckets) sched 67539 weight 0 lmax 0 pri 0 droptail
 sched 67539 type FIFO flags 0x0 16 buckets 0 active
Updated by Viktor Gurov over 2 years ago
- Assignee changed from Viktor Gurov to Kristof Provost
Looks like a dnpipe issue.
Maybe we should use L3-like dnpipe syntax, like:
ether pass in quick proto 0x0800 from 3a:d2:8d:84:6e:56 l3 from 192.168.20.10 to any tag cpzoneid_2_auth dnpipe (2002, 2003)
instead of
ether pass in quick proto 0x0800 from 3a:d2:8d:84:6e:56 l3 from 192.168.20.10 to any tag cpzoneid_2_auth dnpipe 2002
ether pass out quick proto 0x0800 to 3a:d2:8d:84:6e:56 l3 from any to 192.168.20.10 tag cpzoneid_2_auth dnpipe 2003
Updated by Kristof Provost over 2 years ago
No, that won't work on ethernet rules. The 'dnpipe (1, 2)' syntax tells pf to apply pipe 1 to forward traffic and pipe 2 to replies.
The ethernet rules are stateless, so there's never a link between forward and reply traffic.
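For contrast, the two-pipe form is meaningful on a stateful L3 rule, along these lines (a sketch; the interface name is a placeholder):

# stateful L3 rule: pf applies pipe 2002 to the forward direction of the
# state and pipe 2003 to replies matched against that same state
pass in quick on igb1 from 192.168.20.10 to any dnpipe (2002, 2003)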
I'd recommend re-testing with the fix to #13148 included. That might change things.
Updated by Viktor Gurov over 2 years ago
Kristof Provost wrote in #note-8:
No, that won't work on ethernet rules. The 'dnpipe (1, 2)' syntax tells pf to apply pipe 1 to forward traffic and pipe 2 to replies.
The ethernet rules are stateless, so there's never a link between forward and reply traffic.
I'd recommend re-testing with the fix to #13148 included. That might change things.
That fix does not resolve the issue. I'll move dnpipe to L3 anchors to fix it (see the sketch below).
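Moving the shaping to L3 would mean rules along these lines (a hypothetical sketch; it matches on the tag set by the captive portal ether rules using pf's tagged keyword):

# hypothetical L3 rules carrying the dnpipe instead of the ether rules
pass in quick from 192.168.20.10 to any tagged cpzoneid_2_auth dnpipe 2002
pass out quick from any to 192.168.20.10 tagged cpzoneid_2_auth dnpipe 2003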
Updated by Viktor Gurov over 2 years ago
- Status changed from Confirmed to In Progress
Updated by Kristof Provost over 2 years ago
Thinking about this a bit more, it's expected that
ether pass out quick proto 0x0800 to 3a:d2:8d:84:6e:56 l3 from any to 192.168.20.10 tag cpzoneid_2_auth dnpipe 2003
doesn't delay this traffic. The current ethernet filter code does not send packets to dummynet. It only marks them, and relies on the L3 pf code to actually send them to dummynet. That works well in the input direction, where we process ethernet rules first and L3 rules later, but not in the output direction, where that order is inverted (so L3 rules first, and only then L2/ethernet rules).
That means that with the current implementation the output traffic must be delayed by L3 rules.
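To summarize the ordering described above (an illustrative note, not program output):

# inbound:  ether (L2) rules, then L3 rules  -> dnpipe mark is set in time
# outbound: L3 rules, then ether (L2) rules  -> dnpipe mark is set too late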
Updated by Kristof Provost over 2 years ago
This should be fixed by https://gitlab.netgate.com/pfSense/FreeBSD-src/-/merge_requests/87 , which changed pf ethernet rules to handle dummynet directly.
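With that change in place, re-testing follows the same checks used earlier (same zone and client assumed): run a download test against the client, then confirm the download pipe now accumulates traffic as well:

dnctl pipe show
pfctl -a cpzoneid_2_auth/192.168.20.10_32 -se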
Updated by Viktor Gurov over 2 years ago
Upload/download bandwidth restrictions work as expected.
Updated by Viktor Gurov over 2 years ago
- Status changed from In Progress to Resolved
Updated by Marcos M about 2 years ago
There still seems to be an issue here when the bandwidth limit values come from RADIUS attributes, e.g. WISPr-Bandwidth-Max-Up or pfSense-Bandwidth-Max-Down (see the sketch below).
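For concreteness, reply attributes along these lines were involved (a hypothetical FreeRADIUS users-file sketch; the user name, password, and attribute units are assumptions, chosen to match the 1/10 Mbit/s pipes shown below):

# hypothetical FreeRADIUS users entry returning the bandwidth attributes
testuser Cleartext-Password := "testpass"
    WISPr-Bandwidth-Max-Up := 1000000,
    pfSense-Bandwidth-Max-Down := 10000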
I can see the attributes sent correctly in a pcap, but the limits are only applied after re-saving the CP zone configuration. Tested on 22.05:
Reboot firewall and authenticate a new client:
02000: unlimited         0 ms burst 0
q133072  100 sl. 0 flows (1 buckets) sched 67536 weight 0 lmax 0 pri 0 droptail
 sched 67536 type FIFO flags 0x0 16 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 ip           0.0.0.0/0             0.0.0.0/0        3      311  0    0   0
02001: unlimited         0 ms burst 0
q133073  100 sl. 0 flows (1 buckets) sched 67537 weight 0 lmax 0 pri 0 droptail
 sched 67537 type FIFO flags 0x0 16 buckets 1 active
  0 ip           0.0.0.0/0             0.0.0.0/0        3      497  0    0   0
Re-save CP zone configuration:
02000:   1.000 Mbit/s    0 ms burst 0
q133072  100 sl. 0 flows (1 buckets) sched 67536 weight 0 lmax 0 pri 0 droptail
 sched 67536 type FIFO flags 0x0 16 buckets 0 active
02001:  10.000 Mbit/s    0 ms burst 0
q133073  100 sl. 0 flows (1 buckets) sched 67537 weight 0 lmax 0 pri 0 droptail
 sched 67537 type FIFO flags 0x0 16 buckets 0 active
The client bandwidth is indeed limited at this point, but after removing the client from Status / Captive Portal and re-authenticating, the issue re-occurs.