Bug #13662
Setting a limiter queue length greater than 100 prevents the limiter from loading
Status: Open · Done: 100%
Description
Issue
Traffic is not limited based on the weight value within WF2Q+ queues, resulting in higher-weighted queue traffic being slower than lower-weighted queue traffic.
Goal
Prioritize known traffic over other known traffic which shares the same bandwidth limit. For example:

- A `guest` and `home` subnet should share a symmetrical (up/down) bandwidth limit of 10Mbps.
- Traffic from `home` can use all 10Mbps when no traffic exists from `guest`, and vice versa.
- When both `guest` and `home` try to use 10Mbps at the same time, `home` has a 75% share (i.e. can use up to 7.5Mbps).
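The expected split falls out of simple proportional arithmetic. A minimal sketch (illustrative helper only, not pfSense or dummynet code; the function name and structure are assumptions):

```python
# Hypothetical helper illustrating the bandwidth split WF2Q+ should
# converge to when all queues are saturated: each queue gets
# link * weight / sum(weights).

def expected_share(weights: dict[str, int], link_mbps: float) -> dict[str, float]:
    """Split link bandwidth proportionally to queue weights (all queues busy)."""
    total = sum(weights.values())
    return {name: link_mbps * w / total for name, w in weights.items()}

shares = expected_share({"home": 75, "guest": 25}, link_mbps=10.0)
print(shares)  # {'home': 7.5, 'guest': 2.5}
```

With only one queue active, WF2Q+ is work-conserving, so that queue may use the full 10Mbps regardless of its weight.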
Setup
WAN_up limiter:
- Queue Management Algorithm: Tail Drop
- Scheduler: WF2Q+

WAN_upQ_HIGH child queue:
- Mask: Source addresses, /32
- Queue Management Algorithm: Tail Drop
- Weight: 75

WAN_upQ_LOW child queue:
- Mask: Source addresses, /32
- Queue Management Algorithm: Tail Drop
- Weight: 25

WAN_down limiter:
- Queue Management Algorithm: Tail Drop
- Scheduler: WF2Q+

WAN_downQ_HIGH child queue:
- Mask: Destination addresses, /32
- Queue Management Algorithm: Tail Drop
- Weight: 75

WAN_downQ_LOW child queue:
- Mask: Destination addresses, /32
- Queue Management Algorithm: Tail Drop
- Weight: 25
Rules

```
match in on $HOME inet from 10.0.1.0/24 to any dnqueue(1,3) label "QoS high"
match in on $GUEST inet from 10.0.2.0/24 to any dnqueue(2,4) label "QoS low"
pass in quick on $HOME $WAN_GW inet from 10.0.1.0/24 to any keep state label "QoS high"
pass in quick on $GUEST $WAN_GW inet from 10.0.2.0/24 to any keep state label "QoS low"
```
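For reference, a rough command-line equivalent of the upload side of this setup, using `dnctl` (pfSense's front end to dummynet). The pipe/scheduler/queue numbers and the 10Mbit/s rate are illustrative assumptions, not values taken from the report:

```shell
# Illustrative sketch only -- numbering and rate are assumptions.
# WAN_up limiter: 10Mbit/s, WF2Q+ scheduler, tail drop (the default AQM)
dnctl pipe 1 config bw 10Mbit/s
dnctl sched 1 config type wf2q+

# Child queues: one flow per source /32, weights 75 (home) and 25 (guest)
dnctl queue 1 config sched 1 weight 75 mask src-ip 0xffffffff
dnctl queue 2 config sched 1 weight 25 mask src-ip 0xffffffff
```

The download side would mirror this with `mask dst-ip 0xffffffff` on a second scheduler.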
Updated by Kristof Provost about 2 years ago
There's something very odd going on with this. I can reproduce the problem, but only if I set the pipe bandwidth sufficiently high.
At 10Mbps the traffic (mostly) matches the configured weights. At 100Mbps it does not, there it's closer to 60/40 Mbit/s.
At 10Mbps with 5 and 100 weights I see mostly ~460Kbit/s and 9.2Mbit/s, with regular (once every 10 seconds or so) excursions to 9Mbit/s and 2Mbit/s (so almost the inverse of what's expected).
Taken together I suspect there's a scaling or number overflow issue in the scheduling code.
Updated by Jim Pingle about 2 years ago
Does it help to increase the queue length there? Normally we recommend setting it to >= 1000 for 100Mbit/s and even higher for faster links.
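Dummynet caps individual queue lengths at the `net.inet.ip.dummynet.pipe_slot_limit` sysctl, whose default is 100 slots. A sketch of checking and raising it from a shell (the value 1000 follows the rule of thumb above and is only an example):

```shell
# Show the current per-queue slot cap (default 100):
sysctl net.inet.ip.dummynet.pipe_slot_limit

# Raise it so queues longer than 100 slots can be configured
# (example value; size it to the link speed):
sysctl net.inet.ip.dummynet.pipe_slot_limit=1000
```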
Updated by Marcos M about 2 years ago
The bandwidth limits I have are 140Mbps up / 9Mbps down, and the issue persists there even with queue lengths of 1400/90.
Updated by Kristof Provost about 2 years ago
Increasing the queue lengths of the individual queues appears to help; I tested with a queue length of 5000 at 100Mbps. Increasing the queue length on the scheduler itself does not affect the traffic shaping, but it is required for the GUI to set net.inet.ip.dummynet.pipe_slot_limit.
That is, I think there's a minor issue in the GUI: it's possible to set a queue size of 5000 on the (child) queue without setting it on the scheduler, in which case the dummynet configuration fails to load with no visible indication in the GUI. WF2Q+ itself appears to work as expected.
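Since the failure described above is silent in the GUI, one way to confirm whether the configuration actually loaded is to inspect dummynet state directly (a diagnostic sketch; if the load failed, the scheduler and child queues will simply be missing from the output):

```shell
# List configured dummynet schedulers and queues via dnctl:
dnctl sched show
dnctl queue show
```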
Updated by Marcos M about 2 years ago
Setting the queue length on both the child queue AND the parent scheduler worked! (Also need to keep bug #13158 in mind.)
Updated by Marcos M 10 days ago
- Subject changed from Limit traffic with weighted queues using WF2Q+ to Configuring a limiter queue length does not set ``net.inet.ip.dummynet.pipe_slot_limit``
- Status changed from New to In Progress
- Assignee set to Marcos M
- Target version set to 2.8.0
- Plus Target Version set to 25.03
Updated by Marcos M 10 days ago
- Status changed from In Progress to Feedback
- % Done changed from 0 to 100
Applied in changeset f79dfc8c6b8d51a7781f9fe886eb69e5bd9dde62.
Updated by Jim Pingle 5 days ago
- Subject changed from Setting a limiterqueue length greater than 100 prevents the limiter from loading to Setting a limiter queue length greater than 100 prevents the limiter from loading