Bug #13662 (open)

Limit traffic with weighted queues using WF2Q+

Added by Marcos M over 1 year ago. Updated over 1 year ago.

Status: New
Priority: Normal
Assignee: -
Category: Traffic Shaper (Limiters)
Target version: -
Start date: -
Due date: -
% Done: 0%
Estimated time: -
Plus Target Version: -
Release Notes: Default
Affected Version: -
Affected Architecture: -

Description

Issue

Traffic is not limited according to the weight values of WF2Q+ child queues, resulting in traffic from a higher-weighted queue being slower than traffic from a lower-weighted queue.

Goal

Prioritize known traffic over other known traffic that shares the same bandwidth limit. For example:
  • A guest subnet and a home subnet share a symmetrical (up/down) bandwidth limit of 10Mbps.
  • Traffic from home can use the full 10Mbps when there is no traffic from guest, and vice versa.
  • When both guest and home try to use 10Mbps at the same time, home gets a 75% share (i.e., it can use up to 7.5Mbps; see the expected split below).
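
While both queues stay backlogged, WF2Q+ should give each queue a share of the pipe bandwidth proportional to its weight, i.e. r_i = (w_i / Σ_j w_j) × B. With weights of 75 and 25 on a 10Mbps pipe, the expected split is therefore 75/100 × 10Mbps = 7.5Mbps versus 2.5Mbps.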

Setup

WAN_up limiter
  • Queue Management Algorithm: Tail Drop
  • Scheduler: WF2Q+
  • WAN_upQ_HIGH child queue:
    • Mask: Source addresses, /32
    • Queue Management Algorithm: Tail Drop
    • Weight: 75
  • WAN_upQ_LOW child queue:
    • Mask: Source addresses, /32
    • Queue Management Algorithm: Tail Drop
    • Weight: 25
WAN_down limiter
  • Queue Management Algorithm: Tail Drop
  • Scheduler: WF2Q+
  • WAN_downQ_HIGH child queue:
    • Mask: Destination addresses, /32
    • Queue Management Algorithm: Tail Drop
    • Weight: 75
  • WAN_downQ_LOW child queue:
    • Mask: Destination addresses, /32
    • Queue Management Algorithm: Tail Drop
    • Weight: 25
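
For reference, the GUI setup above corresponds roughly to the following dnctl commands (a sketch only: the pipe/scheduler/queue numbers are illustrative, tail drop is dummynet's default queue management, and pfSense generates its own equivalent of these when the limiters are applied):

dnctl pipe 1 config bw 10Mbit/s
dnctl sched 1 config pipe 1 type wf2q+
dnctl queue 1 config pipe 1 weight 75 mask src-ip 0xffffffff   # WAN_upQ_HIGH
dnctl queue 2 config pipe 1 weight 25 mask src-ip 0xffffffff   # WAN_upQ_LOW
dnctl pipe 2 config bw 10Mbit/s
dnctl sched 2 config pipe 2 type wf2q+
dnctl queue 3 config pipe 2 weight 75 mask dst-ip 0xffffffff   # WAN_downQ_HIGH
dnctl queue 4 config pipe 2 weight 25 mask dst-ip 0xffffffff   # WAN_downQ_LOW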

Rules

match in on $HOME inet from 10.0.1.0/24 to any dnqueue(1,3) label "QoS high" 
match in on $GUEST inet from 10.0.2.0/24 to any dnqueue(2,4) label "QoS low" 
pass in quick on $HOME $WAN_GW inet from 10.0.1.0/24 to any keep state label "QoS high" 
pass in quick on $GUEST $WAN_GW inet from 10.0.2.0/24 to any keep state label "QoS low" 
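
Per pf.conf(5), when dnqueue is given two queue numbers, the first applies to traffic in the rule's direction and the second to traffic in the reverse direction; so dnqueue(1,3) would put the home subnet's upload traffic into the weight-75 upload queue and its replies into the weight-75 download queue (assuming the queues are numbered as in the sketch above).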

Actions #1

Updated by Kristof Provost over 1 year ago

There's something very odd going on with this. I can reproduce the problem, but only if I set the pipe bandwidth sufficiently high.

At 10Mbps the traffic (mostly) matches the configured weights. At 100Mbps it does not; there the split is closer to 60/40 Mbit/s.
At 10Mbps with weights of 5 and 100 I see mostly ~460Kbit/s and 9.2Mbit/s, with regular (once every 10 seconds or so) excursions to 9Mbit/s and 2Mbit/s (so almost the inverse of what's expected).

Taken together, I suspect there's a scaling or numeric overflow issue in the scheduling code.

Actions #2

Updated by Jim Pingle over 1 year ago

Does it help to increase the queue length there? Normally we recommend setting it to >= 1000 for 100Mbit/s and even higher for faster links.
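
(For rough intuition behind that rule of thumb, assuming 1500-byte packets: 1000 slots is 1.5MB ≈ 12Mbit of buffer, which a 100Mbit/s link drains in about 120ms, whereas dummynet's default of 50 slots drains in about 6ms at that rate and so overflows on any sustained burst.)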

Actions #3

Updated by Marcos M over 1 year ago

The bandwidth limits I have are 140 up / 9 down, and the issue persists there even with queue lengths of 1400/90.

Actions #4

Updated by Kristof Provost over 1 year ago

Increasing the queue lengths of the individual queues appears to help. I tested with a queue length of 5000 at 100Mbps. Increasing the queue length on the scheduler itself does not affect things, but it is required for the GUI to set net.inet.ip.dummynet.pipe_slot_limit.

That is, I think there's a minor issue in the GUI: it's possible to set a queue size of 5000 on the (child) queue without setting it on the scheduler, in which case the dummynet configuration fails to load with no visible indication in the GUI. WF2Q+ itself appears to work as expected.
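
A sketch of that failure mode from the shell (assumptions: the sysctl's default cap is 100 slots, and the queue/pipe numbers match the earlier sketch): a child-queue length above net.inet.ip.dummynet.pipe_slot_limit is rejected when the configuration loads, so the cap has to be raised first:

sysctl net.inet.ip.dummynet.pipe_slot_limit=5000   # raise the global per-queue slot cap
dnctl queue 1 config pipe 1 weight 75 queue 5000 mask src-ip 0xffffffff

Without the sysctl bump, the queue 5000 line fails; in the GUI case that failure is silent.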

Actions #5

Updated by Marcos M over 1 year ago

Setting the queue length on the child queue AND the parent scheduler worked! (Also have to keep bug #13158 in mind.)
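
For anyone verifying a similar setup from the shell, the applied dummynet state (bandwidths, scheduler types, per-queue weights and slot counts) can be inspected with commands along these lines (illustrative; see ipfw(8) for the dummynet syntax dnctl shares):

dnctl pipe show
dnctl sched show
dnctl queue show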
