Bug #9024

Ping packet loss under load when using limiters

Added by Dave taht about 6 years ago. Updated almost 2 years ago.

Status: Closed
Priority: Normal
Assignee: -
Category: Traffic Shaper (Limiters)
Target version: -
Start date:
Due date:
% Done: 0%
Estimated time:
Plus Target Version:
Release Notes:
Affected Version: 2.4
Affected Architecture:
Description

I think we have confirmed in https://forum.netgate.com/topic/112527/playing-with-fq_codel-in-2-4/595 that an issue still exists with this.

It's a very long thread.

The bug looks similar, but not identical, to https://redmine.pfsense.org/issues/4326

Actions #1

Updated by Anonymous about 6 years ago

I saw this when only TCP/UDP was being put into the limiter. As soon as I changed it to "all traffic" the loss went away.

Actions #2

Updated by Dave taht about 6 years ago

OK, so we just have a configuration guideline then: "Always put all traffic through the limiter." Do you have a conf that works for https://forum.netgate.com/topic/112527/playing-with-fq_codel-in-2-4/570 ?
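
For reference, that guideline amounts to the floating match rules covering all IPv4 protocols rather than only TCP/UDP. A rough hand-written sketch of the kind of rule pfSense generates in /tmp/rules.debug; the interface name em0 and the queue numbers 1/2 are placeholders, not taken from this thread, and the exact in/out queue ordering depends on how the GUI rule is filled in:

  # "all traffic" variant: every IPv4 protocol goes through the limiter child queues
  match out on em0 inet all dnqueue (2, 1)
  # narrower TCP/UDP-only variant that showed the loss in comment #1
  match out on em0 inet proto { tcp udp } from any to any dnqueue (2, 1)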

Actions #3

Updated by Josh Chilcott about 6 years ago

The conf attached to the example https://forum.netgate.com/topic/112527/playing-with-fq_codel-in-2-4/570 shows that the match rules include all protocols for IPv4. The issue presents itself when match out limiter rules are used on interfaces creating NAT states (ex: WAN). Loading the out limiter to capacity using a match rule on WAN, and testing for roughly 60 seconds (including ramp up and ramp down), showed an 82% loss of pings to hosts on the WAN side. During heavy saturation of the limiter, almost all pings are lost. Disabling outbound NAT remedies the situation. Creating in/out limiters on just the LAN side also remedies the situation; this appears to be the most performant workaround for a single-WAN, single-LAN setup where traffic originates on both the WAN and the LAN side.
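
A minimal sketch of what that LAN-side workaround looks like as pf match rules, assuming em1 is the LAN interface and queues 1/2 are the download/upload limiter child queues (interface name, queue numbers, and the in/out ordering are placeholders, not from the attached conf):

  # apply the limiter on LAN instead of WAN, so the NAT'd states on WAN are left alone
  match in  on em1 inet all dnqueue (2, 1)   # LAN in  = traffic heading upstream
  match out on em1 inet all dnqueue (1, 2)   # LAN out = traffic heading to the LAN host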

Actions #4

Updated by Steven Brown about 6 years ago

I can confirm this bug. My testing seemed to show that the behaviour was the same no matter which scheduler I assigned to the limiter when it was applied using floating rules. With a LAN interface firewall rule, the pings were no longer dropped when fq_codel was assigned.

I had the rules assigned for "all traffic" so this did not fix the issue for me.

Actions #5

Updated by Josh Chilcott about 6 years ago

Using limiters on an interface with outbound NAT enabled causes all ICMP echo reply packets coming back into WAN to be dropped when the limiter is loaded with flows. I can reproduce this issue with the following configuration:

  • Create limiters (any scheduler): one limiter for out and one limiter for in.
  • Create a single child queue for the out limiter and one for the in limiter.
  • Add a floating match IPv4 any rule on WAN Out, using the out limiter child queue for in and the in limiter child queue for out.
  • Add a floating match IPv4 any rule on WAN In, using the in limiter child queue for in and the out limiter child queue for out.
  • Load the limiter with traffic. (Most recently I've been using a netperf netserver v2.6.0 on the WAN side and a Flent client on the LAN side running the RRUL test.)
  • Start a constant ping from the client to the server during the RRUL test.

Both the flent.gz output and the constant ping will show a high rate of ICMP packets being dropped. If a separate floating match rule is created for ICMP, then packets are not dropped. Pushing fewer pps through pfSense seems to result in fewer dropped echo replies.
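
For anyone reproducing this, the traffic pattern above corresponds roughly to the commands below. The address 198.51.100.7 is a placeholder for the WAN-side host (not from this thread), and the flent/netperf options are just the standard ones:

  # WAN-side host: start the netperf responder
  netserver

  # LAN-side client: saturate the limiters with the RRUL test for ~60 seconds
  flent rrul -l 60 -H 198.51.100.7

  # LAN-side client, second terminal: keep a constant ping running through the loaded limiter
  ping 198.51.100.7

The echo replies from that ping come back in through WAN, which is where the drops described above show up.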

Actions #6

Updated by Dave taht about 6 years ago

I would try to update this bug to make it more specific to limiters, but I don't seem to have privs.

Actions #7

Updated by Patrik Hildingsson almost 6 years ago

I just wanted to chime in that I see the exact same behaviour on my setup.
Is there any progress on the issue?

Actions #8

Updated by Jim Pingle over 5 years ago

  • Category set to Traffic Shaper (ALTQ)
Actions #9

Updated by Jim Pingle over 5 years ago

  • Category changed from Traffic Shaper (ALTQ) to Traffic Shaper (Limiters)
Actions #10

Updated by Joshua Babb over 4 years ago

I can replicate this issue as well. I have outbound NAT set up and tried setting up a traffic limiter + fq_codel; I see major packet loss on heavy download load, and outbound traffic has at least 50% packet loss.

Actions #11

Updated by Joshua Babb over 4 years ago

Well, I turned off the OpenVPN client and it worked; the traffic shaper is working normally. For some reason OpenVPN is causing an issue.

Actions #12

Updated by Thomas Pilgaard over 4 years ago

The problem also seems to be related to the download limiter only, as traceroute displays correctly if fq_codel is applied to the upstream limiter only on WAN. Tested on 2.5.0.a.20200919.0050.
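
A rough shell-level sketch of that partial workaround (fq_codel on the upload path only), assuming dnctl accepts the same pipe/sched grammar as ipfw(8) and that pipe 1 is the upload limiter; the numbers are placeholders, and on recent versions this is normally set from the limiter's scheduler option in the GUI instead:

  # attach fq_codel to the upload pipe's scheduler only; the download pipe keeps its default scheduler
  dnctl sched 1 config pipe 1 type fq_codel
  # inspect the resulting pipes and schedulers
  dnctl pipe show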

Actions #13

Updated by luckman212 over 2 years ago

I believe I'm hitting this bug now on 22.05 snapshots. Is there any workaround or status update on this one? I tried following the simple setup instructions from https://docs.netgate.com/pfsense/en/latest/recipes/codel-limiters.html, but all that happens is that when I run the bufferbloat test, packet loss spikes to 10-20% and my WAN1 gets marked as "Offline" (1 Gbit FIOS connection).

more discussion: https://forum.netgate.com/topic/171158/qos-traffic-shaping-limiters-fq_codel-on-22-0x
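
When retesting that recipe, it may help to confirm from a shell (or Diagnostics > Limiter Info) that the limiters are actually in place, and to watch ICMP states while the bufferbloat test runs. A rough sketch with no thread-specific values:

  # list the dummynet pipes and child queues created by the GUI limiters
  dnctl pipe show
  dnctl queue show

  # watch ICMP states through the firewall during the test
  pfctl -ss | grep -i icmp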

Actions #15

Updated by Marcos M over 2 years ago

This seems to be resolved with 22.05. Tested with an iperf3 client behind the firewall and an iperf3 server a couple of hops from the WAN. Running iperf3 -t 10 -c 198.51.100.7 (and reversed) introduced latency, but no packets were dropped (different pipe queue lengths/bandwidths were tested):

--- 198.51.100.7 ping statistics ---
305 packets transmitted, 305 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.359/31.768/110.134/39.079 ms
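
For reference, that verification amounts to roughly the following from a client behind the firewall, exercising the upload and download directions separately while a ping runs in parallel (same documentation address as above):

  # saturate the limiter in the upload direction, then the download direction (-R = reverse)
  iperf3 -t 10 -c 198.51.100.7
  iperf3 -t 10 -c 198.51.100.7 -R

  # second terminal: leave a ping running and check the loss statistics afterwards
  ping 198.51.100.7
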
Actions #16

Updated by Marcos M over 2 years ago

  • Subject changed from nat + a limiter + fq_codel dropping near all ping traffic under load to Ping packet loss under load when using limiters
  • Status changed from New to Feedback
Actions #17

Updated by Jim Pingle almost 2 years ago

  • Status changed from Feedback to Closed
  • Start date deleted (10/07/2018)