Bug #1734

Traffic Shaper Issues in recent builds

Added by Chris Mirchandani over 13 years ago. Updated over 13 years ago.

Status: Closed
Priority: High
Assignee: -
Category: Traffic Shaper (ALTQ)
Target version:
Start date: 07/31/2011
Due date:
% Done: 0%
Estimated time:
Plus Target Version:
Release Notes:
Affected Version: 2.0
Affected Architecture: amd64

Description

I am using the AMD64 builds of pfSense 2.0 RC3. I have the same build running on dedicated hardware and in a VM, with CARP configured on both. The dedicated hardware was originally installed with an AMD64 RC2 build; when I set up CARP, I installed the VM from pfSense-2.0-RC3-amd64-20110708-1843.iso and updated the dedicated hardware to match the VM's build. I have updated four times since the install. The current build on both is 2.0-RC3 (amd64) built on Fri Jul 29 22:14:50 EDT 2011, and the previous version I tested was 2.0-RC3 built on Sun Jul 24 04:39:44 EDT 2011.

I noticed that my transfers were slow on the build I was running, which was a build between pfSense-2.0-RC3-amd64-20110708-1843 and pfSense-Full-Update-2.0-RC3-amd64-20110724-0242. Connections were limited to 5Mbps (512KBps). The limitation was the same for uploads and downloads between all interfaces and the WAN, as well as for interface-to-interface and VLAN-to-VLAN traffic. My WAN connection is a physical link to my gateway device and I have a 50/5 internet connection. My other interfaces are VLANs carried over two GigE connections to a Cisco switch, teamed with LACP. I also have one interface that is a direct GigE connection to the Cisco switch; this is my management interface. The transfer limitation was the same no matter which interface I was on and was the same on both the primary and backup pfSense devices.

My first step was to upgrade to pfSense-Full-Update-2.0-RC3-amd64-20110724-0242, but the issue remained. I could find no obvious reason for the limitation, but it only occurred when traffic had to pass through pfSense to reach its destination. When I removed the traffic shaper entries, performance returned to expected levels: I could pass 1.2Gbps (150MBps) or more through the teamed connection between my VLANs while downloading at 50Mbps from my WAN connection.

I updated to 2.0-RC3 (amd64) built on Fri Jul 29 22:14:50 EDT 2011. The performance issue is still there, but now I am limited to 10Mbps rather than 5Mbps.

I am not having this issue with build 2.0-RC3 (amd64) built on Fri Jul 8 02:49:42 EDT 2011, which I have running on hardware at home. I used the same options in the traffic shaper wizard on both devices. That build still uses qDefault as the default queue, set to a priority of 3 on all interfaces. The only other obvious difference is that this firewall is not set up to use Outbound NAT or CARP Virtual IPs.

The issues listed below, with the conflicting queue priorities and the traffic shaper entries that could not be applied, relate to the previous builds I tested.

I figured I might just need to reset my traffic shaper entries, so I ran the traffic shaper wizard (Single WAN multi LAN), and I got errors.

1) After the wizard creates the entries, I find that all interfaces other than the WAN interface have a qLink queue rather than a qDefault queue. The qLink queue is set as the default queue, but it is given priority 1, which conflicts with the qP2P queue. I would receive an alert telling me about this conflict (sketched after this list).

2) I changed the priority to 3 for all the qLink queues and applied the changes. After a while, I would receive an alert saying that the rules need to be explicit.

On another note, I am unclear on the benefit of the change from qDefault to qLink on all interfaces except the WAN interface, given that the qLink queue is assigned a lower priority than qOthersLow. I would think the purpose of qOthersLow is to set something lower than the default.
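
For reference, the wizard output described above corresponds roughly to the following pf.conf ALTQ fragment. This is a hand-written illustration, not actual pfSense output; the interface name and the 1Gb figure are placeholders, and only the queues named in this report are shown.

    # Illustrative PRIQ layout as described in this report (not actual wizard output)
    altq on em1 priq bandwidth 1Gb queue { qLink, qOthersHigh, qOthersLow, qP2P }
    queue qOthersHigh priority 4 priq(ecn)
    queue qOthersLow  priority 3 priq(ecn)
    queue qLink       priority 1 priq(default)  # default queue, but at priority 1...
    queue qP2P        priority 1                # ...the same priority as qP2P, hence the conflict alert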

Actions #1

Updated by Ermal Luçi over 13 years ago

  • Status changed from New to Feedback

It seems like you are using PRIQ as the discipline.
Can you please check whether putting the bandwidth of the physical interface on the root queues (the ones with the interface names) on all the LAN interfaces helps with this issue?

The issue with the priorities should be fixed in the latest snapshots.
The qLink queue has its effect with other disciplines; with PRIQ it is the same as qDefault.
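
In pf.conf terms, the suggestion amounts to stating the underlying physical link speed explicitly on the root ALTQ declaration of each LAN interface (in the 2.0 GUI, that should be the Bandwidth field on each interface tab under Firewall > Traffic Shaper). A minimal sketch with placeholder interface name, bandwidth and queue list; the assumption here is that ALTQ cannot guess a sensible link speed for VLAN/lagg pseudo-interfaces on its own.

    # Before (sketch): no bandwidth stated on the root queue, so ALTQ has to guess it
    altq on em1 priq queue { qLink, qOthersHigh, qOthersLow, qP2P }
    # After (sketch): physical link speed stated explicitly, as suggested above
    altq on em1 priq bandwidth 1Gb queue { qLink, qOthersHigh, qOthersLow, qP2P }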

Actions #2

Updated by Chris Mirchandani over 13 years ago

Ermal,

You are correct; I am using the PRIQ discipline.

Initial testing indicates that adding a value to the Bandwidth field for an interface does allow increased performance. Again, this is only initial testing, so I am not 100% sure there are no remaining performance issues. I will do more testing ASAP.

My question about qLink is not related to its existence or its name, but to the priority given to it. Now qOthersHigh is set to 4, qOthersLow to 3 and qLink to 2. Before qLink replaced qDefault, the priority settings were qOthersHigh 4, qDefault 3 and qOthersLow 2. To me this was logical, since the default priority was 3 and you could then set other traffic higher or lower than the default. So my question about this change is about the logic behind it.
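
Side by side, the two layouts being compared look roughly like this (illustrative fragments only, limited to the queues named here):

    # Older wizard output (as described above): qDefault is the default queue at priority 3
    queue qOthersHigh priority 4
    queue qDefault    priority 3 priq(default)
    queue qOthersLow  priority 2

    # Newer wizard output (as described above): qLink is the default queue, below qOthersLow
    queue qOthersHigh priority 4
    queue qOthersLow  priority 3
    queue qLink       priority 2 priq(default)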

As far as the bug-versus-config argument goes, I understand your point of view. However, from my perspective something has changed, as I do not have to set the Bandwidth field on my install of 2.0-RC3 (amd64) built on Fri Jul 8 02:49:42 EDT 2011 to get good performance with the traffic shaper enabled. So my questions are as follows.

1) What has changed that requires the Bandwidth field for interfaces to be set in recent builds, but not in 2.0-RC3 (amd64) built on Fri Jul 8 02:49:42 EDT 2011?

2) Is this change intended?

3) If filling in this field is going to be required to allow more than 10Mbps of traffic going forward, shouldn't setting a value for the bandwidth field be added to the traffic shaper wizards?

Perhaps not everyone is experiencing this issue, so maybe it is limited to users with my type of setup. All the interfaces I have tested are VLANs carried over a trunk provided by an LACP team of 2x GigE connections between the pfSense hardware and a Cisco switch.

I have one connection that is direct rather than trunked, and it seems traffic on that link was not limited, but I will need to verify that with more testing.

Actions #3

Updated by Ermal Luçi over 13 years ago

  • Status changed from Feedback to Closed

Please follow up in the forum.
