Bug #4326


Limiters on firewall rules where NAT applies drop all traffic

Added by Adam Hirsch about 9 years ago. Updated over 7 years ago.

Status: Resolved
Priority: High
Assignee:
Category: Traffic Shaper (Limiters)
Target version:
Start date: 01/27/2015
Due date:
% Done: 100%
Estimated time:
Plus Target Version:
Release Notes:
Affected Version: 2.2.x
Affected Architecture:

Description

A PASS filter rule with In / Out limiters set will pass traffic until bandwidth in a limited direction hits the limit rate, at which point all further traffic in that direction is silently discarded. Traffic in a direction which has not reached its limit rate or does not have a limit applied continues passing, unaffected. Rate-limited rules which had been working under 2.1.5 broke badly with the 2.2 upgrade.

Steps to reproduce (a consolidated shell sketch follows this list):
  1. Firewall -> Traffic Shaper -> Limiter
  2. Create two limiters, labeled "in" and "out", each at 1Mb/s, with all other values left at defaults
  3. Create a NAT filter rule (in my case, passing port 7500 on the WAN interface through to port 7500 on a host inside the firewall)
  4. Under "Advanced features", apply the limiters as In / Out limits for this filter rule
  5. On the host inside the firewall, set up a listener on port 7500 using "netcat -l 7500"
  6. From outside the firewall, connect through, using "netcat <WAN ADDRESS> 7500"
  7. Type at each netcat session, verifying that slow traffic passes in both directions
  8. From whichever host is closer to you, paste in a large block of text all at once (I used Lewis Carroll's "Jabberwocky")
  9. Note the block of text gets cut off partway through.
  10. Attempt to send further traffic from that host, and note that it no longer reaches the other side ... but that the TCP connection remains up.
  11. Type slow traffic from the other side to observe that traffic which has not reached its limit rate can still pass, but only in that direction.
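
A consolidated shell sketch of the same reproduction (port 7500 and the internal host are from my setup above; substitute your own WAN address):

# on the host inside the firewall, behind the port forward
netcat -l 7500

# from a host outside the firewall
netcat <WAN ADDRESS> 7500

# type slowly in both sessions to confirm traffic passes, then paste a large
# block of text from one side: it gets cut off partway through and nothing
# further passes in that direction, even though the TCP connection stays up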

Files

no-reply-to.png (40.9 KB) - Filter rule -> Advanced - Adam Hirsch, 01/29/2015 07:50 AM
Actions #1

Updated by Adam Hirsch about 9 years ago

I'm seeing this when the limiter is applied to a filter on the WAN interface, but not the LAN interface. Odd.

Actions #2

Updated by Chris Buechler about 9 years ago

  • Subject changed from In/Out Limiter on filter rule silently discards all traffic once limit rate reached to In/Out Limiter on rule w/reply-to silently discards all traffic once limit rate reached
  • Status changed from New to Confirmed
  • Assignee set to Ermal Luçi
  • Priority changed from Normal to High
  • Target version set to 2.2.1

I believe it only happens where the matching rule with limiter includes reply-to.

Actions #3

Updated by Adam Hirsch about 9 years ago

I suppose that's possible, although manually checking the box to disable the generated reply-to doesn't seem to change the behavior. (I have only a single WAN link, however, so reply-to has not been a consideration for me before this.)

... looking in /tmp/rules.debug after removing the reply-to shows that it's not listed there, but the behavior continues.

Actions #4

Updated by Travis Kreikemeier about 9 years ago

This affected us at PAX South. We had limiters in place and had certain downloads dropping to 0 bytes/sec until we restarted them. I guess we'll have to go back to 2.1.5 for events and wait for 2.2.1, which hopefully comes out soon.

Actions #5

Updated by Travis Kreikemeier about 9 years ago

Have we confirmed whether having reply-to enabled or disabled affects whether the limiter works correctly? Also, what about when the limiter has a source or destination hash enabled? I believe that at the event, the limiters that were not working were the ones that did not have source/destination hash enabled, much like your reproduction steps above.

Actions #6

Updated by Chris Buechler about 9 years ago

I haven't had a chance to get back to testing this scenario yet, but will soon. It seems it may not be specific to reply-to; that seemed a likely culprit given the history of reply-to/route-to related issues in this area and the fact it only applied to WAN rules, but Adam's testing seems to indicate otherwise. I'm pretty tied up with our training this week, so it might be this weekend before I can get back to it.

If any of you want to try things out in the mean time to help narrow down the issue, here are the things I'll be looking to test:

- does it definitely happen with or without reply-to?
- is it specific to traffic hitting rdr (a port forward)? based on testing I've already done, and what's been reported by others here, I'm thinking this is probably the likely root problem area
- does the mask configuration have any impact?

Make sure to reset all states between config changes in this scenario, to be really sure all your changes are being applied. Review the output of "ipfw pipe show" while testing for details there.
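
A rough shell sketch of that routine from the console (pfctl -F states is one way to flush the state table between changes):

# after each config change, flush all states so the new rules and pipes apply
pfctl -F states

# while running the test, watch the dummynet pipes and counters
ipfw pipe show
sysctl net.inet.ip.dummynet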

Actions #7

Updated by Adam Hirsch about 9 years ago

I can verify that turning off reply-to doesn't seem to make a difference, here:

The rule:

/tmp/rules.debug:rdr on vr2 proto tcp from any to 68.XXX.170.XXX port 7500 -> 172.17.1.11
/tmp/rules.debug:pass  in  quick  on $WAN inet proto tcp  from any to 172.17.1.11 port 7500 
      tracker 1422416208 flags S/SA keep state  dnpipe ( 1,2)  label "USER_RULE: NAT testing 7500 forward" 

I've got 1Mb/s limits on both inbound and outbound sides of that WAN rule.

[2.2-RELEASE][admin@rampart]/root: ipfw pipe show
00001:   1.000 Mbit/s    0 ms burst 0
q131073  50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
 sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002:   1.000 Mbit/s    0 ms burst 0
q131074  50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
 sched 65538 type FIFO flags 0x0 0 buckets 0 active
Actions #8

Updated by Ermal Luçi about 9 years ago

Does net.inet.ip.dummynet.io_pkt_drop increase during this time?

Actions #9

Updated by Adam Hirsch about 9 years ago

Nope! Stays at 0 throughout.

[2.2-RELEASE][admin@rampart]/root: sysctl net.inet.ip.dummynet.io_pkt_drop
net.inet.ip.dummynet.io_pkt_drop: 0
Actions #10

Updated by Ermal Luçi about 9 years ago

Can you do another test so we have full information?

Do the usual breaking test you have reported and show the output of:
sysctl net.inet.ip.dummynet

then set
sysctl net.inet.ip.dummynet.io_fast=1

Run the test, see if the issue happens again, and show the output of
sysctl net.inet.ip.dummynet
once more.
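
In other words, a consolidated sketch of the above from the console:

# capture the dummynet counters before and after reproducing the breakage
sysctl net.inet.ip.dummynet

# then enable the fast path and repeat the test
sysctl net.inet.ip.dummynet.io_fast=1

# capture the counters once more after the second run
sysctl net.inet.ip.dummynet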

Actions #11

Updated by Travis Kreikemeier about 9 years ago

Finally able to get around to building a VM lab for this. Here is what I have found.

  • Appears to be an issue only on a NAT rule; I was unable to reproduce it with a LAN limiter
  • Turning off reply-to does not resolve the issue; I even turned it off globally in Advanced settings
  • Enabling a source or destination mask does not resolve the issue
  • Taildrop does not increase; it stays at 0 during testing
  • dummynet.io_pkt_drop stays at the same number during testing
  • net.inet.ip.dummynet.io_fast was already set to 1 for me; I changed it to 0 and my test no longer worked at all (no connection)

I used iperf for my testing:

No NAT limiter
iperf.exe -c 192.168.11.131 -w 256k -i 1
------------------------------------------------------------
Client connecting to 192.168.11.131, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.11.1 port 55351 connected with 192.168.11.131 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 73.5 MBytes 617 Mbits/sec
[ 3] 1.0- 2.0 sec 69.5 MBytes 583 Mbits/sec
[ 3] 2.0- 3.0 sec 68.1 MBytes 571 Mbits/sec
[ 3] 3.0- 4.0 sec 67.4 MBytes 565 Mbits/sec
[ 3] 4.0- 5.0 sec 58.6 MBytes 492 Mbits/sec
[ 3] 5.0- 6.0 sec 66.1 MBytes 555 Mbits/sec
[ 3] 6.0- 7.0 sec 76.6 MBytes 643 Mbits/sec
[ 3] 7.0- 8.0 sec 78.2 MBytes 656 Mbits/sec
[ 3] 8.0- 9.0 sec 62.4 MBytes 523 Mbits/sec
[ 3] 9.0-10.0 sec 80.6 MBytes 676 Mbits/sec
[ 3] 0.0-10.0 sec 701 MBytes 588 Mbits/sec

Enabled a 20Mb limiter
iperf.exe -c 192.168.11.131 -i 1 -w 256k
------------------------------------------------------------
Client connecting to 192.168.11.131, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.11.1 port 55401 connected with 192.168.11.131 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 384 KBytes 3.15 Mbits/sec
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 2.0- 3.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 3.0- 4.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 4.0- 5.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 5.0- 6.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 6.0- 7.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 7.0- 8.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 8.0- 9.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 9.0-10.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 10.0-11.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 0.0-13.2 sec 1.12 MBytes 715 Kbits/sec

During testing
ipfw pipe show
00001: 20.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 20.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 0 active
[2.2-RELEASE][admin@pfSense.localdomain]/root: ipfw pipe show
00001: 20.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 11 524 0 0 0
00002: 20.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 20 30000 0 0 0
[2.2-RELEASE][admin@pfSense.localdomain]/root: ipfw pipe show
00001: 20.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 28 1276 0 0 0
00002: 20.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 60 90000 0 0 0
[2.2-RELEASE][admin@pfSense.localdomain]/root: ipfw pipe show
00001: 20.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 12 552 0 0 0
00002: 20.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 22 33000 0 0 0

Outbound from LAN test with no limiter

iperf.exe -c 192.168.11.1 -w 256k -i 1 -M 1460
WARNING: attempt to set TCP maximum segment size to 1460, but got 1281
------------------------------------------------------------
Client connecting to 192.168.11.1, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.12.41 port 49353 connected with 192.168.11.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 55.4 MBytes 465 Mbits/sec
[ 3] 1.0- 2.0 sec 52.9 MBytes 444 Mbits/sec
[ 3] 2.0- 3.0 sec 39.5 MBytes 331 Mbits/sec
[ 3] 3.0- 4.0 sec 38.0 MBytes 319 Mbits/sec
[ 3] 4.0- 5.0 sec 41.8 MBytes 350 Mbits/sec
[ 3] 5.0- 6.0 sec 40.9 MBytes 343 Mbits/sec
[ 3] 6.0- 7.0 sec 37.9 MBytes 318 Mbits/sec
[ 3] 7.0- 8.0 sec 40.5 MBytes 340 Mbits/sec
[ 3] 8.0- 9.0 sec 42.8 MBytes 359 Mbits/sec
[ 3] 9.0-10.0 sec 44.5 MBytes 373 Mbits/sec
[ 3] 0.0-10.0 sec 434 MBytes 364 Mbits/sec

Outbound LAN test with 20Mb limiter
iperf.exe -c 192.168.11.1 -w 256k -i 1 -M 1460
WARNING: attempt to set TCP maximum segment size to 1460, but got 1281
------------------------------------------------------------
Client connecting to 192.168.11.1, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.12.41 port 49354 connected with 192.168.11.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 2.62 MBytes 22.0 Mbits/sec
[ 3] 1.0- 2.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 2.0- 3.0 sec 2.25 MBytes 18.9 Mbits/sec
[ 3] 3.0- 4.0 sec 2.25 MBytes 18.9 Mbits/sec
[ 3] 4.0- 5.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 5.0- 6.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 6.0- 7.0 sec 2.25 MBytes 18.9 Mbits/sec
[ 3] 7.0- 8.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 8.0- 9.0 sec 2.25 MBytes 18.9 Mbits/sec
[ 3] 9.0-10.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 0.0-10.1 sec 23.6 MBytes 19.6 Mbits/sec

Actions #12

Updated by Travis Kreikemeier about 9 years ago

I also increased the limiter to 700Mb, higher than the throughput I saw without a limiter, and it worked without issue; I got the normal speed I had without a limiter. So it's not that a limiter is in place; it is when the limit is exceeded that packets start being dropped excessively. iperf may be opening a new connection, which is why I start getting a little bit of traffic again before it drops and repeats.

Actions #13

Updated by Ermal Luçi about 9 years ago

OK, thank you, I think I know where the issue is now.

I will update here when the issue is fixed, but it will need a kernel rebuild.

Actions #14

Updated by Ermal Luçi about 9 years ago

  • Subject changed from In/Out Limiter on rule w/reply-to silently discards all traffic once limit rate reached to In/Out Limiter silently discards all traffic once limit rate reached
Actions #15

Updated by Steve Wheeler about 9 years ago

Just to add some further information. This bug is hit if you use Limiters on LAN and are also running Squid in transparent mode, presumably because it adds rules to forward the traffic.

Actions #16

Updated by Chris Buechler about 9 years ago

  • Target version changed from 2.2.1 to 2.2.2
Actions #17

Updated by Chris Buechler about 9 years ago

  • Target version changed from 2.2.2 to 2.2.3
Actions #18

Updated by Ermal Luçi almost 9 years ago

  • Status changed from Confirmed to Feedback

This seems to affect only NAT with limiters.
It should be handled properly now in 2.2.3; I will re-test this again as I did for a similar report.

If anyone can confirm it works for them on 2.2.3 as well, that would be good.

Actions #19

Updated by Chris Buechler almost 9 years ago

  • Target version changed from 2.2.3 to 2.3
Actions #20

Updated by Ryan Clough almost 9 years ago

Ermal Luçi wrote:

This seems to affect only NAT with limiters.
It should be handled properly now in 2.2.3; I will re-test this again as I did for a similar report.

If anyone can confirm it works for them on 2.2.3 as well, that would be good.

I had disabled the limiter by turning off the associated firewall rule. I have upgraded to 2.2.3 and tried re-enabling the firewall rule, and once the threshold limit is reached, traffic is no longer passed.

I also tried deleting the firewall rules and the limiters, then recreating them from scratch, but alas, the same result: when the threshold is reached, the firewall rule that feeds the limiter just stops passing traffic.

If you would like any further information just let me know.

Actions #21

Updated by Adam Hirsch almost 9 years ago

Like Ryan, I'm still seeing the issue after upgrading to 2.2.3.

Actions #22

Updated by Chris Buechler almost 9 years ago

  • Subject changed from In/Out Limiter silently discards all traffic once limit rate reached to Limiters on firewall rules where NAT applies drop all traffic
  • Status changed from Feedback to Confirmed
  • Assignee deleted (Ermal Luçi)
  • Affected Version changed from 2.2 to 2.2.x

Updated subject to the root problem; closing out #4596 as a duplicate of this.

Actions #23

Updated by Srdjan Jovanovich over 8 years ago

I'm still seeing the issue after upgrading to 2.2.3. NAT with limiters means no traffic. Once the rule is saved with limiters, there's basically no traffic.

P.S. I cannot believe that this problem has already persisted through several versions, or rather since the step up to FreeBSD 10.

Actions #24

Updated by zuber ahmed over 8 years ago

Hi

Traffic limiter is still not working with squid3 (transparent mode) + squidGuard on version 2.2.4.
Is there any timeline to fix this, as the issue has been outstanding for a long time?

Thanks in advance for fixing this issue, or for any temporary fix or patch.

Actions #25

Updated by Albert Yang over 8 years ago

I just wanted to add to what zuber ahmed reported: NAT reflection also gets broken when limiters are applied on the LAN.

Thank you

Actions #26

Updated by Bryan Bercero over 8 years ago

I just want to update that this bug is still present. Any developments? I have tested with 2.2.2.

Actions #27

Updated by Cristian Ciceri over 8 years ago

Issue still present on version 2.2.5.

Actions #28

Updated by Sergio Handal over 8 years ago

Issue still present on version 2.2.5.

When a limiter is enabled in a firewall rule, NAT reflection does not work.

Actions #29

Updated by Jim Thompson over 8 years ago

  • Assignee set to Luiz Souza
Actions #30

Updated by Hamilton Calixto about 8 years ago

Problem still present in version 2.2.6. Is there a timeline for solving this problem?

Actions #31

Updated by Luiz Souza about 8 years ago

  • Target version changed from 2.3 to 2.3.1
Actions #32

Updated by Albert Yang about 8 years ago

Limiters work with the squid proxy (transparent and WPAD) + squidguard.
Hopefully NAT reflection will follow soon :)

Thanks to Riroxi

http://postimg.org/gallery/1plhek6mq/

Actions #33

Updated by Riroxi . about 8 years ago

Albert Yang wrote:

Limiters work with the squid proxy (transparent and WPAD) + squidguard.
Hopefully NAT reflection will follow soon :)

Thanks to Riroxi

http://postimg.org/gallery/1plhek6mq/

Hello Albert!

I tested this workaround for a few days, and some apps like download managers can bypass the limiters with this rule set :(

We need to wait for a definitive solution.

Cya

Actions #34

Updated by Chris Buechler about 8 years ago

  • Target version changed from 2.3.1 to 2.3.2
Actions #35

Updated by Aaron McDiarmid about 8 years ago

How come this problem keeps getting pushed back to later versions? Is there an underlying issue that prevents it from being fixed?

Actions #36

Updated by Chris Buechler about 8 years ago

Aaron McDiarmid wrote:

How come this problem keeps getting pushed back to later versions? Is there an underlying issue that prevents it from being fixed?

It's not an easy problem to fix, any OS changes like this are risky, and we're looking to release 2.3.1 in a couple of weeks, which isn't enough time.

Actions #37

Updated by gmar almnsoor almost 8 years ago

Arab world
________

definitive solution

https://forum.pfsense.org/index.php?topic=106640.0

enjoy <<<<<<<<<<<< _

Actions #38

Updated by Hamilton Calixto almost 8 years ago

We have had this bug for a year. When will a solution be presented? I am dismayed by this, as it is an extremely important feature in my datacenter.

Actions #39

Updated by Luca De Andreis almost 8 years ago

Hamilton Calixto wrote:

We have had this bug for a year. When will a solution be presented? I am dismayed by this, as it is an extremely important feature in my datacenter.

+1. I'm forced to use pfSense 2.1.5; limiters on 1:1 NAT are absolutely essential for me, and using a workaround is not a solution.

Actions #40

Updated by Matt Smith almost 8 years ago

+1. I have dozens of 2.1.5 boxes because of this critical bug.

I crossed my fingers, but it seems 2.3 is still not production ready; obviously 2.2 was not either.

Actions #41

Updated by Roman Spörk almost 8 years ago

This bug is a big problem for me.
The traffic shaping feature was one of the reasons I chose pfSense.
I bought an XG-1540 with two SSDs, 10 GbE and 32 GB RAM. To use traffic shaping with the Squid proxy, I had to build a very complicated solution with cascaded virtual pfSense appliances.
I hope there will be a solution for this bug.

Actions #42

Updated by Chris Buechler almost 8 years ago

  • Target version changed from 2.3.2 to 2.4.0
Actions #43

Updated by oscar velazquez almost 8 years ago

Since we are not getting a solution any time soon, I guess we can use two pfSense boxes in line, one with the limiter and the other with the cache.

The problem is I can't, for the life of me, get a transparent proxy cache working between my limiter box and my LAN. Does anyone know a good guide or an easy way to build a transparent proxy cache box?

WAN --- limiter/DHCP (192.168.0.1) --- transparent web proxy cache/no DHCP (192.168.0.2) --- LAN

I have tried many tutorials and configs, but it never sends data through when I activate the caching (not the limiter) on the transparent box.

I think that's what Roman Spörk did?

Thanks

Actions #44

Updated by Luca De Andreis almost 8 years ago

OMG.

The 1:1 NAT problem using limiters persists.
Works well on 2.1.5; 2.2.x = BAD, 2.3.x = BAD, sigh! We are forced to use 2.1.5 with many SECURITY HOLES.

Luca

Actions #45

Updated by → luckman212 almost 8 years ago

Now that the target version has been bumped to 2.4 (FreeBSD 11), can anyone at least say whether the bug has been fixed in FreeBSD? If this is indeed an upstream problem, has a bug been filed with the FreeBSD project? If not, I'm afraid this can is going to keep getting kicked down the road.

Actions #46

Updated by Jose Duarte over 7 years ago

Have you guys tried using a queue inside the limiter instead of the limiter itself? It could make a difference, since in my scenario that is the workaround I use to avoid a kernel panic on the second firewall (limiters in HA).

Actions #47

Updated by Andrew Maslin over 7 years ago

Can someone share the FreeBSD bug # so we can track the progress of the root of the issue? Like Luke, I would like to know the status and timeframe of the underlying issue. Thanks!

Actions #48

Updated by Chris Buechler over 7 years ago

Andrew Maslin wrote:

Can someone share the FreeBSD bug # so we can track the progress of the root of the issue? Like Luke, I would like to know the status and timeframe of the underlying issue. Thanks!

There isn't one because the code/feature in question doesn't exist there.

Actions #49

Updated by → luckman212 over 7 years ago

Chris Buechler wrote:

There isn't one because the code/feature in question doesn't exist there.

Now I'm confused: is the bug in FreeBSD, or somehow in pfSense? Are you saying this bug is only reproducible in pfSense, but it's due to an underlying bug in FreeBSD that can't be replicated in stock, so there is no way to file a proper bug report? Seems we are in a pickle.

Actions #50

Updated by Chris Buechler over 7 years ago

This is from the use of dummynet in pf, which doesn't exist in stock FreeBSD. The implementation apparently leaves a lot to be desired, which is why it has fallen apart in so many edge cases with newer base OS versions. It was never perfect to begin with, either.

Actions #51

Updated by Steve Tibbetts over 7 years ago

Using pfSense 2.3.2-RELEASE (amd64).

I can confirm that disabling the upload limiter solves an issue with limiters and 1:1 NAT.

We don't use Squid or any add-ons, for that matter. Our issue was / is: WAN / LAN interface, aliases set up for each IP in the DHCP scope on LAN, and limiters (up/down) set up for those aliases as a firewall rule on LAN. There are no issues with connectivity, local or WAN. The issue is with accessing servers that are NATed 1:1 from WAN to LAN. It's worth noting the local IPs of the servers are NOT part of the limiter IP range. The upload limiter on LAN breaks NAT reflection.

To get around this we disabled the upload limiter. This is a temporary fix.

I really hope this can be resolved. It seems like an issue that has been ongoing for a while.

One thing I would like to mention: prior to our current pfSense setup we had dual pfSense boxes using CARP, same versions, same setup, and that worked. I have a backup of that somewhere.

Actions #52

Updated by Luca De Andreis over 7 years ago

Yes, it is exactly as you described.

ONLY pfSense 2.1.5 works fine with this configuration. I use several CRITICAL firewalls on 2.1.5, but 2.1.5 is very old and has critical security holes.

I can't run these firewalls on 2.2 or 2.3.

:(

On other firewalls, with limiters from LAN to WAN and no NAT, there are no problems. But WAN-to-LAN limiters with 1:1 NAT are broken.

Luca

Actions #53

Updated by → luckman212 over 7 years ago

Now that 2.4 dev builds have started, is there any reason to expect that this bug might get some lovin' in the next release? Or will the target get bumped to 3.0?

Actions #54

Updated by Toronto B2 over 7 years ago

I am interested to know whether limiters will ever work again.
It's annoying that they still show in the GUI but don't work. Very disappointing!

Actions #55

Updated by Anders Tillebeck over 7 years ago

I also use limiters and NAT reflection in combination, so I am stuck on 2.1.4 and 2.1.5 until a release where this combination works again. I just write this as info, to show that more than one company is affected :-)

Actions #56

Updated by jake keeys over 7 years ago

Also affected... is there any plan to fix this in an upcoming release, as it's a common use case?

Actions #57

Updated by gmar almnsoor over 7 years ago

Solution

A fix for "Limiters on firewall rules where NAT applies drop all traffic" and for the problem where the limiter blocks internet access with a Squid transparent proxy.

This solution works on pfSense 2.2.*, 2.3.*, and 2.4.*.

My current solution is here:

https://forum.pfsense.org/index.php?topic=106640.0

Actions #58

Updated by Luca De Andreis over 7 years ago

This is a workaround, not a clean solution.
Better than nothing, but a native, specific and definitive resolution is desirable.

Luca

Actions #59

Updated by Luiz Souza over 7 years ago

  • Status changed from Confirmed to Feedback
  • % Done changed from 0 to 100

Fixed in 2.4.

Actions #60

Updated by Phillip Davis over 7 years ago

I guess the fix is in the pf port, or...?
Is it something that applies easily back to 2.3.* / FreeBSD 10.3 and thus could be back-ported, so it would also appear fixed in 2.3.3-DEVELOPMENT snapshots?

Actions #61

Updated by Jim Pingle over 7 years ago

Given all the work that's happened on 2.4 with IPFW, I'd say it's best to not attempt a backport. 2.4 is not that far off.

I ran a quick test of this with a floating rule on WAN matching outbound with a limiter. It was broken on 2.3, but works for me on 2.4.

Could use some more testing with port forwards and LAN-side redirects (e.g. transparent squid) but so far, so good.

Actions #62

Updated by Phillip Davis over 7 years ago

OK. I don't use this, so it doesn't affect the systems I have that will be stuck on 2.3.* (32-bit ALIX). If it is not simple to back-port, then I guess only a few people with old 32-bit systems also use this and thus will not be able to get the fix. And if you are using features like this, then the site is at least a bit complex and is likely to upgrade hardware anyway.

Actions #63

Updated by Jim Pingle over 7 years ago

  • Status changed from Feedback to Resolved

All indications are that this is fixed now, from my own tests and from user feedback.
