Bug #4326
Status: Closed
Limiters on firewall rules where NAT applies drop all traffic
100% done
Description
A PASS filter rule with In / Out limiters set will pass traffic until bandwidth in a limited direction hits the limit rate, at which point all further traffic in that direction is silently discarded. Traffic in a direction which has not reached its limit rate or does not have a limit applied continues passing, unaffected. Rate-limited rules which had been working under 2.1.5 broke badly with the 2.2 upgrade.
Steps to reproduce:
- Go to Firewall -> Traffic Shaper -> Limiter
- Create two limiters, labeled "in" and "out", each at 1Mb/s , all other values left at defaults
- Create a NAT filter rule (in my case, passing port 7500 on the WAN interface through to port 7500 on a host inside the firewall)
- Under "Advanced features", apply the limiters as In / Out limits for this filter rule
- On the host inside the firewall, set up a listener on port 7500 using "netcat -l 7500"
- From outside the firewall, connect through, using "netcat <WAN ADDRESS> 7500"
- Type at each netcat session, verifying that slow traffic passes in both directions
- From whichever host is closer to you, paste in a large block of text all at once (I used Lewis Carroll's "Jabberwocky")
- Note that the block of text gets cut off partway through.
- Attempt to send further traffic from that host, and note it no longer reaches the other side ... but that the tcp connection remains up
- Type in slow traffic from the other side to observe that traffic which has not reached the limited rate can still pass, but only in that direction.
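For reference, the configuration above ends up as a port-forward (rdr) rule plus a pass rule carrying a dnpipe option in /tmp/rules.debug. A sketch of the generated rules follows; the interface macro, addresses, and label are illustrative placeholders, not taken from a real config:

rdr on $WAN proto tcp from any to $WAN_ADDRESS port 7500 -> 172.17.1.11
pass in quick on $WAN inet proto tcp from any to 172.17.1.11 port 7500 flags S/SA keep state dnpipe (1,2) label "USER_RULE: NAT testing 7500 forward"

The dnpipe (1,2) option hands matching traffic to the two dummynet pipes created by the "in" and "out" limiters; per the findings later in this thread, it is this NAT-plus-dnpipe combination that triggers the bug.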
Files
Updated by Adam Hirsch almost 10 years ago
I'm seeing this when the limiter is applied to a filter on the WAN interface, but not the LAN interface. Odd.
Updated by Chris Buechler almost 10 years ago
- Subject changed from In/Out Limiter on filter rule silently discards all traffic once limit rate reached to In/Out Limiter on rule w/reply-to silently discards all traffic once limit rate reached
- Status changed from New to Confirmed
- Assignee set to Ermal Luçi
- Priority changed from Normal to High
- Target version set to 2.2.1
I believe it only happens when the matching rule with a limiter includes reply-to.
Updated by Adam Hirsch almost 10 years ago
- File no-reply-to.png no-reply-to.png added
I suppose that's possible, although manually checking the box to disable the generated reply-to doesn't seem to change the behavior. (I have only a single WAN link, however, so reply-to has not been a consideration for me before this.)
... looking in /tmp/rules.debug after removing the reply-to shows that it's not listed there, but the behavior continues.
Updated by Travis Kreikemeier almost 10 years ago
This affected us at PAX South. We had limiters in place and certain downloads dropped to 0 bytes/sec until we restarted them. I guess we'll have to go back to 2.1.5 for events and wait for 2.2.1, which hopefully comes out soon.
Updated by Travis Kreikemeier over 9 years ago
Have we confirmed whether having reply-to enabled or disabled affects whether the limiter works correctly? Also, what about when the limiter has source or destination hashing enabled? I believe the limiters that were not working at the event were the ones that did not have source/destination hashing enabled, much like your reproduction steps above.
Updated by Chris Buechler over 9 years ago
I haven't had a chance to get back to testing this scenario yet, but will soon. It seems it may not be specific to reply-to. That seemed a likely culprit, given the history of reply-to/route-to related issues in this area and the fact that it only applied to WAN rules, but Adam's testing seems to indicate otherwise. I'm pretty tied up with our training this week, so it might be this weekend before I can get back to it.
If any of you want to try things out in the mean time to help narrow down the issue, here are the things I'll be looking to test:
- does it definitely happen with or without reply-to?
- is it specific to traffic hitting rdr (a port forward)? Based on the testing I've already done, and what's been reported by others here, this is probably the root problem area
- does the mask configuration have any impact?
Make sure to reset all states between config changes in this scenario, to be certain all your changes are actually applied. Review the output of "ipfw pipe show" while testing for details there.
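The checks above can be run from a shell on the firewall. A quick sketch of the commands involved (standard pf/ipfw tools on pfSense; exact output will vary with your config):

pfctl -F states                           # flush all pf state entries after each config change
ipfw pipe show                            # inspect the dummynet pipes, queue sizes, and drop counters
sysctl net.inet.ip.dummynet.io_pkt_drop   # running count of packets dropped by dummynet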
Updated by Adam Hirsch over 9 years ago
I can verify that turning off reply-to doesn't seem to make a difference, here:
The rule:
/tmp/rules.debug:rdr on vr2 proto tcp from any to 68.XXX.170.XXX port 7500 -> 172.17.1.11
/tmp/rules.debug:pass in quick on $WAN inet proto tcp from any to 172.17.1.11 port 7500 tracker 1422416208 flags S/SA keep state dnpipe ( 1,2) label "USER_RULE: NAT testing 7500 forward"
I've got 1Mb/s limits on both inbound and outbound sides of that WAN rule.
[2.2-RELEASE][admin@rampart]/root: ipfw pipe show
00001: 1.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 1.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 0 active
Updated by Ermal Luçi over 9 years ago
Does net.inet.ip.dummynet.io_pkt_drop increase during this time?
Updated by Adam Hirsch over 9 years ago
Nope! Stays at 0 throughout.
[2.2-RELEASE][admin@rampart]/root: sysctl net.inet.ip.dummynet.io_pkt_drop
net.inet.ip.dummynet.io_pkt_drop: 0
Updated by Ermal Luçi over 9 years ago
Can you run another test so we have full information?
Do the usual breaking test you have reported and show the output of:
sysctl net.inet.ip.dummynet
then set
sysctl net.inet.ip.dummynet.io_fast=1
Run the test, see if the issue happens again, and again show:
sysctl net.inet.ip.dummynet
Updated by Travis Kreikemeier over 9 years ago
Finally able to get around to building a VM lab for this. Here is what I have found.
- Appears to be an issue only on a NAT rule; I was unable to reproduce it with a LAN limiter
- Turning off reply-to does not resolve the issue, I even turned it off globally in Advanced settings
- Enabling source or destination mask does not resolve the issue
- Taildrop does not increase, stays at 0 during testing
- dummynet.io_pkt_drop stays the same number during testing
- net.inet.ip.dummynet.io_fast was already set to 1 for me, I changed it to 0 and my test no longer worked at all (no connection)
I used iperf for my testing:
No NAT limiter
iperf.exe -c 192.168.11.131 -w 256k -i 1
------------------------------------------------------------
Client connecting to 192.168.11.131, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.11.1 port 55351 connected with 192.168.11.131 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 73.5 MBytes 617 Mbits/sec
[ 3] 1.0- 2.0 sec 69.5 MBytes 583 Mbits/sec
[ 3] 2.0- 3.0 sec 68.1 MBytes 571 Mbits/sec
[ 3] 3.0- 4.0 sec 67.4 MBytes 565 Mbits/sec
[ 3] 4.0- 5.0 sec 58.6 MBytes 492 Mbits/sec
[ 3] 5.0- 6.0 sec 66.1 MBytes 555 Mbits/sec
[ 3] 6.0- 7.0 sec 76.6 MBytes 643 Mbits/sec
[ 3] 7.0- 8.0 sec 78.2 MBytes 656 Mbits/sec
[ 3] 8.0- 9.0 sec 62.4 MBytes 523 Mbits/sec
[ 3] 9.0-10.0 sec 80.6 MBytes 676 Mbits/sec
[ 3] 0.0-10.0 sec 701 MBytes 588 Mbits/sec
Enabled a 20Mb limiter
iperf.exe -c 192.168.11.131 -i 1 -w 256k
------------------------------------------------------------
Client connecting to 192.168.11.131, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.11.1 port 55401 connected with 192.168.11.131 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 384 KBytes 3.15 Mbits/sec
[ 3] 1.0- 2.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 2.0- 3.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 3.0- 4.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 4.0- 5.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 5.0- 6.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 6.0- 7.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 7.0- 8.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 8.0- 9.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 9.0-10.0 sec 128 KBytes 1.05 Mbits/sec
[ 3] 10.0-11.0 sec 0.00 Bytes 0.00 bits/sec
[ 3] 0.0-13.2 sec 1.12 MBytes 715 Kbits/sec
During testing
ipfw pipe show
00001: 20.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 0 active
00002: 20.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 0 active
[2.2-RELEASE][admin@pfSense.localdomain]/root: ipfw pipe show
00001: 20.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 11 524 0 0 0
00002: 20.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 20 30000 0 0 0
[2.2-RELEASE][admin@pfSense.localdomain]/root: ipfw pipe show
00001: 20.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 28 1276 0 0 0
00002: 20.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 60 90000 0 0 0
[2.2-RELEASE][admin@pfSense.localdomain]/root: ipfw pipe show
00001: 20.000 Mbit/s 0 ms burst 0
q131073 50 sl. 0 flows (1 buckets) sched 65537 weight 0 lmax 0 pri 0 droptail
sched 65537 type FIFO flags 0x0 0 buckets 1 active
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
0 ip 0.0.0.0/0 0.0.0.0/0 12 552 0 0 0
00002: 20.000 Mbit/s 0 ms burst 0
q131074 50 sl. 0 flows (1 buckets) sched 65538 weight 0 lmax 0 pri 0 droptail
sched 65538 type FIFO flags 0x0 0 buckets 1 active
0 ip 0.0.0.0/0 0.0.0.0/0 22 33000 0 0 0
Outbound from LAN test with no limiter
iperf.exe -c 192.168.11.1 -w 256k -i 1 -M 1460
WARNING: attempt to set TCP maximum segment size to 1460, but got 1281
------------------------------------------------------------
Client connecting to 192.168.11.1, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.12.41 port 49353 connected with 192.168.11.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 55.4 MBytes 465 Mbits/sec
[ 3] 1.0- 2.0 sec 52.9 MBytes 444 Mbits/sec
[ 3] 2.0- 3.0 sec 39.5 MBytes 331 Mbits/sec
[ 3] 3.0- 4.0 sec 38.0 MBytes 319 Mbits/sec
[ 3] 4.0- 5.0 sec 41.8 MBytes 350 Mbits/sec
[ 3] 5.0- 6.0 sec 40.9 MBytes 343 Mbits/sec
[ 3] 6.0- 7.0 sec 37.9 MBytes 318 Mbits/sec
[ 3] 7.0- 8.0 sec 40.5 MBytes 340 Mbits/sec
[ 3] 8.0- 9.0 sec 42.8 MBytes 359 Mbits/sec
[ 3] 9.0-10.0 sec 44.5 MBytes 373 Mbits/sec
[ 3] 0.0-10.0 sec 434 MBytes 364 Mbits/sec
Outbound LAN test with 20Mb limiter
iperf.exe -c 192.168.11.1 -w 256k -i 1 -M 1460
WARNING: attempt to set TCP maximum segment size to 1460, but got 1281
------------------------------------------------------------
Client connecting to 192.168.11.1, TCP port 5001
TCP window size: 256 KByte
------------------------------------------------------------
[ 3] local 192.168.12.41 port 49354 connected with 192.168.11.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 2.62 MBytes 22.0 Mbits/sec
[ 3] 1.0- 2.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 2.0- 3.0 sec 2.25 MBytes 18.9 Mbits/sec
[ 3] 3.0- 4.0 sec 2.25 MBytes 18.9 Mbits/sec
[ 3] 4.0- 5.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 5.0- 6.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 6.0- 7.0 sec 2.25 MBytes 18.9 Mbits/sec
[ 3] 7.0- 8.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 8.0- 9.0 sec 2.25 MBytes 18.9 Mbits/sec
[ 3] 9.0-10.0 sec 2.38 MBytes 19.9 Mbits/sec
[ 3] 0.0-10.1 sec 23.6 MBytes 19.6 Mbits/sec
Updated by Travis Kreikemeier over 9 years ago
I also increased the limiter to 700Mb, higher than the throughput without a limiter, and it worked without issue; I got the normal speed I had without a limiter. So the problem is not that a limiter is in place; it is when the limit is exceeded that packets start being dropped excessively. iperf opening a new connection is probably why I start getting a little traffic again before it drops and repeats.
Updated by Ermal Luçi over 9 years ago
OK, thank you. I think I know where the issue is now.
I will update here when the issue is fixed, but it will need a kernel rebuild.
Updated by Ermal Luçi over 9 years ago
- Subject changed from In/Out Limiter on rule w/reply-to silently discards all traffic once limit rate reached to In/Out Limiter silently discards all traffic once limit rate reached
Updated by Steve Wheeler over 9 years ago
Just to add some further information. This bug is hit if you use Limiters on LAN and are also running Squid in transparent mode, presumably because it adds rules to forward the traffic.
Updated by Chris Buechler over 9 years ago
- Target version changed from 2.2.1 to 2.2.2
Updated by Chris Buechler over 9 years ago
- Target version changed from 2.2.2 to 2.2.3
Updated by Ermal Luçi over 9 years ago
- Status changed from Confirmed to Feedback
This seems to affect only NAT with limiters.
It should be handled properly now in 2.2.3; I will re-test this as I did for a similar report.
If anyone can confirm it works for them on 2.2.3 as well, that would be good.
Updated by Chris Buechler over 9 years ago
- Target version changed from 2.2.3 to 2.3
Updated by Ryan Clough over 9 years ago
Ermal Luçi wrote:
This seems affecting only NAT with limiters.
It should be handled properly now in 2.2.3 i will re-test this again as i did for a similar report.
If anyone can confirm it works for them on 2.2.3 as well it would be good.
I had disabled the limiter by turning off the associated firewall rule. I have upgraded to 2.2.3 and tried re-enabling the rule, and once the threshold is reached, traffic is no longer passed.
I also tried deleting the firewall rules and the limiters, then recreating them from scratch, but with the same result: when the threshold is reached, traffic matched by the firewall rule that feeds the limiter simply stops passing.
If you would like any further information just let me know.
Updated by Adam Hirsch over 9 years ago
Like Ryan, I'm still seeing the issue after upgrading to 2.2.3.
Updated by Chris Buechler over 9 years ago
- Subject changed from In/Out Limiter silently discards all traffic once limit rate reached to Limiters on firewall rules where NAT applies drop all traffic
- Status changed from Feedback to Confirmed
- Assignee deleted (Ermal Luçi)
- Affected Version changed from 2.2 to 2.2.x
Updated the subject to reflect the root problem; closing #4596 as a duplicate of this.
Updated by Srdjan Jovanovich over 9 years ago
I'm still seeing the issue after upgrading to 2.2.3. NAT with limiters means no traffic: once the rule is saved with limiters, there's basically no traffic at all.
P.S. I can't believe this problem has now persisted through several versions, or rather ever since the step up to FreeBSD 10.
Updated by zuber ahmed about 9 years ago
Hi,
The traffic limiter is still not working with Squid3 (transparent mode) + SquidGuard on version 2.2.4.
Is there any timeline for fixing this? The issue has been outstanding for a long time.
Thanks in advance for fixing this, or for any temporary fix or patch.
Updated by Albert Yang about 9 years ago
I just wanted to add to what zuber ahmed reported: NAT reflection also breaks when limiters are applied on the LAN.
Thank you
Updated by Bryan Bercero about 9 years ago
I just want to note that this bug is still present. Any developments? I tested with 2.2.2.
Updated by Cristian Ciceri almost 9 years ago
Issue still present on version 2.2.5.
Updated by Sergio Handal almost 9 years ago
Issue still present on version 2.2.5.
When a limiter is enabled in a firewall rule, NAT reflection does not work.
Updated by Hamilton Calixto over 8 years ago
The problem is still present in version 2.2.6. Is there a timeline for solving it?
Updated by Luiz Souza over 8 years ago
- Target version changed from 2.3 to 2.3.1
Updated by Albert Yang over 8 years ago
Limiters work with a Squid proxy (transparent and WPAD) + SquidGuard.
Hopefully NAT reflection will work soon too :)
Thanks to Riroxi
Updated by Riroxi . over 8 years ago
Albert Yang wrote:
Limiters work with squid proxy(transparent and WPAD)+squidguard
Hopefully soon with Nat reflection :)
Thanks to Riroxi
Hello Albert!
I tested this workaround for a few days, and some apps like download managers can bypass the limiters with this rule set :(
We will need to wait for a definitive solution.
See you.
Updated by Chris Buechler over 8 years ago
- Target version changed from 2.3.1 to 2.3.2
Updated by Aaron McDiarmid over 8 years ago
How come this problem keeps getting pushed back to later versions? Is there an underlying issue that prevents it from being fixed?
Updated by Chris Buechler over 8 years ago
Aaron McDiarmid wrote:
How come this problem keeps getting pushed back to later versions? Is there an underlying issue that prevents it from being fixed?
It's not an easy problem to fix; OS changes like this are risky, and we're looking to release 2.3.1 in a couple of weeks, which isn't enough time.
Updated by gmar almnsoor over 8 years ago
A workaround claimed as a definitive solution is posted here:
https://forum.pfsense.org/index.php?topic=106640.0
Enjoy.
Updated by Hamilton Calixto over 8 years ago
We have had this bug for a year now. When will a solution be presented? I am dismayed, as this is an extremely important feature in my datacenter.
Updated by Luca De Andreis over 8 years ago
Hamilton Calixto wrote:
We have had this bug for a year now. When will a solution be presented? I am dismayed, as this is an extremely important feature in my datacenter.
+1. I'm forced to stay on pfSense 2.1.5; limiters on 1:1 NAT are absolutely essential for me, and using a workaround is not a solution.
Updated by Matt Smith over 8 years ago
+1. I have dozens of 2.1.5 boxes because of this critical bug.
I crossed my fingers, but it seems 2.3 is still not production-ready; obviously 2.2 was not either.
Updated by Roman Spörk over 8 years ago
This bug is a big problem for me.
The traffic shaping feature was one of the reasons we chose pfSense.
I bought an XG-1540 with two SSDs, 10 GbE, and 32GB RAM. To use traffic shaping with a Squid proxy, I had to create a very complicated setup with cascaded virtual pfSense appliances.
I hope there will be a solution for this bug.
Updated by Chris Buechler over 8 years ago
- Target version changed from 2.3.2 to 2.4.0
Updated by oscar velazquez over 8 years ago
Since we are not getting a solution any time soon, I guess we can use two pfSense boxes in line: one with the limiter and the other with the cache.
The problem is I can't for the life of me get a transparent proxy cache working between my limiter box and my LAN. Does anyone know a good guide or an easy way to set up a transparent proxy cache box?
WAN --- limiter/dhcp (192.168.0.1) --- transparent web proxy cache/no dhcp (192.168.0.2) --- LAN
I have tried many tutorials and configs, but it never passes data through when I activate the caching (not the limiter) as a transparent box.
I think this is what Roman Spörk did?
Thanks
Updated by Luca De Andreis over 8 years ago
OMG. The 1:1 NAT problem with limiters persists.
It works well on 2.1.5; 2.2.x and 2.3.x are both broken. Sigh! We are forced to use 2.1.5, with its many SECURITY HOLES.
Luca
Updated by → luckman212 over 8 years ago
Now that the target version has been bumped to 2.4 (FreeBSD 11), can anyone at least say whether the bug has been fixed in FreeBSD? If this is indeed an upstream problem, has a bug been filed with the FreeBSD project? If not, I'm afraid this can is going to keep getting kicked down the road.
Updated by Jose Duarte over 8 years ago
Have you tried using a queue inside the limiter instead of the limiter itself? It could make a difference; in my scenario it's the workaround I use to avoid a kernel panic on the second firewall (limiters in HA).
Updated by Andrew Maslin over 8 years ago
Can someone share the FreeBSD bug # so we can track the progress of the root of the issue? Like Luke, I would like to know the status and timeframe of the underlying issue. Thanks!
Updated by Chris Buechler over 8 years ago
Andrew Maslin wrote:
Can someone share the FreeBSD bug # so we can track the progress of the root of the issue? Like Luke, I would like to know the status and timeframe of the underlying issue. Thanks!
There isn't one because the code/feature in question doesn't exist there.
Updated by → luckman212 over 8 years ago
Chris Buechler wrote:
There isn't one because the code/feature in question doesn't exist there.
Now I'm confused: is the bug in FreeBSD, or in pfSense? Are you saying this bug is only reproducible in pfSense, but is due to an underlying FreeBSD issue that can't be replicated in the stock OS, so there is no way to file a proper bug report? It seems we are in a pickle.
Updated by Chris Buechler over 8 years ago
This comes from the use of dummynet in pf, which doesn't exist in stock FreeBSD. The implementation apparently leaves a lot to be desired, which is why it has fallen apart in so many edge cases with newer base OS versions. It was never perfect to begin with, either.
Updated by Steve Tibbetts about 8 years ago
Using pfsense 2.3.2-RELEASE (amd64)
I can confirm disabling the upload limiter solves an issue with limiters and 1:1 NAT.
We don't use Squid or any add-ons, for that matter. Our setup: WAN and LAN interfaces, aliases for each IP in the DHCP scope on LAN, and up/down limiters applied to those aliases via a firewall rule on LAN. There are no connectivity issues, local or WAN. The issue is with accessing servers that are 1:1 NATed from WAN to LAN. It's worth noting the local IPs of those servers are NOT part of the limiter IP range. The upload limiter on LAN breaks NAT reflection.
To get around this we disabled the upload limiter. This is only a temporary fix.
I really hope this can be resolved. Seems like an issue that has been ongoing for a while.
One thing I would like to mention... Prior to our current pfsense setup we had dual pfsense boxes using carp. Same versions, same setup. And that worked. I have a backup of that somewhere.
Updated by Luca De Andreis about 8 years ago
Yes, it is exactly as you described.
ONLY pfSense 2.1.5 works on this configuration. I run several CRITICAL firewalls on 2.1.5, but 2.1.5 is very old and has critical security holes.
I can't run these firewalls on 2.2 or 2.3.
:(
On other firewalls, with limiters from LAN to WAN and NO NAT, there are no problems. But WAN-to-LAN limiters with 1:1 NAT are broken.
Luca
Updated by → luckman212 about 8 years ago
Now that 2.4 dev builds have started, is there any reason to expect this bug might get some attention in the next release? Or will the target get bumped to 3.0?
Updated by Toronto B2 about 8 years ago
I am interested to know whether limiters will ever work again.
It's annoying that they still show in the GUI but don't work. Very disappointing!
Updated by Anders Tillebeck about 8 years ago
I also use limiters and NAT reflection in combination, so I am stuck on 2.1.4 and 2.1.5 until a release where this combination works again. I'm just writing this to note that more than one company is affected :-)
Updated by jake keeys about 8 years ago
Also affected... Is there any plan to fix this in an upcoming release? It's a common use case.
Updated by gmar almnsoor about 8 years ago
A workaround for "Limiters on firewall rules where NAT applies drop all traffic" and for the problem of the limiter blocking internet access with a Squid transparent proxy.
It works on pfSense 2.2.*, 2.3.*, and 2.4.*.
My current solution is here.
Updated by Luca De Andreis almost 8 years ago
This is a workaround, not a clean solution.
Better than nothing, but a native, specific, and definitive fix is desirable.
Luca
Updated by Luiz Souza almost 8 years ago
- Status changed from Confirmed to Feedback
- % Done changed from 0 to 100
Fixed in 2.4.
Updated by Phillip Davis almost 8 years ago
I guess the fix is in the pf port, or...?
Is it something that applies easily back to FreeBSD 10.3 on 2.3.*, and thus could be back-ported so it also appears fixed in 2.3.3-DEVELOPMENT snapshots?
Updated by Jim Pingle almost 8 years ago
Given all the work that's happened on 2.4 with IPFW, I'd say it's best to not attempt a backport. 2.4 is not that far off.
I ran a quick test of this with a floating rule on WAN matching outbound with a limiter. It was broken on 2.3, but works for me on 2.4.
Could use some more testing with port forwards and LAN-side redirects (e.g. transparent squid) but so far, so good.
Updated by Phillip Davis almost 8 years ago
OK. I don't use this, so it doesn't affect the systems I have that will be stuck on 2.3.* (32-bit ALIX). If it is not simple to back-port, then I guess only a few people with old 32-bit systems also use this feature and thus won't be able to get the fix. And if you are using features like this, the site is at least somewhat complex and is likely to upgrade its hardware anyway.
Updated by Jim Pingle almost 8 years ago
- Status changed from Feedback to Resolved
All indications are that this is fixed now, from my own tests and from user feedback.