Bug #10436
softflowd no longer sends flow data after upgrade (v0.9.9_1 -> v1.0.0)
Description
Hi, after upgrading pfSense from v2.4.4_3 -> v2.4.5 (which included an upgrade of softflowd from v0.9.9_1 -> v1.0.0), softflowd no longer sends flows to the receiver. Running softflowd with -D produces debug output showing flows being added, but the NetFlow receiver never receives any data. I've confirmed this with tcpdump on both the source Netgate device and the destination NetFlow receiver - softflowd isn't putting export packets on the wire.
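For reference, the wire checks on each end were along these lines (a rough sketch - the interface names and collector port 2055 here are placeholders, not my exact values):
tcpdump -ni mvneta1 udp port 2055    # on the firewall: nothing seen leaving
tcpdump -ni eth0 udp port 2055       # on the NetFlow receiver: nothing arriving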
Updated by Manuel Piovan almost 3 years ago
Me too.
Can you try with the flag -P udp from the console and report back?
example /usr/local/bin/softflowd -D -i 1:vmx1 -n 192.168.10.202:2055 -v 5 -T ether -A sec -p /var/run/softflowd.vmx1.pid -c /var/run/softflowd.vmx1.ctl -P udp
-P udp|tcp|sctp Specify transport layer protocol for exporting packets
Updated by Mark Hassman almost 3 years ago
Manuel Piovan wrote:
Me too.
Can you try with the flag -P udp from the console and report back?
example /usr/local/bin/softflowd -D -i 1:vmx1 -n 192.168.10.202:2055 -v 5 -T ether -A sec -p /var/run/softflowd.vmx1.pid -c /var/run/softflowd.vmx1.ctl -P udp
-P udp|tcp|sctp Specify transport layer protocol for exporting packets
Hi Manuel, unfortunately no change - still zero NetFlow packets sent to the receiver:
/usr/local/bin/softflowd -i 1:mvneta1 -n 192.168.x.x:9995 -v 9 -T full -A sec -p /var/run/softflowd.mvneta1.pid -c /var/run/softflowd.mvneta1.ctl -P udp
I also noticed that after a day of running, the softflowd processes are dying. I run softflowd on three VLANs - when I checked today, only one instance was still active. So I decided to run it for a longer duration in debug mode (-D):
...
ADD FLOW seq:300 [192.168.x.x]:161 <> [192.168.x.x]:56916 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:301 [192.168.x.x]:161 <> [192.168.x.x]:56917 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:302 [192.168.x.x]:37596 <> [x.x.x.x]:993 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
Starting expiry scan: mode 0
Queuing flow seq:6 (0x206a0640) for expiry reason 2
Queuing flow seq:21 (0x206a10e0) for expiry reason 2
Queuing flow seq:43 (0x206aed20) for expiry reason 2
Queuing flow seq:44 (0x206aec80) for expiry reason 2
Queuing flow seq:46 (0x206aeb40) for expiry reason 2
Queuing flow seq:47 (0x206aeaa0) for expiry reason 2
Queuing flow seq:48 (0x206aea00) for expiry reason 2
Queuing flow seq:49 (0x206ae960) for expiry reason 2
Queuing flow seq:50 (0x206ae8c0) for expiry reason 2
Queuing flow seq:52 (0x206ae780) for expiry reason 2
Queuing flow seq:77 (0x206c1fe0) for expiry reason 2
Queuing flow seq:78 (0x206c1f40) for expiry reason 2
Queuing flow seq:79 (0x206c1ea0) for expiry reason 2
Queuing flow seq:82 (0x206c1cc0) for expiry reason 2
Queuing flow seq:83 (0x206c1c20) for expiry reason 2
Queuing flow seq:84 (0x206c1b80) for expiry reason 2
Queuing flow seq:86 (0x206c1a40) for expiry reason 2
Queuing flow seq:87 (0x206c19a0) for expiry reason 2
Queuing flow seq:89 (0x206c1860) for expiry reason 2
Finished scan 19 flow(s) to be evicted
Flow 2/0: r 0 offset 371 ie 0004 len 84(0x0054)
Flow 2/1: r 0 offset 451 ie 0004 len 164(0x00a4)
Flow 2/2: r 0 offset 531 ie 0004 len 244(0x00f4)
Flow 2/3: r 0 offset 611 ie 0004 len 324(0x0144)
Flow 2/4: r 0 offset 691 ie 0004 len 404(0x0194)
Flow 2/5: r 0 offset 771 ie 0004 len 484(0x01e4)
Flow 2/6: r 0 offset 851 ie 0004 len 564(0x0234)
Flow 2/7: r 0 offset 931 ie 0004 len 644(0x0284)
Flow 2/8: r 0 offset 1011 ie 0004 len 724(0x02d4)
Flow 2/9: r 0 offset 1091 ie 0004 len 804(0x0324)
Flow 2/10: r 0 offset 1171 ie 0004 len 884(0x0374)
Flow 2/11: r 0 offset 1251 ie 0004 len 964(0x03c4)
Flow 2/12: r 0 offset 1331 ie 0004 len 1044(0x0414)
Flow 2/13: r 0 offset 1411 ie 0004 len 1124(0x0464)
Segmentation fault (core dumped)
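In case a backtrace helps, it should be possible to pull one from the core dump along these lines (a sketch - the core file name and location depend on the kern.corefile sysctl on the device):
gdb /usr/local/bin/softflowd softflowd.core    # load the binary together with the core
(gdb) bt                                       # print the backtrace at the point of the segfault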
Updated by Chris Norris over 2 years ago
Mark Hassman wrote:
Hi, after upgrading pfSense from v2.4.4_3 -> v2.4.5 (which included an upgrade of softflowd from v0.9.9_1 -> v1.0.0), softflowd no longer sends flows to the receiver. Running softflowd with -D produces debug output showing flows being added, but the NetFlow receiver never receives any data. I've confirmed this with tcpdump on both the source Netgate device and the destination NetFlow receiver - softflowd isn't putting export packets on the wire.
Same issue for me. A packet capture shows no NetFlow packets being sent by the firewall.
Updated by Nigel Smith almost 2 years ago
Same issue for me as well. No flows are being exported from the firewall, confirmed by a packet capture on the firewall. Any ideas on next steps to nudge this forward?
Below is the -D output (I've replaced the IPs manually before posting).
# /usr/local/bin/softflowd -D -P udp -i pppoe0 -n x.x.x.x:2055 -v 9 -T proto -A milli -p /var/run/softflowd.pppoe0.pid -c /var/run/softflowd.pppoe0.ctl
Using pppoe0 (idx: 0)
softflowd v1.0.0 starting data collection
Exporting flows to [x.x.x.x]:2055
ADD FLOW seq:1 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:2 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:1 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:3 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:4 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:1 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:5 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:6 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:7 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:8 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:9 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:10 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:11 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:12 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:13 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:14 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:15 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:16 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:17 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:18 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:19 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:20 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:21 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:58 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:22 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:23 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:24 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:25 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:26 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:27 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:28 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:29 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:30 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:31 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:32 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:33 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:34 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:35 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:36 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:37 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:38 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:39 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:40 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:41 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:42 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:43 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:44 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:45 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:46 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:47 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:48 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:49 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:50 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:51 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:52 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:53 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:54 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:55 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:56 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:57 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:58 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:59 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:60 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:61 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:62 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
Starting expiry scan: mode 0
Queuing flow seq:11 (0x4027d370) for expiry reason 4
Finished scan 1 flow(s) to be evicted
Flow 2/0: r 0 offset 387 ie 0004 len 100(0x0064)
Segmentation fault (core dumped)
Updated by Viktor Gurov over 1 year ago
same crash on pfSense 21.02-p2 (SG-3100):
/usr/local/bin/softflowd -D -i 1:mvneta1 -n 192.168.88.41:9995 -v 9 -T full -A sec -p /var/run/softflowd.mvneta1.pid -c /var/run/softflowd.mvneta1.ctl -P udp
Using ue0 (idx: 1)
softflowd v1.0.0 starting data collection
Exporting flows to [192.168.88.41]:9995
ADD FLOW seq:1 *****
...
Starting expiry scan: mode 0
Queuing flow seq:49 (0x20365960) for expiry reason 2
Finished scan 1 flow(s) to be evicted
Flow 1/0: r 0 offset 331 ie 0004 len 44(0x002c)
Segmentation fault (core dumped)
works fine on FreeBSD 13:
Frame 2: 734 bytes on wire (5872 bits), 734 bytes captured (5872 bits)
Ethernet II, Src: aa:94:a6:cd:4b:42 (aa:94:a6:cd:4b:42), Dst: 4e:49:f0:fe:bf:12 (4e:49:f0:fe:bf:12)
Internet Protocol Version 4, Src: 192.168.88.201, Dst: 192.168.88.41
User Datagram Protocol, Src Port: 52611, Dst Port: 9995
    Source Port: 52611
    Destination Port: 9995
    Length: 700
    Checksum: 0x406e [unverified]
    [Checksum Status: Unverified]
    [Stream index: 0]
Cisco NetFlow/IPFIX
    Version: 9
    Count: 692
    SysUptime: 502.695000000 seconds
    Timestamp: Jun 2, 2021 17:04:01.000000000 MSK
    CurrentSecs: 1622642641
    FlowSequence: 89
    SourceId: 0
    FlowSet 1 [id=1024]
    FlowSet 2 [id=2049]
    FlowSet 3 [id=1024]
FreeBSD fbsd 13.0-RELEASE FreeBSD 13.0-RELEASE #0 releng/13.0-n244733-ea31abc261f: Fri Apr 9 04:24:09 UTC 2021 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64
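The decode above was taken at the collector; something along these lines should reproduce it there (a sketch - the interface name is a placeholder, and the -d option simply forces tshark's cflow dissector onto the configured port 9995):
tshark -i eth0 -f 'udp port 9995' -d udp.port==9995,cflow -V    # capture exports and decode them as NetFlow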
Updated by Danilo Zrenjanin 12 months ago
Tested on the SG-3100 (21.05.2). I got the same results.
/usr/local/bin/softflowd -D -i 1:mvneta1.4094 -n x.x.x.x:4433 -v 9 -T full -A sec -p /var/run/softflowd.mvneta1.pid
Using mvneta1.4094 (idx: 1)
softflowd v1.0.0 starting data collection
Exporting flows to [x.x.x.x]:4433
ADD FLOW seq:1 [x.x.x.x]:22 <> [x.x.x.x]:53021 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
Starting expiry scan: mode 0
Queuing flow seq:20 (0x2035c180) for expiry reason 2
Finished scan 1 flow(s) to be evicted
Flow 2/0: r 0 offset 371 ie 0004 len 84(0x0054)
Segmentation fault (core dumped)
Updated by luckman212 8 months ago
I'm starting down a path that involves softflowd. Does anyone know if this issue persists with the latest snaps?
Updated by Marcelo Cury 5 months ago
If you set the NetFlow version to PSAMP, it seems to work, but I don't have a collector to analyze the data.
With all other NetFlow versions the firewall doesn't send any data - a capture on UDP port 1517 shows no packets.
I have tested using an SG-3100 on 22.05, softflowd package version 1.2.6_1 (softflowd-1.0.0_1).
[22.05-RELEASE][root@pfsense.home.arpa]/root: tcpdump -ni mvneta1.100 udp port 1517
18:53:48.560490 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560504 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560518 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560532 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560546 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560560 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
^C18:53:48.560611 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
9931 packets captured
57021 packets received by filter
33916 packets dropped by kernel
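As an extra data point, the daemon's own view of what it has exported can be queried over its control socket with the bundled softflowctl (a sketch - the socket path must match the -c argument the instance was started with):
softflowctl -c /var/run/softflowd.mvneta1.ctl statistics    # dump the daemon's flow/export counters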
Updated by Marcelo Cury 7 days ago
Can someone test this with 23.01 snaps on the SG-3100?