Bug #10436


softflowd no longer sends flow data after upgrade (v0.9.9_1 -> v1.0.0)

Added by Mark Hassman about 4 years ago. Updated 7 months ago.

Status: Feedback
Priority: Normal
Assignee: -
Category: softflowd
Target version: -
Start date: 04/06/2020
Due date: -
% Done: 0%
Estimated time: -
Plus Target Version: -
Affected Version: 2.4.5
Affected Plus Version: -
Affected Architecture: SG-3100

Description

Hi, after upgrading pfSense from v2.4.4_3 -> v2.4.5 (which included an upgrade of softflowd from v0.9.9_1 -> v1.0.0), softflowd no longer sends flows to the receiver. Running softflowd with -D produces debug output showing flows being added, but the NetFlow receiver never receives data. I've confirmed this with tcpdump on both the source Netgate device and the destination NetFlow receiver - softflowd isn't putting export packets on the wire.
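
A quick way to confirm whether export datagrams ever leave the firewall is to capture on the interface facing the collector. A minimal sketch, assuming the collector listens on UDP 2055 (substitute your own interface and port):

tcpdump -ni mvneta1 udp port 2055

While the bug is present, softflowd keeps printing ADD FLOW lines under -D, but nothing appears in this capture.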

Actions #1

Updated by Manuel Piovan about 4 years ago

Me too.
Can you try with the -P udp flag from the console and report back?
Example: /usr/local/bin/softflowd -D -i 1:vmx1 -n 192.168.10.202:2055 -v 5 -T ether -A sec -p /var/run/softflowd.vmx1.pid -c /var/run/softflowd.vmx1.ctl -P udp

-P udp|tcp|sctp         Specify transport layer protocol for exporting packets
Actions #2

Updated by Mark Hassman about 4 years ago

Manuel Piovan wrote:

Me too.
Can you try with the -P udp flag from the console and report back?
Example: /usr/local/bin/softflowd -D -i 1:vmx1 -n 192.168.10.202:2055 -v 5 -T ether -A sec -p /var/run/softflowd.vmx1.pid -c /var/run/softflowd.vmx1.ctl -P udp

-P udp|tcp|sctp Specify transport layer protocol for exporting packets

Hi Manuel - unfortunately, no change; still zero NetFlow packets sent to the receiver:
/usr/local/bin/softflowd -i 1:mvneta1 -n 192.168.x.x:9995 -v 9 -T full -A sec -p /var/run/softflowd.mvneta1.pid -c /var/run/softflowd.mvneta1.ctl -P udp

I also noticed that after a day of running, the softflowd processes are dying. I run softflowd on three VLANs - when I checked today, only one was still active. So I decided to run it for a longer duration in debug mode (-D):
...
ADD FLOW seq:300 [192.168.x.x]:161 <> [192.168.x.x]:56916 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:301 [192.168.x.x]:161 <> [192.168.x.x]:56917 proto:17 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:302 [192.168.x.x]:37596 <> [x.x.x.x]:993 proto:6 vlan>:0 vlan<:0 ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
Starting expiry scan: mode 0
Queuing flow seq:6 (0x206a0640) for expiry reason 2
Queuing flow seq:21 (0x206a10e0) for expiry reason 2
Queuing flow seq:43 (0x206aed20) for expiry reason 2
Queuing flow seq:44 (0x206aec80) for expiry reason 2
Queuing flow seq:46 (0x206aeb40) for expiry reason 2
Queuing flow seq:47 (0x206aeaa0) for expiry reason 2
Queuing flow seq:48 (0x206aea00) for expiry reason 2
Queuing flow seq:49 (0x206ae960) for expiry reason 2
Queuing flow seq:50 (0x206ae8c0) for expiry reason 2
Queuing flow seq:52 (0x206ae780) for expiry reason 2
Queuing flow seq:77 (0x206c1fe0) for expiry reason 2
Queuing flow seq:78 (0x206c1f40) for expiry reason 2
Queuing flow seq:79 (0x206c1ea0) for expiry reason 2
Queuing flow seq:82 (0x206c1cc0) for expiry reason 2
Queuing flow seq:83 (0x206c1c20) for expiry reason 2
Queuing flow seq:84 (0x206c1b80) for expiry reason 2
Queuing flow seq:86 (0x206c1a40) for expiry reason 2
Queuing flow seq:87 (0x206c19a0) for expiry reason 2
Queuing flow seq:89 (0x206c1860) for expiry reason 2
Finished scan 19 flow(s) to be evicted
Flow 2/0: r 0 offset 371 ie 0004 len 84(0x0054)
Flow 2/1: r 0 offset 451 ie 0004 len 164(0x00a4)
Flow 2/2: r 0 offset 531 ie 0004 len 244(0x00f4)
Flow 2/3: r 0 offset 611 ie 0004 len 324(0x0144)
Flow 2/4: r 0 offset 691 ie 0004 len 404(0x0194)
Flow 2/5: r 0 offset 771 ie 0004 len 484(0x01e4)
Flow 2/6: r 0 offset 851 ie 0004 len 564(0x0234)
Flow 2/7: r 0 offset 931 ie 0004 len 644(0x0284)
Flow 2/8: r 0 offset 1011 ie 0004 len 724(0x02d4)
Flow 2/9: r 0 offset 1091 ie 0004 len 804(0x0324)
Flow 2/10: r 0 offset 1171 ie 0004 len 884(0x0374)
Flow 2/11: r 0 offset 1251 ie 0004 len 964(0x03c4)
Flow 2/12: r 0 offset 1331 ie 0004 len 1044(0x0414)
Flow 2/13: r 0 offset 1411 ie 0004 len 1124(0x0464)
Segmentation fault (core dumped)
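
Since softflowd exits with a core dump, a backtrace would help pinpoint the crash. A sketch, assuming gdb is installed (e.g. via pkg install gdb) and that FreeBSD wrote the core file as softflowd.core in the daemon's working directory (the name and location may differ on your system):

gdb /usr/local/bin/softflowd softflowd.core
(gdb) bt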

Actions #3

Updated by Chris Norris over 3 years ago

Mark Hassman wrote:

Hi, after upgrading pfSense from v2.4.4_3 -> v2.4.5 (which included an upgrade of softflowd from v0.9.9_1 -> v1.0.0), softflowd no longer sends flows to the receiver. Running softflowd with -D produces debug output showing flows being added, but the NetFlow receiver never receives data. I've confirmed this with tcpdump on both the source Netgate device and the destination NetFlow receiver - softflowd isn't putting export packets on the wire.

Same issue for me. Packet capture shows no Netflow packets being sent by the firewall.

Actions #4

Updated by Nigel Smith about 3 years ago

Same issue for me as well: no flows are being exported from the firewall, as confirmed by a capture on the firewall itself. Any ideas on next steps to nudge this forward?

Below is the -D output (I replaced the IPs manually before posting).

# /usr/local/bin/softflowd -D -P udp -i pppoe0 -n x.x.x.x:2055 -v 9 -T proto -A milli -p /var/run/softflowd.pppoe0.pid -c /var/run/softflowd.pppoe0.ctl
Using pppoe0 (idx: 0)
softflowd v1.0.0 starting data collection
Exporting flows to [x.x.x.x]:2055
ADD FLOW seq:1 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:2 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:1 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:3 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:4 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:1 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:5 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:6 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:7 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:8 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:9 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:10 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:11 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:12 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:13 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:14 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:15 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:16 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:17 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:18 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:19 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:20 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:21 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:58 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:22 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:23 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:24 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:25 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:26 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:27 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:28 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:29 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:30 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:31 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:32 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:33 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:34 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:35 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:36 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:37 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:38 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:39 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:40 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:41 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:42 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:43 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:44 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:17 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:45 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:46 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:47 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:48 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:49 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:50 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:51 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:52 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:53 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:54 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:55 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:56 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:57 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:58 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:59 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:60 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:61 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
ADD FLOW seq:62 [x.x.x.x]:0 <> [x.x.x.x]:0 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
Starting expiry scan: mode 0
Queuing flow seq:11 (0x4027d370) for expiry reason 4
Finished scan 1 flow(s) to be evicted
Flow 2/0: r 0 offset 387 ie 0004 len 100(0x0064)
Segmentation fault (core dumped)

Actions #6

Updated by Viktor Gurov almost 3 years ago

Same crash on pfSense 21.02-p2 (SG-3100):

/usr/local/bin/softflowd -D -i 1:mvneta1 -n 192.168.88.41:9995 -v 9 -T full -A sec -p /var/run/softflowd.mvneta1.pid -c /var/run/softflowd.mvneta1.ctl -P udp
Using ue0 (idx: 1)
softflowd v1.0.0 starting data collection
Exporting flows to [192.168.88.41]:9995
ADD FLOW seq:1 *****
...
Starting expiry scan: mode 0
Queuing flow seq:49 (0x20365960) for expiry reason 2
Finished scan 1 flow(s) to be evicted
Flow 1/0: r 0 offset 331 ie 0004 len 44(0x002c)
Segmentation fault (core dumped)

It works fine on FreeBSD 13:

Frame 2: 734 bytes on wire (5872 bits), 734 bytes captured (5872 bits)
Ethernet II, Src: aa:94:a6:cd:4b:42 (aa:94:a6:cd:4b:42), Dst: 4e:49:f0:fe:bf:12 (4e:49:f0:fe:bf:12)
Internet Protocol Version 4, Src: 192.168.88.201, Dst: 192.168.88.41
User Datagram Protocol, Src Port: 52611, Dst Port: 9995
    Source Port: 52611
    Destination Port: 9995
    Length: 700
    Checksum: 0x406e [unverified]
    [Checksum Status: Unverified]
    [Stream index: 0]
Cisco NetFlow/IPFIX
    Version: 9
    Count: 692
    SysUptime: 502.695000000 seconds
    Timestamp: Jun  2, 2021 17:04:01.000000000 MSK
        CurrentSecs: 1622642641
    FlowSequence: 89
    SourceId: 0
    FlowSet 1 [id=1024]
    FlowSet 2 [id=2049]
    FlowSet 3 [id=1024]

FreeBSD fbsd 13.0-RELEASE FreeBSD 13.0-RELEASE #0 releng/13.0-n244733-ea31abc261f: Fri Apr 9 04:24:09 UTC 2021 :/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64

Actions #7

Updated by Danilo Zrenjanin about 2 years ago

Tested on the SG-3100 (21.05.2). I got the same results.

/usr/local/bin/softflowd -D -i 1:mvneta1.4094 -n x.x.x.x:4433 -v 9 -T full -A sec -p /var/run/softflowd.mvneta1.pid
Using mvneta1.4094 (idx: 1)
softflowd v1.0.0 starting data collection
Exporting flows to [x.x.x.x]:4433
ADD FLOW seq:1 [x.x.x.x]:22 <> [x.x.x.x]:53021 proto:6 vlan>:0 vlan<:0  ether:00:00:00:00:00:00 <> 00:00:00:00:00:00
Starting expiry scan: mode 0
Queuing flow seq:20 (0x2035c180) for expiry reason 2
Finished scan 1 flow(s) to be evicted
Flow 2/0: r 0 offset 371 ie 0004 len 84(0x0054)
Segmentation fault (core dumped)
Actions #8

Updated by luckman212 almost 2 years ago

I'm starting down a path that involves softflowd. Does anyone know if this issue persists with the latest snaps?

Actions #9

Updated by Marcelo Cury over 1 year ago

If you set the NetFlow version to PSAMP, it seems to work, but I don't have a collector that can analyze the data.
With all the other NetFlow versions, the firewall doesn't send any data - a capture on UDP port 1517 shows no packets. (The capture below, which does show traffic on 1517, is from the PSAMP test.)
I tested on an SG-3100 running 22.05, softflowd version 1.2.6_1 (softflowd-1.0.0_1).

[22.05-RELEASE][]/root: tcpdump -ni mvneta1.100 udp port 1517
18:53:48.560490 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560504 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560518 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560532 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560546 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
18:53:48.560560 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428
^C18:53:48.560611 IP 192.168.255.249.24232 > 192.168.255.253.1517: UDP, length 1428

9931 packets captured
57021 packets received by filter
33916 packets dropped by kernel

Actions #10

Updated by Marcelo Cury about 1 year ago

Can someone test this with 23.01 snaps on the SG-3100?

Actions #11

Updated by Mark Hassman about 1 year ago

Marcelo Cury wrote in #note-10:

Can someone test this with 23.01 snaps on the SG-3100?

Confirmed - softflowd is working in 23.01 on SG-3100.
:-)

Actions #12

Updated by Marcelo Cury about 1 year ago

I tested and noticed that the softflowd processes are dying.
So I decided to test an older version. Doing this is not recommended; I'm only doing it because I can perform a clean install at any time.

fetch https://firmware.netgate.com/pkg/pfSense_factory-v2_4_1_armv6-pfSense_factory-v2_4_1/All/softflowd-0.9.9_1.txz
fetch https://firmware.netgate.com/pkg/pfSense_factory-v2_4_1_armv6-pfSense_factory-v2_4_1/All/pfSense-pkg-softflowd-1.2.2.txz
pkg install pfSense-pkg-softflowd-1.2.2.txz
pkg install softflowd-0.9.9_1.txz
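
If you go down this route, pkg can pin the downgraded package so a later upgrade doesn't silently replace it (standard pkg locking; run pkg unlock softflowd before upgrading it intentionally):

pkg lock -y softflowd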

It has been working for around 30 minutes without problems. The package doesn't show in the package manager, but it does show in the menu, where I can change the configuration, select interfaces, enable and disable it, etc.

I'll be monitoring it.

Actions #13

Updated by Mark Hassman about 1 year ago

Mark Hassman wrote in #note-11:

Marcelo Cury wrote in #note-10:

Can someone test this with 23.01 snaps on the SG-3100?

Confirmed - softflowd is working in 23.01 on SG-3100.
:-)

Argh.. after ~18 hrs, softflowd died: "Exiting immediately on unexpected signal 11".
It was still shown as active in the web GUI, but it was no longer sending data. Restarting the softflowd service got it sending again.
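
Until the crash itself is fixed, a periodic liveness check is one possible workaround. A minimal sketch for a cron job, assuming the control-socket path used earlier in this thread, and assuming pfSsh.php's svc playback can restart the package service (verify both on your install):

# restart softflowd if the control socket no longer answers
if ! softflowctl -c /var/run/softflowd.mvneta1.ctl statistics >/dev/null 2>&1; then
    /usr/local/sbin/pfSsh.php playback svc restart softflowd
fi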

FYI, softflowd does show under the package manager for me (v1.2.6_1), but I also never removed it, going back multiple versions.

Actions #14

Updated by Marcelo Cury about 1 year ago

I downgraded softflowd, so I'm not using 1.2.6_1; that is the reason it doesn't show in my package manager.
I'm using softflowd-1.2.2 on an SG-3100 with 23.01 firmware, and so far so good - everything is working.
To remove it, I would need to use the CLI and run pkg remove commands.

[23.01-RELEASE][root@pfsense.home.arpa]/root: ps aux | grep softflowd
nobody 68055 0.0 0.2 6296 3584 - Is 01:28 0:01.80 /usr/local/sbin/softflowd -i mvneta1 -n 192.168.255.253:2055 -v 9 -T full -p /var/run/softflowd.mvneta1.pid -c /var/run/softflowd.mvneta1.ctl
nobody 68795 0.0 0.2 6172 3476 - Ss 01:28 0:01.33 /usr/local/sbin/softflowd -i mvneta1.100 -n 192.168.255.253:2055 -v 9 -T full -p /var/run/softflowd.mvneta1.100.pid -c /var/run/softflowd.mvneta1.100.ctl
nobody 69437 0.0 0.2 6296 3584 - Is 01:28 0:00.37 /usr/local/sbin/softflowd -i mvneta1.10 -n 192.168.255.253:2055 -v 9 -T full -p /var/run/softflowd.mvneta1.10.pid -c /var/run/softflowd.mvneta1.10.ctl
root 10009 0.0 0.1 4696 2356 0 S+ 09:02 0:00.01 grep softflowd

[23.01-RELEASE][root@pfsense.home.arpa]/root: uptime
9:00AM up 8:36, 1 user, load averages: 0.69, 0.42, 0.37

[23.01-RELEASE][root@pfsense.home.arpa]/root: tcpdump -ni mvneta1.100 udp port 2055
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on mvneta1.100, link-type EN10MB (Ethernet), capture size 262144 bytes
09:08:01.666188 IP 192.168.255.249.15825 > 192.168.255.253.2055: UDP, length 496
09:08:01.666227 IP 192.168.255.249.15825 > 192.168.255.253.2055: UDP, length 504
09:08:01.666248 IP 192.168.255.249.15825 > 192.168.255.253.2055: UDP, length 464

Edit: I removed that package, which was working perfectly, to try a newer one from the pfSense 2.4.4 repository. It is already installed and working; I'll be monitoring it.
The softflowd packages from pfSense 2.4.5 and newer are already softflowd v1.0.0, which fails with "Exiting immediately on unexpected signal 11".
So I will test packages only up to the pfSense 2.4.4 repository.

So far, so good:

[23.01-RELEASE][root@pfsense.home.arpa]/root: softflowctl -c /var/run/softflowd.mvneta1.100.ctl statistics
softflowd[91476]: Accumulated statistics since 2023-03-01T14:47:58 UTC:
Number of active flows: 95
Packets processed: 1607735
Fragments: 0
Ignored packets: 126 (126 non-IP, 0 too short)
Flows expired: 575 (0 forced)
Flows exported: 575 (1157 records) in 106 packets (0 failures)
Packets received by libpcap: 1609719
Packets dropped by libpcap: 0
Packets dropped by interface: 0

Expired flow statistics:   minimum     average     maximum
Flow bytes:                     48     2126013   421039486
Flow packets:                    1        1767      331179
Duration:                    0.00s      24.69s    1122.58s

Expired flow reasons:
tcp = 0 tcp.rst = 94 tcp.fin = 279
udp = 200 icmp = 2 general = 0
maxlife = 0
over 2 GiB = 0
maxflows = 0
flushed = 0

Per-protocol statistics:      Octets   Packets   Avg Life   Max Life
icmp (1):                        113         1      0.00s      0.00s
tcp (6):                  1221146507   1013797     35.37s   1122.58s
udp (17):                    1310628      2067      5.01s    914.00s
ipv6-icmp (58):                   48         1      0.00s      0.00s

Actions #15

Updated by Marcelo Cury about 1 year ago

I suppose this Redmine issue 10436 could be closed if Netgate made the previous version of softflowd (from pfSense 2.4.4) available in the package manager. They could write something like this in the package description: "some devices are not working properly with the newer version of softflowd; in that case, you can try this version."
This would be an "easy fix" and, I believe, wouldn't need much effort from Netgate engineers - correct me if I'm wrong.

I have been testing that version and it's working perfectly.
If Netgate wishes to access my device by any means to check if it is working, I would be more than happy to provide access.

My steps to install the previous version:
[23.01-RELEASE][root@pfsense.home.arpa]/root/2.44: fetch https://firmware.netgate.com/pkg/pfSense_factory-v2_4_4_armv6-pfSense_factory-v2_4_4/All/pfSense-pkg-softflowd-1.2.3.txz
[23.01-RELEASE][root@pfsense.home.arpa]/root/2.44: fetch https://firmware.netgate.com/pkg/pfSense_factory-v2_4_4_armv6-pfSense_factory-v2_4_4/All/softflowd-0.9.9_1.txz
[23.01-RELEASE][root@pfsense.home.arpa]/root/2.44: pkg install softflowd-0.9.9_1.txz
[23.01-RELEASE][root@pfsense.home.arpa]/root/2.44: pkg install pfSense-pkg-softflowd-1.2.3.txz
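
After installing, it's worth confirming which version is actually present (a standard pkg query):

pkg info softflowd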

Once it is installed, configure it as you wish, then enable it - it is working perfectly.

Thanks.

Edit: In case you wish to remove this package, just SSH into pfSense and run: pkg remove softflowd

Actions #16

Updated by Mark Hassman 10 months ago

FYI, after upgrading to pfSense 23.05 and softflowd 1.2.6_1, stability has returned - two weeks of uptime so far.

Actions #17

Updated by Azamat Khakimyanov 7 months ago

  • Status changed from New to Resolved

Tested on 23.05_1 with softflowd 1.2.6_1.

I ran softflowd on different interfaces (WAN, LAN and a bridge) and generated lots of different types of traffic (web, TCP and UDP iperf tests, ICMP, etc.), and on my NetFlow collector (Ubuntu 23.04 with the nfcapd package) I saw correct NetFlow reports.
I tested NetFlow versions 5, 9 and 10 (IPFIX).

I marked this bug as resolved.

Actions #18

Updated by Azamat Khakimyanov 7 months ago

  • Status changed from Resolved to Feedback

My fault - I tested it on KVM with vtnet NICs. I'm afraid I don't have an SG-3100.

If anyone can run this test on an SG-3100 with 23.05_1 and softflowd 1.2.6_1, that would be perfect.
