Bug #15413
Kernel panic in HA nodes when under high load
Status: Closed
Description
Two 1541s running 23.09.1 in this example:
```
db:1:pfs> bt
Tracing pid 12 tid 100124 td 0xfffffe00e24d1560
kdb_enter() at kdb_enter+0x32/frame 0xfffffe01066f8820
vpanic() at vpanic+0x163/frame 0xfffffe01066f8950
panic() at panic+0x43/frame 0xfffffe01066f89b0
trap_fatal() at trap_fatal+0x40c/frame 0xfffffe01066f8a10
trap_pfault() at trap_pfault+0x4f/frame 0xfffffe01066f8a70
calltrap() at calltrap+0x8/frame 0xfffffe01066f8a70
--- trap 0xc, rip = 0xffffffff80fb5e29, rsp = 0xfffffe01066f8b40, rbp = 0xfffffe01066f8ba0 ---
pf_test_state_udp() at pf_test_state_udp+0x2a9/frame 0xfffffe01066f8ba0
pf_test() at pf_test+0x110a/frame 0xfffffe01066f8d40
pf_check_in() at pf_check_in+0x27/frame 0xfffffe01066f8d60
pfil_mbuf_in() at pfil_mbuf_in+0x38/frame 0xfffffe01066f8d90
ip_input() at ip_input+0x3ae/frame 0xfffffe01066f8df0
swi_net() at swi_net+0x128/frame 0xfffffe01066f8e60
ithread_loop() at ithread_loop+0x257/frame 0xfffffe01066f8ef0
fork_exit() at fork_exit+0x7f/frame 0xfffffe01066f8f30
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe01066f8f30
--- trap 0, rip = 0, rsp = 0, rbp = 0 ---

db:1:pfs> show registers
cs         0x20
ds         0x3b
es         0x3b
fs         0x13
gs         0x1b
ss         0x28
rax        0x12
rcx        0xffffffff814591d9
rdx        0xffffffff844195ff
rbx        0x100
rsp        0xfffffe01066f8820
rbp        0xfffffe01066f8820
rsi        0xfffffe01066f8290
rdi        0xffffffff82d40298  vt_conswindow+0x10
r8         0x10
r9         0x10
r10        0xf
r11        0x10
r12        0
r13        0
r14        0xffffffff813dc4fe
r15        0xfffffe00e24d1560
rip        0xffffffff80d38d62  kdb_enter+0x32
rflags     0x82
kdb_enter+0x32: movq    $0,0x2344aa3(%rip)

db:1:pfs> show pcpu
cpuid        = 8
dynamic pcpu = 0xfffffe00b4a6af00
curthread    = 0xfffffe00e24d1560: pid 12 tid 100124 critnest 1 "swi1: netisr 11"
curpcb       = 0xfffffe00e24d1a80
fpcurthread  = none
idlethread   = 0xfffffe00e23fce40: tid 100011 "idle: cpu8"
self         = 0xffffffff84018000
curpmap      = 0xffffffff83021ab0
tssp         = 0xffffffff84018384
rsp0         = 0xfffffe01066f9000
kcr3         = 0x8000000003f6f002
ucr3         = 0xffffffffffffffff
scr3         = 0x69eb1e858
gs32p        = 0xffffffff84018404
ldt          = 0xffffffff84018444
tss          = 0xffffffff84018434
curvnet      = 0xfffff80005383440
```
Crashing repeatedly. Multiple crash reports available.
Also see: https://netgate.slack.com/archives/C4GUL8CKF/p1671555158444819
Similar crash in 22.05.
Updated by Steve Wheeler 8 months ago
- Subject changed from Kernal panic in HA nodes when under high load to Kernel panic in HA nodes when under high load
Updated by Kristof Provost 8 months ago
- Assignee set to Kristof Provost
The backtrace address shows we're crashing in `if (PF_ANEQ(pd->src, &nk->addr[pd->sidx], pd->af) ||`. That likely means that one or both of the state keys were NULL.
Those get set to NULL when the state is being detached, in pf_detach_state(). That's done with the key lock held, but without holding the id lock.
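For illustration, here's a small userland model of that failure mode. The `toy_*` names are invented for this sketch and it is not the pf source; it only mimics the shape of the comparison: once the key pointer has been cleared, the address check dereferences NULL.

```c
/*
 * Illustrative userland toy only, not the FreeBSD pf code. It mimics the
 * shape of the comparison in pf_test_state_udp(): the address check reads
 * through the state key pointer, so if pf_detach_state() has already set
 * that pointer to NULL, the &nk->addr[...] access is a NULL dereference,
 * matching the page fault in the backtrace above.
 */
#include <stdio.h>
#include <string.h>
#include <stddef.h>

struct toy_addr { unsigned char v[16]; };

struct toy_state_key {
	struct toy_addr addr[2];
	unsigned short  port[2];
};

struct toy_state {
	struct toy_state_key *key[2];	/* stand-ins for the wire/stack keys */
};

/* crude stand-in for the PF_ANEQ() address comparison */
static int
toy_addr_neq(const struct toy_addr *a, const struct toy_addr *b)
{
	return (memcmp(a->v, b->v, sizeof(a->v)) != 0);
}

int
main(void)
{
	struct toy_state_key k = { 0 };
	struct toy_state s = { .key = { &k, &k } };
	struct toy_addr src = { 0 };

	/* Normal case: keys attached, the comparison is safe. */
	struct toy_state_key *nk = s.key[1];
	printf("keys attached, addresses differ: %d\n",
	    toy_addr_neq(&src, &nk->addr[0]));

	/* What pf_detach_state() effectively does to the state. */
	s.key[0] = s.key[1] = NULL;

	/* The crashing pattern: nk is NULL, so nk->addr[...] would fault. */
	nk = s.key[1];
	if (nk == NULL)
		printf("keys detached; dereferencing nk->addr here is the "
		    "NULL-pointer fault from the panic\n");
	return (0);
}
```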
That doesn't automatically mean pf_test_state_udp() could be processing a state while pf_detach_state() is removing its keys, however: pf_unlink_state() sets the state timeout to PFTM_UNLINKED, which causes pf_find_state() to pretend the state doesn't exist. In other words, once pf_unlink_state() has been called on a state, we should no longer find that state in pf_test_state_udp().
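A similarly hedged toy of that ordering argument (again invented `toy_*` names, not the kernel code): once the timeout is PFTM_UNLINKED, the lookup refuses to return the state, so the later key detach can't be observed through the normal packet path.

```c
/*
 * Userland toy of the teardown ordering described above, not the kernel
 * source: pf_unlink_state() first marks the state PFTM_UNLINKED so that
 * pf_find_state() will no longer return it; only afterwards are the keys
 * detached (set to NULL), so the normal packet path can't see NULL keys.
 */
#include <stdio.h>
#include <stddef.h>

enum { TOY_PFTM_UDP_ESTABLISHED = 1, TOY_PFTM_UNLINKED = 99 };

struct toy_key   { int unused; };
struct toy_state {
	int             timeout;
	struct toy_key *key[2];
};

/* stand-in for pf_find_state(): unlinked states are treated as absent */
static struct toy_state *
toy_find_state(struct toy_state *s)
{
	if (s->timeout == TOY_PFTM_UNLINKED)
		return (NULL);
	return (s);
}

/* stand-in for pf_unlink_state() followed by pf_detach_state() */
static void
toy_unlink_then_detach(struct toy_state *s)
{
	s->timeout = TOY_PFTM_UNLINKED;	/* step 1: hide the state from lookups */
	s->key[0] = s->key[1] = NULL;	/* step 2: now detaching keys is safe  */
}

int
main(void)
{
	struct toy_key k = { 0 };
	struct toy_state s = { .timeout = TOY_PFTM_UDP_ESTABLISHED,
	    .key = { &k, &k } };

	printf("before unlink: findable=%d\n", toy_find_state(&s) != NULL);
	toy_unlink_then_detach(&s);
	printf("after unlink:  findable=%d (NULL keys are unreachable)\n",
	    toy_find_state(&s) != NULL);
	return (0);
}
```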
There are two scenarios where this could still happen, both when state creation fails: pf_insert_state() can also call pf_detach_state() without setting the timeout to PFTM_UNLINKED, and the same is true for pf_state_key_attach(). Perhaps we wound up in one of those error paths.
That would leave a (brief) window in which other threads could still find the state (because timeout != PFTM_UNLINKED) and acquire the state lock while pf_detach_state() is simultaneously setting the keys to NULL, leading to the above crash.
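To make that window concrete, here's the same style of toy contrasting the suspected error-path ordering with a reordered variant. This only illustrates the hypothesis above; it is not a claim about what the patch linked below actually changes.

```c
/*
 * Userland toy of the suspected state-creation error path, not the kernel
 * source and not necessarily what the referenced patch does. If the keys
 * are detached without first marking the state PFTM_UNLINKED, a concurrent
 * lookup in that window can still return the state, and the caller then
 * dereferences NULL key pointers. Setting the timeout first closes the
 * window.
 */
#include <stdio.h>
#include <stddef.h>

enum { TOY_PFTM_UDP_ESTABLISHED = 1, TOY_PFTM_UNLINKED = 99 };

struct toy_key   { int unused; };
struct toy_state {
	int             timeout;
	struct toy_key *key[2];
};

static struct toy_state *
toy_find_state(struct toy_state *s)
{
	return (s->timeout == TOY_PFTM_UNLINKED ? NULL : s);
}

/* suspected buggy error path: keys detached, state still findable */
static void
toy_error_path_suspected(struct toy_state *s)
{
	s->key[0] = s->key[1] = NULL;
}

/* low-risk ordering: hide the state first, then detach the keys */
static void
toy_error_path_reordered(struct toy_state *s)
{
	s->timeout = TOY_PFTM_UNLINKED;
	s->key[0] = s->key[1] = NULL;
}

int
main(void)
{
	struct toy_key k = { 0 };
	struct toy_state a = { TOY_PFTM_UDP_ESTABLISHED, { &k, &k } };
	struct toy_state b = { TOY_PFTM_UDP_ESTABLISHED, { &k, &k } };

	toy_error_path_suspected(&a);
	struct toy_state *found = toy_find_state(&a);
	printf("suspected order: findable=%d, key[0]=%p  <- NULL deref risk\n",
	    found != NULL, found != NULL ? (void *)found->key[0] : (void *)NULL);

	toy_error_path_reordered(&b);
	printf("reordered:       findable=%d\n", toy_find_state(&b) != NULL);
	return (0);
}
```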
That's all speculative, because there's really not enough information here to be sure. Still, if that really is the cause, there's a very simple and low-risk patch to fix it: https://reviews.freebsd.org/D45101
Updated by Jim Pingle 8 months ago
- Target version set to 2.8.0
- Plus Target Version set to 24.07
Updated by Kristof Provost 8 months ago
- Status changed from New to Feedback
What we hope is the fix has been merged into our branches.
Updated by Jim Pingle 7 months ago
- Plus Target Version changed from 24.07 to 24.08
Updated by Jim Pingle 2 months ago
- Plus Target Version changed from 24.08 to 24.11
Updated by Jim Pingle about 1 month ago
- Status changed from Feedback to Resolved
The patch has been in for a while and there have been public builds since. No further reports of this panic and no reports of other regressions, so I think we can call this solved for now. If someone hits the same panic again we can reopen/retarget the issue.