Bug #5498
flowworker: Assertion in CheckWorkQueue (Closed)
Added by Timothy Gilbert over 2 years ago. Updated over 1 year ago.
Description
suricata: flow-worker.c:184: CheckWorkQueue: Assertion `!(f->use_cnt > 0)' failed.
The assertion fires every day, or at worst every other day. We are currently running 6.0.6 across multiple CentOS 7 servers.
Files
suricata-config-dump.txt (18.5 KB) - Timothy Gilbert, 09/15/2022 08:27 PM
Updated by Timothy Gilbert over 2 years ago
suricata: flow-worker.c:184: CheckWorkQueue: Assertion `!(f->use_cnt > 0)' failed.
The assertion fires every day, or at worst every other day. We are currently running 6.0.6 across multiple CentOS 7 servers. We are also seeing this on Suricata 6.0.5.
Updated by Timothy Gilbert over 2 years ago
Build info for our 6.0.5 Suricata Servers
This is Suricata version 6.0.5 RELEASE
Features: NFQ PCAP_SET_BUFF AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HAVE_LUA HAVE_LIBJANSSON TLS TLS_GNU MAGIC RUST
SIMD support: none
Atomic intrinsics: 1 2 4 8 byte(s)
64-bits, Little-endian architecture
GCC version 4.8.5 20150623 (Red Hat 4.8.5-44), C version 199901
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
thread local storage method: __thread
compiled with LibHTP v0.5.40, linked against LibHTP v0.5.40
Suricata Configuration:
  AF_PACKET support: yes
  eBPF support: no
  XDP support: no
  PF_RING support: no
  NFQueue support: yes
  NFLOG support: no
  IPFW support: no
  Netmap support: no
  DAG enabled: no
  Napatech enabled: no
  WinDivert enabled: no
  Unix socket enabled: yes
  Detection enabled: yes
  Libmagic support: yes
  libnss support: yes
  libnspr support: yes
  libjansson support: yes
  hiredis support: yes
  hiredis async with libevent: yes
  Prelude support: no
  PCRE jit: yes
  LUA support: yes
  libluajit: no
  GeoIP2 support: yes
  Non-bundled htp: no
  Hyperscan support: no
  Libnet support: yes
  liblz4 support: yes
  HTTP2 decompression: no
  Rust support: yes
  Rust strict mode: no
  Rust compiler path: /usr/bin/rustc
  Rust compiler version: rustc 1.61.0 (Red Hat 1.61.0-2.el7)
  Cargo path: /usr/bin/cargo
  Cargo version: cargo 1.61.0
  Cargo vendor: yes
  Python support: yes
  Python path: /usr/bin/python3
  Python distutils yes
  Python yaml yes
  Install suricatactl: yes
  Install suricatasc: yes
  Install suricata-update: yes
  Profiling enabled: no
  Profiling locks enabled: no
  Plugin support (experimental): yes
Development settings:
  Coccinelle / spatch: no
  Unit tests enabled: no
  Debug output enabled: no
  Debug validation enabled: no
Generic build parameters:
  Installation prefix: /usr
  Configuration directory: /etc/suricata/
  Log directory: /var/log/suricata/
  --prefix /usr --sysconfdir /etc --localstatedir /var --datarootdir /usr/share
  Host: x86_64-redhat-linux-gnu
  Compiler: gcc (exec name) / g++ (real)
  GCC Protect enabled: yes
  GCC march native enabled: no
  GCC Profile enabled: no
  Position Independent Executable enabled: yes
  CFLAGS -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -std=gnu99 -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
  PCAP_CFLAGS
  SECCFLAGS -fstack-protector -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security
and here is Build Info for the 6.0.6 boxes
This is Suricata version 6.0.6 RELEASE
Features: NFQ PCAP_SET_BUFF AF_PACKET HAVE_PACKET_FANOUT LIBCAP_NG LIBNET1.1 HAVE_HTP_URI_NORMALIZE_HOOK PCRE_JIT HAVE_NSS HAVE_LUA HAVE_LIBJANSSON TLS TLS_GNU MAGIC RUST
SIMD support: none
Atomic intrinsics: 1 2 4 8 byte(s)
64-bits, Little-endian architecture
GCC version 4.8.5 20150623 (Red Hat 4.8.5-44), C version 199901
compiled with _FORTIFY_SOURCE=2
L1 cache line size (CLS)=64
thread local storage method: __thread
compiled with LibHTP v0.5.40, linked against LibHTP v0.5.40
Suricata Configuration:
  AF_PACKET support: yes
  eBPF support: no
  XDP support: no
  PF_RING support: no
  NFQueue support: yes
  NFLOG support: no
  IPFW support: no
  Netmap support: no
  DAG enabled: no
  Napatech enabled: no
  WinDivert enabled: no
  Unix socket enabled: yes
  Detection enabled: yes
  Libmagic support: yes
  libnss support: yes
  libnspr support: yes
  libjansson support: yes
  hiredis support: yes
  hiredis async with libevent: yes
  Prelude support: no
  PCRE jit: yes
  LUA support: yes
  libluajit: no
  GeoIP2 support: yes
  Non-bundled htp: no
  Hyperscan support: no
  Libnet support: yes
  liblz4 support: yes
  HTTP2 decompression: no
  Rust support: yes
  Rust strict mode: no
  Rust compiler path: /usr/bin/rustc
  Rust compiler version: rustc 1.62.1 (Red Hat 1.62.1-1.el7)
  Cargo path: /usr/bin/cargo
  Cargo version: cargo 1.62.1
  Cargo vendor: yes
  Python support: yes
  Python path: /usr/bin/python3
  Python distutils yes
  Python yaml yes
  Install suricatactl: yes
  Install suricatasc: yes
  Install suricata-update: yes
  Profiling enabled: no
  Profiling locks enabled: no
  Plugin support (experimental): yes
Development settings:
  Coccinelle / spatch: no
  Unit tests enabled: no
  Debug output enabled: no
  Debug validation enabled: no
Generic build parameters:
  Installation prefix: /usr
  Configuration directory: /etc/suricata/
  Log directory: /var/log/suricata/
  --prefix /usr --sysconfdir /etc --localstatedir /var --datarootdir /usr/share
  Host: x86_64-redhat-linux-gnu
  Compiler: gcc (exec name) / g++ (real)
  GCC Protect enabled: yes
  GCC march native enabled: no
  GCC Profile enabled: no
  Position Independent Executable enabled: yes
  CFLAGS -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -std=gnu99 -I${srcdir}/../rust/gen -I${srcdir}/../rust/dist
  PCAP_CFLAGS
  SECCFLAGS -fstack-protector -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security
Updated by Timothy Gilbert over 2 years ago
Core Dump BT
(gdb) bt
#0  0x00007fb29c75e387 in raise () from /lib64/libc.so.6
#1  0x00007fb29c75fa78 in abort () from /lib64/libc.so.6
#2  0x00007fb29c7571a6 in __assert_fail_base () from /lib64/libc.so.6
#3  0x00007fb29c757252 in __assert_fail () from /lib64/libc.so.6
#4  0x0000563e56c0960f in CheckWorkQueue (tv=tv@entry=0x563e58c3c1d0, fw=fw@entry=0x7fb27c2708c0, detect_thread=detect_thread@entry=0x7fb27c35adf0, counters=counters@entry=0x7fb298a50ad0, fq=fq@entry=0x7fb298a50ae0) at flow-worker.c:184
#5  0x0000563e56c09909 in FlowWorkerProcessInjectedFlows (p=0x7fb27c017180, detect_thread=0x7fb27c35adf0, fw=0x7fb27c2708c0, tv=0x563e58c3c1d0) at flow-worker.c:459
#6  FlowWorker (tv=0x563e58c3c1d0, p=0x7fb27c017180, data=0x7fb27c2708c0) at flow-worker.c:588
#7  0x0000563e56c5ef7e in TmThreadsSlotVarRun (tv=tv@entry=0x563e58c3c1d0, p=p@entry=0x7fb27c017180, slot=slot@entry=0x563e6039ba20) at tm-threads.c:117
#8  0x0000563e56c60d32 in TmThreadsSlotVar (td=0x563e58c3c1d0) at tm-threads.c:463
#9  0x00007fb29cf17ea5 in start_thread () from /lib64/libpthread.so.0
#10 0x00007fb29c826b0d in clone () from /lib64/libc.so.6
Updated by Timothy Gilbert over 2 years ago
- Priority changed from Normal to Urgent
https://redmine.openinfosecfoundation.org/issues/3484
I think this might be related. Suricata is crashing while it reloads the rules. However, we don't have any custom rule sets like the reporter in that earlier bug.
Updated by Timothy Gilbert over 2 years ago
- Priority changed from Urgent to Immediate
Moving to Immediate; we can't find a resolution. We tried suricata-update --no-reload and it still creates the exact same dumps. It always happens at 3:15 am.
Updated by Victor Julien over 2 years ago
- Priority changed from Immediate to Normal
The BUG_ON triggers on a condition that should be impossible, but clearly isn't in your case. Is this an unmodified Suricata?
In the 4th frame of the bt above you have access to the flow. Can you share `print *f` from there?
Updated by Timothy Gilbert over 2 years ago
Getting a print *f now. Unmodified Suricata as in the build, yes. The only things modified are suricata.yaml, and we disabled some default rulesets with disable.conf.
Updated by Timothy Gilbert over 2 years ago
(gdb) f 4
#4  0x000056101b33760f in CheckWorkQueue (tv=tv@entry=0x56101ea753b0, fw=fw@entry=0x7ffb502708c0, detect_thread=detect_thread@entry=0x7ffb5035ad30, counters=counters@entry=0x7ffb77161ad0, fq=fq@entry=0x7ffb77161ae0) at flow-worker.c:184
184	in flow-worker.c
(gdb) print *f
$4 = {src = {address = {address_un_data32 = {646030346, 0, 0, 0}, address_un_data16 = {41994, 9857, 0, 0, 0, 0, 0, 0}, address_un_data8 = "\n\244\201&", '\000' <repeats 11 times>}},
  dst = {address = {address_un_data32 = {47014184, 0, 0, 0}, address_un_data16 = {24872, 717, 0, 0, 0, 0, 0, 0}, address_un_data8 = "(a\315\002", '\000' <repeats 11 times>}},
  {sp = 33728, icmp_s = {type = 192 '\300', code = 131 '\203'}}, {dp = 587, icmp_d = {type = 75 'K', code = 2 '\002'}},
  proto = 6 '\006', recursion_level = 0 '\000', vlan_id = {0, 0}, use_cnt = 1, vlan_idx = 0 '\000',
  {{ffr_ts = 0 '\000', ffr_tc = 1 '\001'}, ffr = 16 '\020'}, timeout_at = 1659025733, thread_id = {6, 6}, next = 0x0, livedev = 0x0,
  flow_hash = 367080059, lastts = {tv_sec = 1659025673, tv_usec = 103402}, timeout_policy = 60, flow_state = 2, tenant_id = 0,
  probing_parser_toserver_alproto_masks = 0, probing_parser_toclient_alproto_masks = 0, flags = 1647451, file_flags = 1023, protodetect_dp = 443, parent_id = 0,
  m = {__data = {__lock = 1, __count = 0, __owner = 1666, __nusers = 1, __kind = 0, __spins = 0, __elision = 0, __list = {__prev = 0x0, __next = 0x0}}, __size = "\001\000\000\000\000\000\000\000\202\006\000\000\001", '\000' <repeats 26 times>, __align = 1},
  protoctx = 0x7ffb50359c40, protomap = 0 '\000', flow_end_flags = 80 'P', alproto = 4, alproto_ts = 4, alproto_tc = 4, alproto_orig = 3, alproto_expect = 4,
  de_ctx_version = 99, min_ttl_toserver = 64 '@', max_ttl_toserver = 64 '@', min_ttl_toclient = 238 '\356', max_ttl_toclient = 238 '\356',
  alparser = 0x7ffb5059e770, alstate = 0x7ffb5059bf00, sgh_toclient = 0x56102556a7c0, sgh_toserver = 0x56101db2baf0, flowvar = 0x0, fb = 0x0,
  startts = {tv_sec = 1659025671, tv_usec = 156834}, todstpktcnt = 25, tosrcpktcnt = 28, todstbytecnt = 4215, tosrcbytecnt = 6690}
Updated by Timothy Gilbert over 2 years ago
- Priority changed from Normal to Urgent
Anything else needed? Sorry, we're now seeing this issue across multiple customers.
Updated by Timothy Gilbert over 2 years ago
- Priority changed from Urgent to Immediate
Updated by Victor Julien over 2 years ago
- Priority changed from Immediate to Normal
There are no obvious clues in the bt. I think it would be helpful to have more info about the setups on which you're experiencing this: what the traffic looks like, when it happens, and whether there is a common theme (e.g. what happens at 3:15 am). Ideally you'd try to reproduce it in a controlled environment.
Updated by Timothy Gilbert over 2 years ago
After more investigation, it seems to be random chance whether it happens on "Signal Received. Stopping engine."
Sep 13 17:53:04 localhost fail2ban.server[1155]: INFO Reload jail 'suricata'
Sep 13 17:53:05 localhost fail2ban.server[1155]: INFO Jail 'suricata' reloaded
Sep 13 17:55:18 localhost suricata: 13/9/2022 -- 17:55:18 - <Notice> - Signal Received. Stopping engine.
Sep 13 17:55:19 localhost suricata: suricata: flow-worker.c:184: CheckWorkQueue: Assertion `!(f->use_cnt > 0)' failed.
Sep 13 17:55:22 localhost systemd: suricata.service: main process exited, code=killed, status=6/ABRT
Sep 13 17:55:22 localhost systemd: Unit suricata.service entered failed state.
Sep 13 17:55:22 localhost systemd: suricata.service failed.
/var/log/messages-20220911:Sep 8 03:29:26 localhost suricata: 8/9/2022 -- 03:29:26 - <Notice> - Signal Received. Stopping engine.
/var/log/messages-20220911:Sep 8 03:29:27 localhost suricata: suricata: flow-worker.c:184: CheckWorkQueue: Assertion `!(f->use_cnt > 0)' failed.
/var/log/messages-20220911:Sep 8 03:29:31 localhost systemd: suricata.service: main process exited, code=killed, status=6/ABRT
/var/log/messages-20220911:Sep 8 03:29:31 localhost systemd: Unit suricata.service entered failed state.
/var/log/messages-20220911:Sep 8 03:29:31 localhost systemd: suricata.service failed.
But other times it's just fine and the engine comes right back up.
/var/log/messages-20220904:Sep 2 03:27:26 localhost suricata: 2/9/2022 -- 03:27:26 - <Notice> - Signal Received. Stopping engine.
/var/log/messages-20220904:Sep 2 03:27:27 localhost suricata: 2/9/2022 -- 03:27:27 - <Notice> - (RX-NFQ#0) Treated: Pkts 8090, Bytes 4928439, Errors 0
Updated by Timothy Gilbert over 2 years ago
- File suricata-config-dump.txt suricata-config-dump.txt added
- File suricata-spec.txt added
Here is the config; <hidden> is used for private info. We tried Suricata 6.0.5 and 6.0.6. We do build it ourselves (sorry for the mistaken info earlier); the spec is attached. We have a separate RPM that sets up our application's config, and a cron job that runs suricata-update daily.
Updated by Victor Julien over 2 years ago
Perhaps a clue I missed initially: alproto = 4, alproto_ts = 4, alproto_tc = 4, alproto_orig = 3. This means this is an SMTP session that upgraded to TLS (STARTTLS). Can you check if this is true in every case?
Updated by Timothy Gilbert over 2 years ago
Yep. Just verified with multiple core dumps: alproto_orig = 3 on all of them, and 4s for alproto, alproto_ts, and alproto_tc.
Updated by Timothy Gilbert about 2 years ago
We've currently turned off core dump logging. Any ideas on next steps?
Updated by Timothy Gilbert about 2 years ago
- Affected Versions 6.0.8 added
Installed 6.0.8 on some test servers. Same issue as above.
Updated by Timothy Gilbert about 2 years ago
Dec 5 09:26:45 localhost suricata[1292901]: 5/12/2022 -- 09:26:45 - <Notice> - Signal Received. Stopping engine.
Dec 5 09:26:46 localhost suricata[1292901]: suricata: flow-worker.c:184: CheckWorkQueue: Assertion `!(f->use_cnt > 0)' failed.
Dec 5 09:26:52 localhost systemd-coredump[1380846]: Process 1292901 (Suricata-Main) of user 0 dumped core.

Stack trace of thread 1292937:
#0 0x00007f5d72ce7a9f raise (libc.so.6)
#1 0x00007f5d72cbae05 abort (libc.so.6)
#2 0x00007f5d72cbacd9 __assert_fail_base.cold.0 (libc.so.6)
#3 0x00007f5d72ce03f6 __assert_fail (libc.so.6)
#4 0x000055aa56b19eba CheckWorkQueue.isra.3.constprop.6 (suricata)
#5 0x000055aa56b1a27e FlowWorker (suricata)
#6 0x000055aa56b6fd56 TmThreadsSlotVarRun (suricata)
#7 0x000055aa56b71aed TmThreadsSlotVar (suricata)
#8 0x00007f5d734821cf start_thread (libpthread.so.0)
#9 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292901:
#0 0x00007f5d7348b8ba __lll_unlock_wake (libpthread.so.0)
#1 0x00007f5d734862d6 __pthread_mutex_unlock_usercnt (libpthread.so.0)
#2 0x000055aa56b18e73 FlowForceReassembly (suricata)
#3 0x000055aa56b6c2c9 PostRunDeinit (suricata)
#4 0x000055aa56b6dbbc SuricataMain (suricata)
#5 0x00007f5d72cd3cf3 __libc_start_main (libc.so.6)
#6 0x000055aa56a661ce _start (suricata)

Stack trace of thread 1292939:
#0 0x00007f5d7348844c pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1 0x000055aa56b6f3ec TmqhInputSimple (suricata)
#2 0x000055aa56b71ad3 TmThreadsSlotVar (suricata)
#3 0x00007f5d734821cf start_thread (libpthread.so.0)
#4 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292941:
#0 0x00007f5d7348844c pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1 0x000055aa56b6f3ec TmqhInputSimple (suricata)
#2 0x000055aa56b71ad3 TmThreadsSlotVar (suricata)
#3 0x00007f5d734821cf start_thread (libpthread.so.0)
#4 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292946:
#0 0x00007f5d72dc01ff __select (libc.so.6)
#1 0x000055aa56b74795 UnixManager (suricata)
#2 0x000055aa56b71206 TmThreadsManagement (suricata)
#3 0x00007f5d734821cf start_thread (libpthread.so.0)
#4 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292935:
#0 0x00007f5d72d93658 __nanosleep (libc.so.6)
#1 0x00007f5d72dc0938 usleep (libc.so.6)
#2 0x000055aa56b71112 TmThreadWaitForFlag (suricata)
#3 0x000055aa56b71874 TmThreadsSlotPktAcqLoop (suricata)
#4 0x00007f5d734821cf start_thread (libpthread.so.0)
#5 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292936:
#0 0x00007f5d7348844c pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1 0x000055aa56b6e1cc TmqhInputFlow (suricata)
#2 0x000055aa56b71ad3 TmThreadsSlotVar (suricata)
#3 0x00007f5d734821cf start_thread (libpthread.so.0)
#4 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292933:
#0 0x00007f5d72d93658 __nanosleep (libc.so.6)
#1 0x00007f5d72dc0938 usleep (libc.so.6)
#2 0x000055aa56b71112 TmThreadWaitForFlag (suricata)
#3 0x000055aa56b71874 TmThreadsSlotPktAcqLoop (suricata)
#4 0x00007f5d734821cf start_thread (libpthread.so.0)
#5 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292934:
#0 0x00007f5d72d93658 __nanosleep (libc.so.6)
#1 0x00007f5d72dc0938 usleep (libc.so.6)
#2 0x000055aa56b71112 TmThreadWaitForFlag (suricata)
#3 0x000055aa56b71874 TmThreadsSlotPktAcqLoop (suricata)
#4 0x00007f5d734821cf start_thread (libpthread.so.0)
#5 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292943:
#0 0x00007f5d7348879a pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0)
#1 0x000055aa56b16249 FlowRecycler (suricata)
#2 0x000055aa56b71206 TmThreadsManagement (suricata)
#3 0x00007f5d734821cf start_thread (libpthread.so.0)
#4 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292940:
#0 0x00007f5d7348844c pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1 0x000055aa56b6f3ec TmqhInputSimple (suricata)
#2 0x000055aa56b71ad3 TmThreadsSlotVar (suricata)
#3 0x00007f5d734821cf start_thread (libpthread.so.0)
#4 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292938:
#0 0x00007f5d7348844c pthread_cond_wait@@GLIBC_2.3.2 (libpthread.so.0)
#1 0x000055aa56b6f3ec TmqhInputSimple (suricata)
#2 0x000055aa56b71ad3 TmThreadsSlotVar (suricata)
#3 0x00007f5d734821cf start_thread (libpthread.so.0)
#4 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292944:
#0 0x00007f5d7348879a pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0)
#1 0x000055aa56a980ec StatsWakeupThread (suricata)
#2 0x00007f5d734821cf start_thread (libpthread.so.0)
#3 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292945:
#0 0x00007f5d7348879a pthread_cond_timedwait@@GLIBC_2.3.2 (libpthread.so.0)
#1 0x000055aa56a98733 StatsMgmtThread (suricata)
#2 0x00007f5d734821cf start_thread (libpthread.so.0)
#3 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292942:
#0 0x00007f5d72d93658 __nanosleep (libc.so.6)
#1 0x00007f5d72dc0938 usleep (libc.so.6)
#2 0x000055aa56b71112 TmThreadWaitForFlag (suricata)
#3 0x000055aa56b7123e TmThreadsManagement (suricata)
#4 0x00007f5d734821cf start_thread (libpthread.so.0)
#5 0x00007f5d72cd2dd3 __clone (libc.so.6)

Stack trace of thread 1292932:
#0 0x00007f5d72d93658 __nanosleep (libc.so.6)
#1 0x00007f5d72dc0938 usleep (libc.so.6)
#2 0x000055aa56b71112 TmThreadWaitForFlag (suricata)
#3 0x000055aa56b71874 TmThreadsSlotPktAcqLoop (suricata)
#4 0x00007f5d734821cf start_thread (libpthread.so.0)
#5 0x00007f5d72cd2dd3 __clone (libc.so.6)

Dec 5 09:26:52 localhost systemd[1]: suricata.service: Main process exited, code=killed, status=6/ABRT
Dec 5 09:26:52 localhost systemd[1]: suricata.service: Failed with result 'signal'.
Updated by Timothy Gilbert almost 2 years ago
- Affected Versions 6.0.7, 6.0.9 added
We're using https://github.com/jasonish/suricata-rpms/tree/master/6.0 as our spec/build.
Updated by Victor Julien almost 2 years ago
We're not able to reproduce it. A reproducer would be really helpful. Until that happens, I don't think we can expect any progress on this issue.
Updated by Victor Julien almost 2 years ago
I just merged a patch that is possibly related:
https://github.com/OISF/suricata/pull/8525/commits/d13bb7f5a7e02e51e7628ae92bb4f4e8be12db69
This patch will go into 6.0.11 (no ETA yet), but it would be great if you can test it earlier.
Updated by Timothy Gilbert almost 2 years ago
Victor Julien wrote in #note-23:
I just merged a patch that is possibly related:
https://github.com/OISF/suricata/pull/8525/commits/d13bb7f5a7e02e51e7628ae92bb4f4e8be12db69
This patch will go into 6.0.11 (no ETA yet), but it would be great if you can test it earlier.
I've applied this fix to 6.0.9 and so far no crashes. Big win so far!
Updated by Victor Julien over 1 year ago
- Subject changed from Core crash with flow-worker.c Assertion CheckWorkQueue to flowworker: Assertion in CheckWorkQueue
- Status changed from New to Resolved
- Assignee changed from OISF Dev to Victor Julien
Updated by Victor Julien over 1 year ago
- Target version changed from TBD to 7.0.0-rc2
- Label Needs backport to 6.0 added
Updated by OISF Ticketbot over 1 year ago
- Label deleted (Needs backport to 6.0)
Updated by Victor Julien over 1 year ago
- Status changed from Resolved to Closed