Bug #4000 (Closed): Hang while processing HTTP traffic
Description
Customer system "hangs" in HTTP processing.
Suricata 5.0.3; traffic: 250 Kpps, 1.5 Gbps.
All worker threads except one are blocked waiting on a mutex in FlowGetFlowFromHash:
(gdb) bt
#0  __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:103
#1  0x00007f13c38e83e5 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x7f13bd262d10) at ../nptl/pthread_mutex_lock.c:78
#2  0x00005629c4c3a05f in FlowGetFlowFromHash (tv=tv@entry=0x7f13c00418c0, dtv=0x7f13a94741c0, p=p@entry=0x7f13a9473600, dest=dest@entry=0x7f13a9473638) at flow-hash.c:611
#3  0x00005629c4c3792d in FlowHandlePacket (tv=tv@entry=0x7f13c00418c0, dtv=<optimized out>, p=p@entry=0x7f13a9473600) at flow.c:518
#4  0x00005629c4c4021f in FlowWorker (tv=0x7f13c00418c0, p=0x7f13a9473600, data=0x7f13a947b0a0, preq=0x7f13c0041cc0, unused=<optimized out>) at flow-worker.c:210
#5  0x00005629c4cd4e75 in TmThreadsSlotVarRun (tv=tv@entry=0x7f13c00418c0, p=p@entry=0x7f13a9473600, slot=slot@entry=0x7f13c0041b40) at tm-threads.c:143
#6  0x00005629c4cad7be in TmThreadsSlotProcessPkt (p=0x7f13a9473600, s=0x7f13c0041b40, tv=0x7f13c00418c0) at tm-threads.h:163
#7  NapatechPacketLoopZC (tv=0x7f13c00418c0, data=<optimized out>, slot=<optimized out>) at source-napatech.c:454
#8  0x00005629c4cd98d7 in TmThreadsSlotPktAcqLoop (td=0x7f13c00418c0) at tm-threads.c:369
#9  0x00007f13c38e5c9a in start_thread (arg=0x7f13ab5ff700) at pthread_create.c:486
#10 0x00007f13c1fc6f5f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
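For context, flow lookup serializes workers on per-hash-row locking, so a single worker that never releases its flow can back up every other worker whose packets hash into the same row. The following is a minimal sketch of that two-level (row lock, then flow lock) pattern; the type and function names are hypothetical, not the actual flow-hash.c code.

/* Sketch only: worker A holds f->m indefinitely, worker B blocks on f->m
 * while still holding fb->m, and every other worker then queues on fb->m,
 * which matches the mutex_lock waiters at flow-hash.c:611 above. */
#include <pthread.h>
#include <stdint.h>

typedef struct Flow_ {
    pthread_mutex_t m;       /* per-flow lock, held while the flow is processed */
    uint64_t key;
    struct Flow_ *next;
} Flow;

typedef struct FlowBucket_ {
    pthread_mutex_t m;       /* hash-row lock */
    Flow *head;
} FlowBucket;

static Flow *SketchGetFlowFromHash(FlowBucket *rows, uint32_t nrows, uint64_t key)
{
    FlowBucket *fb = &rows[key % nrows];

    pthread_mutex_lock(&fb->m);                 /* everyone queues here ... */
    for (Flow *f = fb->head; f != NULL; f = f->next) {
        if (f->key == key) {
            pthread_mutex_lock(&f->m);          /* ... while this can block on a
                                                 * flow whose owner never returns */
            pthread_mutex_unlock(&fb->m);
            return f;                           /* caller unlocks f->m when done */
        }
    }
    pthread_mutex_unlock(&fb->m);
    return NULL;
}

That picture is consistent with the one remaining thread, shown next, holding on to its flow for an unbounded time.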
The remaining thread is hung in HTTP transaction handling:
(gdb) bt
#0  0x00005629c4b3f702 in HTPStateGetTx (alstate=<optimized out>, tx_id=23500474252195) at app-layer-htp.c:2988
#1  0x00005629c4b4b0d7 in AppLayerParserGetTx (tx_id=23500474252195, alstate=0x7f1232b2f5c0, alproto=1, ipproto=6 '\006') at app-layer-parser.c:1057
#2  AppLayerDefaultGetTxIterator (ipproto=ipproto@entry=6 '\006', alproto=alproto@entry=1, alstate=alstate@entry=0x7f1232b2f5c0, min_tx_id=min_tx_id@entry=1, max_tx_id=max_tx_id@entry=18446744073709551615, state=0x7f138fcf9f38) at app-layer-parser.c:673
#3  0x00005629c4b4c788 in AppLayerParserSetTransactionInspectId (f=f@entry=0x7f1211a1e280, pstate=0x7f11532d1d70, alstate=0x7f1232b2f5c0, flags=flags@entry=4 '\004', tag_txs_as_inspected=true) at app-layer-parser.c:778
#4  0x00005629c4be060b in DeStateUpdateInspectTransactionId (f=f@entry=0x7f1211a1e280, flags=flags@entry=4 '\004', tag_txs_as_inspected=<optimized out>) at detect-engine-state.c:257
#5  0x00005629c4b8b7f0 in DetectRunPostRules (scratch=0x7f138fcfa050, scratch=0x7f138fcfa050, pflow=<optimized out>, p=0x7f138dc73600, det_ctx=0x7f138d337000, de_ctx=0x7f13bcbb2800, tv=0x7f13c0045740) at detect.c:929
#6  DetectRun (th_v=th_v@entry=0x7f13c0045740, de_ctx=<optimized out>, det_ctx=0x7f138d337000, p=p@entry=0x7f138dc73600) at detect.c:141
#7  0x00005629c4b8d6d8 in DetectRun (p=0x7f138dc73600, det_ctx=<optimized out>, de_ctx=<optimized out>, th_v=0x7f13c0045740) at detect.c:1583
#8  DetectNoFlow (p=<optimized out>, det_ctx=<optimized out>, de_ctx=<optimized out>, tv=<optimized out>) at detect.c:1590
#9  Detect (tv=tv@entry=0x7f13c0045740, p=p@entry=0x7f138dc73600, data=data@entry=0x7f138d337000, pq=pq@entry=0x0, postpq=postpq@entry=0x0) at detect.c:1650
#10 0x00005629c4c400b7 in FlowWorker (tv=0x7f13c0045740, p=0x7f138dc73600, data=0x7f138dc7b0a0, preq=0x7f13c0045b40, unused=<optimized out>) at flow-worker.c:289
#11 0x00005629c4cd4e75 in TmThreadsSlotVarRun (tv=tv@entry=0x7f13c0045740, p=p@entry=0x7f138dc73600, slot=slot@entry=0x7f13c00459c0) at tm-threads.c:143
#12 0x00005629c4cad7be in TmThreadsSlotProcessPkt (p=0x7f138dc73600, s=0x7f13c00459c0, tv=0x7f13c0045740) at tm-threads.h:163
#13 NapatechPacketLoopZC (tv=0x7f13c0045740, data=<optimized out>, slot=<optimized out>) at source-napatech.c:454
#14 0x00005629c4cd98d7 in TmThreadsSlotPktAcqLoop (td=0x7f13c0045740) at tm-threads.c:369
#15 0x00007f13c38e5c9a in start_thread (arg=0x7f138fcff700) at pthread_create.c:486
#16 0x00007f13c1fc6f5f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
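Frame 2 is the interesting one: AppLayerDefaultGetTxIterator steps through transaction ids one at a time, asking the protocol's GetTx function (here HTPStateGetTx, frame 0) for each id until it finds a live transaction or reaches max_tx_id. Below is a simplified sketch of that iteration pattern with hypothetical names and signatures, not the real app-layer-parser.c code.

/* Sketch of a "walk every tx id" iterator: if every lookup returns NULL,
 * the loop only ends when tx_id reaches max_tx_id. */
#include <stdint.h>
#include <stddef.h>

typedef void *(*GetTxFunc)(void *alstate, uint64_t tx_id);

static void *SketchNextTx(GetTxFunc get_tx, void *alstate,
                          uint64_t min_tx_id, uint64_t max_tx_id,
                          uint64_t *cursor)
{
    uint64_t start = (*cursor > min_tx_id) ? *cursor : min_tx_id;
    for (uint64_t tx_id = start; tx_id < max_tx_id; tx_id++) {
        void *tx = get_tx(alstate, tx_id);   /* HTPStateGetTx in frame 0 */
        if (tx != NULL) {
            *cursor = tx_id + 1;             /* resume point for the next call */
            return tx;
        }
    }
    return NULL;                             /* walked the whole id range */
}

The values captured in the next gdb output show exactly how far that walk has to go here.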
From frame 2 of the "hung" worker thread:
(gdb) p max_tx_id
$3 = 18446744073709551615
(gdb) p min_tx_id
$4 = 1
(gdb) p tx_id
$5 = 23500474252195
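max_tx_id is 18446744073709551615, i.e. UINT64_MAX (2^64 - 1), effectively "no upper bound", and the thread has already walked roughly 23.5 trillion ids to reach tx_id = 23500474252195. A back-of-envelope check (the lookup rate below is an assumption for illustration only) shows why this reads as a hang rather than a slow loop:

/* Estimate how long scanning the remaining id space would take. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t max_tx_id = UINT64_MAX;          /* 18446744073709551615 */
    const uint64_t tx_id     = 23500474252195ULL;   /* value from frame 0 */
    const double ids_per_sec = 1e9;                 /* assumed lookup rate */

    double remaining = (double)(max_tx_id - tx_id);
    double years = remaining / ids_per_sec / 3600.0 / 24.0 / 365.0;

    printf("ids left to scan: %.3e (~%.0f years at %.0e ids/s)\n",
           remaining, years, ids_per_sec);
    return 0;
}

Even at a billion lookups per second the remaining id space takes on the order of 600 years, so the thread never comes back on its own and the flow it holds stays locked.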
The application layer state object (from frame 2):
(gdb) p *((HtpState*)(alstate))->conn
$9 = {client_addr = 0x0, client_port = 0, server_addr = 0x0, server_port = 0, transactions = 0x0, messages = 0x0, flags = 0 '\000', open_timestamp = {tv_sec = 0, tv_usec = 0}, close_timestamp = {tv_sec = 0, tv_usec = 0}, in_data_counter = 0, out_data_counter = 0}
(gdb) p *(HtpState*)(alstate)
$10 = {connp = 0x7f1152af4d80, conn = 0x7f10acc4f700, f = 0x7f1211a1e280, transaction_cnt = 1, store_tx_id = 0, files_ts = 0x0, files_tc = 0x7f1152debe40, cfg = 0x5629c522f2e0 <cfglist>, flags = 0, events = 0, htp_messages_offset = 0, file_track_id = 1, last_request_data_stamp = 240, last_response_data_stamp = 209}
Note that conn is a valid pointer, yet every one of its members is 0 (including transactions = 0x0). Also note that transaction_cnt = 1.
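That combination is exactly what would keep the iterator spinning: the state still advertises a transaction, but the connection it points at has nothing stored, so every per-id lookup fails. A hypothetical illustration (the sketch_* names are invented for this note, not the libhtp/Suricata API):

/* With transactions == NULL the lookup can never succeed for any tx_id,
 * while transaction_cnt = 1 elsewhere in the state suggests there is still
 * something to fetch. */
#include <stdint.h>
#include <stddef.h>

struct sketch_conn {
    void **transactions;        /* NULL here, as in the $9 dump above */
    size_t transactions_len;
};

struct sketch_state {
    struct sketch_conn *conn;   /* valid pointer, zeroed contents */
    uint64_t transaction_cnt;   /* still 1, as in the $10 dump */
};

static void *sketch_get_tx(const struct sketch_state *s, uint64_t tx_id)
{
    if (s->conn == NULL || s->conn->transactions == NULL)
        return NULL;            /* nothing stored: every id misses */
    if (tx_id >= s->conn->transactions_len)
        return NULL;
    return s->conn->transactions[tx_id];
}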
From the flow:
todstpktcnt = 35, tosrcpktcnt = 32, todstbytecnt = 2620, tosrcbytecnt = 2385
Much of the traffic on this system is asymmetric, but this flow looks reasonably well balanced.