Bug #1806
Packet loss performance is worse in 3.1RC1 vs 3.0
Status: closed
Description
On a test server inspecting between 5.5 and 7 Gb/s of traffic, we upgraded from Suricata v3.0 to v3.1RC1 and noticed that packet loss went from ~25% on v3.0 to ~45% on v3.1RC1. Suricata was built with the same compile options on both versions, which are:
--prefix=/usr --sysconfdir=/etc --localstatedir=/var --enable-gccprotect --disable-gccmarch-native --disable-coccinelle --enable-nfqueue --enable-af-packet --enable-jemalloc --with-libnspr-includes=/usr/include/nspr4 --with-libnss-includes=/usr/include/nss3 --enable-jansson --enable-geoip --enable-luajit --enable-hiredis
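For reference, a build invocation using these flags might look like the following sketch. The source directory name and the `make`/`make install` steps are assumptions for illustration, not taken from the report:

```shell
# Hypothetical build sketch: configure Suricata with the flags from the
# report, then build and install. The directory name is an assumption.
cd suricata-3.1rc1
./configure \
    --prefix=/usr --sysconfdir=/etc --localstatedir=/var \
    --enable-gccprotect --disable-gccmarch-native --disable-coccinelle \
    --enable-nfqueue --enable-af-packet --enable-jemalloc \
    --with-libnspr-includes=/usr/include/nspr4 \
    --with-libnss-includes=/usr/include/nss3 \
    --enable-jansson --enable-geoip --enable-luajit --enable-hiredis
make && sudo make install
```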
The exact same ruleset and config is used for both versions, and the test has been repeated multiple times. The config file being used is attached. The test system is a Dell R610 with 4 physical processors of model "Intel(R) Xeon(R) CPU L5506 @ 2.13GHz" (8 logical processors with Hyperthreading, which we have enabled) and 24 GB of RAM.
Aside from the noticeable increase in packet loss, we have seen a drastic reduction in the time Suricata takes to start inspecting traffic after the process starts: from ~60 seconds down to less than 3 seconds. It should also be noted that Suricata runs within a Docker container for both 3.0 and 3.1RC1, each based on the same CentOS 7.2 base image.
Files