Bug #1178 (Closed): tcp.reassembly_memuse missprint in stats.log
Description
Using 2.0dev (rev ab50387)
In my suricata.yaml I have:
  reassembly:
    memcap: 30gb
Two minutes after start, tcp.reassembly_memuse shows that the whole amount (30 GB) is in use for every thread:
root@suricata:~# grep memuse /var/log/suricata/stats.log | tail -64
tcp.memuse                | AFPacketeth31   | 302434368
tcp.reassembly_memuse     | AFPacketeth31   | 32212254681
http.memuse               | AFPacketeth31   | 1385239
dns.memuse                | AFPacketeth32   | 14468098
tcp.memuse                | AFPacketeth32   | 302137200
tcp.reassembly_memuse     | AFPacketeth32   | 32212254681
http.memuse               | AFPacketeth32   | 1387903
dns.memuse                | AFPacketeth33   | 14017181
tcp.memuse                | AFPacketeth33   | 302169856
tcp.reassembly_memuse     | AFPacketeth33   | 32212254683
http.memuse               | AFPacketeth33   | 1385537
dns.memuse                | AFPacketeth34   | 14019163
tcp.memuse                | AFPacketeth34   | 302172560
tcp.reassembly_memuse     | AFPacketeth34   | 32212254683
http.memuse               | AFPacketeth34   | 1380014
....
tcp.memuse                | AFPacketeth316  | 302176576
tcp.reassembly_memuse     | AFPacketeth316  | 32212254683
http.memuse               | AFPacketeth316  | 1377703
However, htop (attached) shows 12 GB of memory usage in total, while tcp.reassembly_memuse in stats.log shows 30 GB in use right away for every thread, which is not true.
Updated by Andreas Moe about 10 years ago
Is this a misprint, or is it "wrongly labeled"? If memcap is set to 30gb, is tcp.reassembly_memuse not memory in use, but the cap?
Updated by Peter Manev about 10 years ago
I think this should depict "current" usage at a global level (not per thread); the cap is already known from the yaml.
Updated by Ken Steele almost 10 years ago
Looking at the Suricata source code, it appears that the memcaps are global, but reported by every thread. There is only one global for TCP reassembly (ra_memuse in stream-tcp-reassemble.c).
The reporting is confusing, since it looks like each worker thread is using the memuse amount of memory.
Updated by Victor Julien almost 10 years ago
Since the memuse is global, we should probably not update it from each of the threads either. Maybe the flow manager could register the counter instead. Or we need a different method completely.
Updated by Ken Steele almost 10 years ago
I agree that it should only be reported once, not per-thread, given it is a global number.
I also see that the value of the global ra_memuse is only copied into the stats counter by StreamTcpReassembleMemuseCounter(), which is only called at the end of StreamTcpReassembleHandleSegment(). This means the stats value does not get reduced when flows expire. It also costs extra work to copy the value into the stat.
It would be better if the stats reporting thread could simply read the value from ra_memuse.
Updated by Andreas Herz over 8 years ago
- Assignee set to OISF Dev
- Target version set to TBD
Updated by Victor Julien over 8 years ago
- Status changed from New to Closed
- Assignee deleted (OISF Dev)
- Target version deleted (TBD)
This should be fixed in 3.0.