Bug #329 (Closed): Mem leak maybe

Added by Peter Manev about 13 years ago. Updated about 13 years ago.

Status: Closed
Priority: Normal
Assignee: -
Target version: -
Affected Versions: -
Effort: -
Difficulty: -
Label: -

Description

Well, about the situation we have here (as pointed out by Delta Yeh):
My scenario - a set-up of 2 machines:
(1) - runs current Suricata git plus Apache. No rules loaded in Suricata, and just a regular default web server installation.
(2) - the "testing node" - to reproduce the tests successfully you need "ab" installed, which comes by default with the Apache package (so you might need a second Apache web server install here).

So then:

1. Start Suricata on node (1).
2. Make sure the web server on node (1) is up and running.
3. From node (2), execute in a shell:
"ab -c 1 -n 60000 http://x.x.x.x/"
where x.x.x.x is the IP of the Apache server (1).

The result is (at least in my case) that Suricata does not release the memory after the test from node (2) completes. If you run consecutive tests, it will exhaust the memory and crash.

I have tested this on Debian/Ubuntu and BSD, with and without rules, with different mpm-algo settings, and with different flow timeout options - the result is the same.

Things that I have noticed:
The "ab" test does not do a proper FIN/ACK TCP teardown - the connections are just left to time out. Even so, three hours after the tests Suricata still had not released the memory.

#1

Updated by Victor Julien about 13 years ago

  • Description updated (diff)
  • Status changed from New to Assigned
  • Assignee set to Anoop Saldanha
  • Target version set to 1.1beta3
  • Estimated time set to 6.00 h

This is odd: the flows should have been timed out within an hour even with the default config.

#2

Updated by Victor Julien about 13 years ago

  • Assignee changed from Anoop Saldanha to Victor Julien
  • Priority changed from Normal to High
  • Target version changed from 1.1beta3 to 1.1rc1

I'll take this on after the beta3 release.

#3

Updated by Victor Julien about 13 years ago

  • Status changed from Assigned to Closed
  • Priority changed from High to Normal
  • Target version deleted (1.1rc1)

I can confirm the behavior; however, it's not a memory leak. The part of the code responsible for the HTTP handling, libhtp, is not leaking memory. So what is happening? Let's look at what malloc_stats() tells us:

After the engine initialized, this is the memory profile:

Arena 0:
system bytes     =   34742272
in use bytes     =   34609232
Arena 1:
system bytes     =     135168
in use bytes     =      93288
Total (incl. mmap):
system bytes     =   36737024
in use bytes     =   36562104
max mmap regions =          7
max mmap bytes   =    1863680
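
For reference, the dumps in this comment come from glibc's malloc_stats(), a GNU extension; a minimal sketch of producing such a dump:

/* Minimal sketch: dump glibc allocator statistics to stderr.
   malloc_stats() is a GNU extension declared in <malloc.h>. */
#include <malloc.h>

int main(void)
{
    malloc_stats();   /* prints per-arena system/in-use byte counters */
    return 0;
}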

Then we fire up the "ab" test, sending 20,000 sessions. The profile changes radically:

Arena 0:
system bytes     =  502730752
in use bytes     =  502724184
Arena 1:
system bytes     =  349511680
in use bytes     =  349510128
Total (incl. mmap):
system bytes     =  854102016
in use bytes     =  854093896
max mmap regions =          7
max mmap bytes   =    1863680

Here we see a problem that needs to be addressed: libhtp uses too much memory. For example, it allocates two 18 kB buffers per session (see https://github.com/ironbee/libhtp/issues/15).
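
As a rough cross-check: two 18 kB buffers across 20,000 sessions already account for about 20,000 x 36 kB ≈ 720 MB, the same order of magnitude as the ~820 MB growth between the two dumps above.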

Then we wait for the sessions to time out (lowering the flow-timeouts helps):

Arena 0:
system bytes     =  502730752
in use bytes     =   35828432
Arena 1:
system bytes     =  349511680
in use bytes     =     805200
Total (incl. mmap):
system bytes     =  854102016
in use bytes     =   38493216
max mmap regions =          7
max mmap bytes   =    1863680

Now something interesting becomes visible. Even though the "in use" counters are nearly down to what they started at, the "system" bytes remain high. The memory is not released back to the OS. The memory allocator keeps it available to the process for future reuse. Rerunning the "ab" test shows that this works: the memory does not increase any further, unless the number of sessions is increased.

The reason the memory is not released to the OS seems to be that not all memory blocks on the heap can be easily returned. The Linux Journal has an explanation of how the glibc allocator works: http://www.linuxjournal.com/article/6390?page=0,0 - see the "free but blocked chunk" in figure 1.

In my testing the memory is not returned to the OS at all. It remains available to Suricata, though. Still, this may be undesirable, especially if the host runs more than just Suricata.
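
For completeness, glibc can be asked to return the free top of the heap explicitly via malloc_trim(); a minimal sketch illustrating the caching behavior described above (not something Suricata does here, just an illustration):

/* Sketch: freed heap memory is cached by glibc rather than returned
   to the OS; malloc_trim() releases the free top-of-heap region, while
   "free but blocked" chunks lower in the heap cannot be released. */
#include <malloc.h>
#include <stdlib.h>

enum { N = 100000, SZ = 4096 };

int main(void)
{
    static void *p[N];
    for (int i = 0; i < N; i++)   /* small allocations land on the  */
        p[i] = malloc(SZ);        /* sbrk heap, below mmap threshold */
    for (int i = 0; i < N; i++)
        free(p[i]);
    malloc_stats();               /* "system bytes" stays high       */
    malloc_trim(0);               /* return free pages to the OS     */
    malloc_stats();               /* "system bytes" drops            */
    return 0;
}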

I think reducing the memory footprint of libhtp would be a big step in the right direction.

Closing the issue as it's not a memory leak and the libhtp improvements have tickets in the upstream project.
