Security #7300
openlog: too big record leads to invalid JSON
Updated by Philippe Antoine 29 days ago
Summing up the relevant part of the parent issue:
A first fix is to simply check the return value of MemBufferExpand in OutputJsonBuilderBuffer.
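A minimal sketch of that first fix, assuming MemBufferExpand() returns a negative value when the buffer cannot grow further (e.g. past its allowed maximum); the helper name LogJsonRecord and the surrounding code are illustrative stand-ins, not the actual OutputJsonBuilderBuffer():

```c
#include "suricata-common.h"
#include "util-buffer.h"   /* MemBuffer, MemBufferExpand(), MEMBUFFER_* macros */
#include "rust.h"          /* JsonBuilder bindings: jb_ptr(), jb_len() */

/* Illustrative stand-in for the relevant part of OutputJsonBuilderBuffer(). */
static void LogJsonRecord(JsonBuilder *js, MemBuffer **buffer)
{
    const size_t jslen = jb_len(js);
    if (MEMBUFFER_OFFSET(*buffer) + jslen >= MEMBUFFER_SIZE(*buffer)) {
        /* Assumption: MemBufferExpand() fails (< 0) when growing would
         * exceed the allowed size, instead of silently truncating. */
        if (MemBufferExpand(buffer, (uint32_t)jslen) < 0) {
            /* Drop the record rather than write truncated, invalid JSON;
             * a stats counter or engine event could be raised here. */
            return;
        }
    }
    MemBufferWriteRaw(*buffer, jb_ptr(js), (uint32_t)jslen);
}
```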
How do we handle a log record that is over the buffer size (10 MB)?
Also, should we try to avoid these? I think so...
The JSON buffer is already complete, so we'd have to lightly parse it to know how to do that. Funny that it's already in memory, but it's the abstraction to the underlying writers where we hit this limit.
So it looks like we should enforce this limit sooner, right?
IMO, if we decode it we should probably log it and enforce limits in the decoding; but if it makes it to the logger and is then deemed too big, maybe raise some special engine event... Just thinking out loud. Or better yet, if it's logging a JsonBuilder, use the built buffer directly instead of copying?
For HTTP2, the solution was to log a repeated header only once.
I know that the DNS model is not the same... but it still gives ideas.
Updated by Jason Ish 29 days ago · Edited
This was just a quick POC to use the JsonBuilder buffer directly instead of memcpy, which does fix this, but it still needs more investigation: https://github.com/OISF/suricata/pull/11864
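The PR has the actual change; purely as an illustration of the idea, the direct write could look roughly like this. The Write callback and its (buffer, len, ctx) signature are assumptions based on util-logopenfile.h, and the locking mirrors how the MemBuffer path serializes writes:

```c
#include "suricata-common.h"
#include "util-logopenfile.h"  /* LogFileCtx and its Write callback */
#include "rust.h"              /* JsonBuilder bindings: jb_ptr(), jb_len() */

/* Illustrative only: hand the JsonBuilder's finished buffer straight to the
 * writer instead of memcpy'ing it into the per-thread MemBuffer first, so
 * the MemBuffer size limit never comes into play. */
static void LogJsonBuilderDirect(JsonBuilder *js, LogFileCtx *file_ctx)
{
    SCMutexLock(&file_ctx->fp_mutex);
    file_ctx->Write((const char *)jb_ptr(js), (int)jb_len(js), file_ctx);
    /* EVE records are newline-delimited. */
    file_ctx->Write("\n", 1, file_ctx);
    SCMutexUnlock(&file_ctx->fp_mutex);
}
```

Note this removes the copy and the invalid-JSON failure mode, but it is orthogonal to the policy question below of whether such large records should be produced at all.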
Updated by Philippe Antoine 29 days ago
This was just a quick POC to use the JsonBuilder buffer directly instead of memcpy
Looks cool...
But do we want to allow 10 MB records?