Fluent-bit Splitting Large JSON Log into Partial Messages #10378
gursharan-bagha started this conversation in General
-
Note: if this is the AWS image, it is based on an unsupported 1.9 version of the OSS Fluent Bit, so I would raise it on their repo directly. You can test with the latest OSS image to confirm. How does the actual log entry look on disk in the file? Is it a single line or separate lines there?
-
Stack Details
Using the AWS-managed Fluent Bit image:
public.ecr.aws/aws-observability/aws-for-fluent-bit:latest
Problem Statement
All JSON-formatted logs are being parsed correctly, except for one particular log from an application that generates an unusually large log entry. The size details of this log are:
Fluent Bit appears to split this single log into multiple partial_message chunks and stores them under the log field. In the ECS task logs, I see the following warning, which seems related:
[ warn] [engine] failed to flush chunk '1-1747838836.435799559.flb', retry in 8 seconds: task_id=1, input=forward.1 > output=es.2 (out_id=2)
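One way to see why that flush is failing is to enable error tracing on the Elasticsearch output so the actual response from the cluster is printed. A minimal sketch, assuming the output is the stock es plugin referenced in the warning; the host, port, and index values are placeholders, not taken from the original post:

[OUTPUT]
    name          es
    match         *
    host          my-es-endpoint        # placeholder
    port          443                   # placeholder
    index         app-logs              # placeholder
    trace_error   on                    # print the Elasticsearch API error when a request is rejected
    buffer_size   false                 # read the full HTTP response while debugging

Oversized bulk requests rejected by the cluster (for example via its request size limit) would show up in that trace output.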
I’d like to understand:
How does Fluent Bit handle such large log entries?
What are the best practices or possible solutions to reliably process and forward large logs like this without data loss or chunking issues?
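For what it's worth, the splitting itself usually comes from the container runtime rather than Fluent Bit: the Docker/fluentd log driver that FireLens uses breaks log lines larger than about 16 KB into partial messages. Newer Fluent Bit releases can reassemble these with the multiline filter in partial_message mode. A minimal sketch, assuming the split pieces arrive on the log key (the match pattern is a placeholder):

[FILTER]
    name                  multiline
    match                 *
    multiline.key_content log              # key holding the split content
    mode                  partial_message  # recombine runtime partial messages into one record

With this filter in place, the partial chunks should be concatenated back into a single record before they reach the es output.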