[refactor] Migrate persistent queue to consolidated metadata format #13126
base: main
Conversation
Codecov Report
❌ Your patch check has failed because the patch coverage (75.36%) is below the target coverage (95.00%). You can increase the patch coverage or adjust the target coverage.

```
@@            Coverage Diff             @@
##             main   #13126      +/-   ##
==========================================
- Coverage   91.27%   91.25%   -0.03%
==========================================
  Files         514      514
  Lines       28857    28943      +86
==========================================
+ Hits        26340    26411      +71
- Misses       1998     2001       +3
- Partials      519      531      +12
```
Please use changes from #13125

Hey… something is wrong here
Force-pushed from 250d434 to 4aa8773
It's done! Apologies, I'm still getting comfortable with git rebase operations.
Embed QueueMetadata directly into persistentQueue, replacing the scattered metadata fields.
1. Add QueueMetadata as a member of persistentQueue.
2. Delete redundant fields (queueSize, readIdx, writeIdx, …).
Relates to #13126
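A minimal sketch of the embedding this commit describes; the field types, and any field beyond queueSize, readIdx, and writeIdx, are illustrative assumptions rather than the PR's exact layout:

```go
// QueueMetadata consolidates state that was previously scattered across
// separate fields on persistentQueue. Field types here are assumptions.
type QueueMetadata struct {
	QueueSize           int64    // total queued size, in the configured sizer's units
	ReadIdx             uint64   // index of the next item to read
	WriteIdx            uint64   // index of the next item to write
	CurrentlyDispatched []uint64 // indices of in-progress items (see the PR description)
}

type persistentQueue struct {
	metadata QueueMetadata // replaces the old queueSize, readIdx, writeIdx, ... fields
	// remaining fields (storage client, logger, settings, locking) unchanged
}
```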
Please rebase with the latest changes
Force-pushed from 7d0eef9 to 1acbafc
```go
		return err
	}

	if metadataSizerType != pq.set.sizerType {
```
Let's add support for this in a follow-up PR; for the moment, return an error in this case.
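A minimal sketch of that fail-fast suggestion, extending the quoted snippet above; the error wording and use of `fmt.Errorf` are assumptions:

```go
// Per the suggestion: reject a sizer mismatch outright until migration
// support lands in a follow-up PR. Message wording is assumed.
if metadataSizerType != pq.set.sizerType {
	return fmt.Errorf("sizer type in stored metadata (%v) does not match the configured sizer (%v)",
		metadataSizerType, pq.set.sizerType)
}
```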
Not returning an error directly here for now. Please see my previous comment.
Thanks for the review, @bogdandrutu! Changes are ready.
Thanks again for your patience and thorough feedback! I’d really like to accelerate finalizing this PR, and I’m fully open to splitting it into smaller pieces if that helps us move faster. Please let me know how you’d prefer to proceed; I’m more than happy to accommodate your workflow to get this wrapped up efficiently.
```go
	}

	if metadataSizerType != pq.set.sizerType {
		pq.logger.Warn("Sizer type mismatch in stored metadata")
```
Does it have to be a warning?
Probably this can be a warning, but I think we need to add some wording saying that incoming requests are blocked until the queue is drained.
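For illustration, one possible wording, sketched against the snippet above; the exact message is an assumption, not the PR's final text:

```go
pq.logger.Warn("Sizer type in stored metadata differs from the configured sizer; " +
	"incoming requests are blocked until the existing queue is fully drained")
```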
Do we actually have to force draining when the sizer changes? I imagine it would be a significant delay to upgrade collectors that have accumulated a huge queue for any reason.
Can we maintain and store different metadata instances per sizer type instead of having only one under `qmv0`?
When users change the `sizerType`, the current "force full drain" mechanism ensures consistency in `queueSize` and `capacity` calculations.

Implementing separate metadata instances per sizer type would create challenges:
- Item storage overlap: queue items use keys like "0", "1", "2", so different metadata instances would reference the same indices, requiring coordination.
- Complex capacity management: multiple metadata instances would complicate queue capacity handling.

While blocking during a full drain causes delays for large queues, a non-blocking approach would significantly increase implementation complexity, so we need to weigh the benefits against the higher maintenance cost.

Do you think it would be better to work on enhancing the current force full drain (sketched below)?
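For context, a rough, self-contained sketch of the force-full-drain gate being discussed; `drainMode`, `Offer`, and `putInternal` are illustrative names under stated assumptions, not the PR's actual API:

```go
package queuebatch

import (
	"context"
	"errors"
	"sync"
)

// persistentQueue is a stand-in holding only what the sketch needs.
type persistentQueue struct {
	mu        sync.Mutex
	drainMode bool // set at startup when the stored sizer != configured sizer
}

// Offer rejects new items while entries accounted with the previous sizer
// are flushed, keeping queueSize and capacity internally consistent.
func (pq *persistentQueue) Offer(ctx context.Context, req any) error {
	pq.mu.Lock()
	defer pq.mu.Unlock()
	if pq.drainMode {
		return errors.New("persistent queue is draining after a sizer change; retry later")
	}
	return pq.putInternal(ctx, req)
}

// putInternal would perform the actual write; elided in this sketch.
func (pq *persistentQueue) putInternal(context.Context, any) error { return nil }
```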
…streamline metadata handling
Force-pushed from f5bded2 to 1284e18
This PR updates the persistence strategy to consolidate all relevant queue information (start, stop, in-progress, size) under a single key for each write operation.
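A minimal sketch of what that single-key write could look like, assuming the `qmv0` key mentioned earlier in the thread; the types, the `Marshal` encoding, and the storage-client shape are assumptions rather than the PR's exact code:

```go
package queuebatch

import "context"

// storageClient mirrors the small key-value subset the sketch needs; the
// real queue talks to the collector's storage extension client.
type storageClient interface {
	Set(ctx context.Context, key string, value []byte) error
}

// QueueMetadata bundles the state that used to live under separate keys
// (read/write indices, size, in-progress items). Fields elided here.
type QueueMetadata struct{ /* see the commit sketch above */ }

// Marshal would encode the snapshot; the encoding is left abstract.
func (m *QueueMetadata) Marshal() ([]byte, error) { return nil, nil }

const metadataKey = "qmv0" // the single consolidated key, per the thread

// writeMetadata persists the whole snapshot in one Set call, so each write
// operation updates start/stop/in-progress/size together.
func writeMetadata(ctx context.Context, c storageClient, m *QueueMetadata) error {
	buf, err := m.Marshal()
	if err != nil {
		return err
	}
	return c.Set(ctx, metadataKey, buf)
}
```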