[Variant] Avoid collecting offset iterator #7934
Conversation
Signed-off-by: codephage2020 <[email protected]>
CC @friendlymatthew @alamb. Would it be possible for you to review this PR at your convenience? Thank you in advance.
Thank you @codephage2020 -- I went through this PR carefully and it all makes sense to me and I think it is good to merge
cc @scovich @Samyak2 and @friendlymatthew
return Err(ArrowError::InvalidArgumentError(
    "field names not sorted".to_string(),
));
let mut current_field_name = match field_ids_iter.next() {
It wasn't added in this PR, but this check for the field names being sorted doesn't seem right to me -- I thought the only requirement on an object's fields was that the field_ids were sorted (so lookup by field_id can be fast), but the corresponding names of the fields don't have to be sorted
Maybe @friendlymatthew can help
Yes, you are right. That is what VariantEncoding.md describes.
Hi, this is how I understood the spec
The field ids and field offsets must be in lexicographical order of the corresponding field names in the metadata dictionary
If we have an object with a sorted dictionary, the field ids are already ordered by the lexicographical order of field names.
If we don't have a sorted dictionary, we iterate through the field ids and probe the object for the corresponding field name. If the field names are lexicographically ordered, we can also verify that the field ids are in lexicographical order.
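The probing described above can be sketched as follows. This is a minimal standalone sketch, not the PR's actual code: `field_ids_ordered_by_name` and `dictionary` are hypothetical names standing in for the metadata dictionary lookup.

```rust
// Hypothetical sketch: verify that field ids are ordered by the
// lexicographical order of their field names when the dictionary is
// unsorted. (The post-merge fix in #7961 relaxes `<=` to `<` because
// names in an unsorted dictionary need not be unique.)
fn field_ids_ordered_by_name(dictionary: &[&str], field_ids: &[usize]) -> bool {
    let mut prev: Option<&str> = None;
    for &id in field_ids {
        // Probe the dictionary for the field name behind this id.
        let Some(&name) = dictionary.get(id) else {
            return false; // field id out of bounds
        };
        if let Some(p) = prev {
            if name <= p {
                return false; // names (and thus ids) not lexicographically ordered
            }
        }
        prev = Some(name);
    }
    true
}

fn main() {
    let dict = ["b", "a", "c"]; // an unsorted dictionary
    // field ids [1, 0, 2] probe to ["a", "b", "c"] -- lexicographically ordered
    assert!(field_ids_ordered_by_name(&dict, &[1, 0, 2]));
    // field ids [0, 1] probe to ["b", "a"] -- out of order
    assert!(!field_ids_ordered_by_name(&dict, &[0, 1]));
}
```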
Got it -- I re-read the spec; I was confused, and I agree this check is doing the right thing
The field ids and field offsets must be in lexicographical order of the corresponding field names in the metadata dictionary. However, the actual value entries do not need to be in any particular order. This implies that the field_offset values may not be monotonically increasing. For example, for the following object:
// we also know all field ids are smaller than the dictionary size and in-bounds.
if let Some(&last_field_id) = field_ids.last() {
    if last_field_id >= self.metadata.dictionary_size() {
if current_id >= dictionary_size {
It took me a few times to figure out why this (redundant) check was still needed
I am not sure if there is some way to refactor the loop to avoid this (perhaps by keeping previous_id: Option<u32>
as you did in the loop above 🤔
No changes needed, I just figured I would point it out
Sorry, I didn't expect this to be confusing. I considered many ways of writing this loop, and in the end I followed https://github.com/apache/arrow-rs/blob/main/parquet-variant/src/variant/list.rs#L225-L232, which I think is an elegant implementation. Perhaps I should add comments, or use the following style to make the code more readable.
let mut previous_id: Option<u32> = None;
for field_id in field_ids_iter {
    if field_id >= dictionary_size {
        return Err(ArrowError::InvalidArgumentError(
            "field id is not valid".to_string(),
        ));
    }
    if let Some(prev_id) = previous_id {
        if field_id <= prev_id {
            return Err(ArrowError::InvalidArgumentError(
                "field names not sorted".to_string(),
            ));
        }
    }
    previous_id = Some(field_id);
}
🤖: Benchmark completed
🤔 interesting that the benchmark seems to imply this approach is slower than collecting in a Vec
I reran to see if we can see a difference
🤖: Benchmark completed
Next benchmark run looks good to me
🚀
Thanks again @codephage2020 and @friendlymatthew
let are_offsets_monotonic = offsets.is_sorted_by(|a, b| a < b);
if !are_offsets_monotonic {
    return Err(ArrowError::InvalidArgumentError(
        "offsets not monotonically increasing".to_string(),
    ));
for next_offset in offsets_iter {
    if next_offset <= current_offset {
        return Err(ArrowError::InvalidArgumentError(
            "offsets not monotonically increasing".to_string(),
        ));
    }
    current_offset = next_offset;
Belated comment but -- AFAIK this specific snippet of original code was perfectly fine? This PR would have just changed it from https://doc.rust-lang.org/std/primitive.slice.html#method.is_sorted_by to Iterator::is_sorted_by?
Ah, but this PR unconditionally consumes the first offset outside the if/else; that part would need to move inside the if, so this else can keep doing what it always did.
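For reference, the suggestion above amounts to checking the offsets lazily with `Iterator::is_sorted_by` (stable since Rust 1.82) instead of collecting them into a `Vec` and calling the slice method. A minimal sketch of that call, with made-up offset data:

```rust
fn main() {
    // A lazily produced offsets iterator can be checked for strict
    // monotonicity directly, with no intermediate collection.
    let offsets: Vec<u32> = vec![0, 3, 7, 12];
    let monotonic = offsets.iter().is_sorted_by(|a, b| a < b);
    assert!(monotonic);

    // Equal neighboring offsets fail the strictly-increasing check.
    let bad = [0u32, 5, 5, 9];
    assert!(!bad.iter().is_sorted_by(|a, b| a < b));
}
```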
Maybe worth a follow on PR
Oh! There's also a bug: neither Variant nor JSON forbids the empty string as a field name, so we have to allow empty ranges. In case the dictionary is sorted, "" compares less-than every other string and would be the first string; the sortedness check would naturally catch any later empty strings as breaking the sort order.
I'll throw up a quick PR for this.
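The empty-string observation above can be demonstrated with a tiny sketch (hypothetical dictionaries, not the crate's code): "" sorts before every other string, so a sorted dictionary containing it must place it first, and a misplaced "" trips the strictly-increasing check on its own.

```rust
fn main() {
    // "" is lexicographically smallest, so it may only appear first
    // in a sorted dictionary.
    let sorted_dict = ["", "a", "b"];
    assert!(sorted_dict.iter().is_sorted_by(|a, b| a < b));

    // An empty string anywhere but first breaks the sort order and
    // is rejected by the same check -- no special-casing needed.
    let bad_dict = ["a", "", "b"];
    assert!(!bad_dict.iter().is_sorted_by(|a, b| a < b));
}
```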
let next_field_name = self.metadata.get(field_id)?;
if let Some(current_name) = current_field_name {
    if next_field_name <= current_name {
Hi, I caught this post-merge, but in this branch we can't assume the field names in the metadata dictionary are unique. So when we perform the probing mentioned in https://github.com/apache/arrow-rs/pull/7934/files#r2213356637, we should only check whether next_field_name < current_name.
fixed in #7961
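The distinction fixed in #7961 is between rejecting `<=` (which also forbids duplicate names) and rejecting only `<` (a strict decrease). A minimal sketch, with a hypothetical helper name:

```rust
// Hypothetical sketch of the post-merge behavior: with an unsorted
// dictionary, field names need not be unique, so the probe rejects
// only a strict decrease (next < current), never equality.
fn names_non_decreasing(names: &[&str]) -> bool {
    let mut prev: Option<&str> = None;
    for &name in names {
        if let Some(p) = prev {
            if name < p {
                return false; // strictly out of order
            }
        }
        prev = Some(name);
    }
    true
}

fn main() {
    // Duplicate names are tolerated...
    assert!(names_non_decreasing(&["a", "a", "b"]));
    // ...but a decrease is not.
    assert!(!names_non_decreasing(&["b", "a"]));
}
```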
# Which issue does this PR close?

- Follow-up to #7901

# Rationale for this change

- #7934 introduced a minor regression, in (accidentally?) forbidding the empty string as a dictionary key. Fix the bug and simplify the code a bit further while we're at it.

# What changes are included in this PR?

Revert the unsorted dictionary check back to what it had been (it just uses `Iterator::is_sorted_by` now, instead of `slice::is_sorted_by`). Remove the redundant offset monotonicity check from the ordered dictionary path, relying on the fact that string slice extraction will fail anyway if the offsets are not monotonic. Improve the error message now that it does double duty.

# Are these changes tested?

New unit tests for dictionaries containing the empty string. As a side effect, we now have at least a little coverage for sorted dictionaries -- somehow, I couldn't find any existing unit test that creates a sorted dictionary??

# Are there any user-facing changes?

No

Co-authored-by: Andrew Lamb <[email protected]>
# Rationale for this change

If a variant has an unsorted dictionary, you can't assume fields are unique or ordered by name. This PR updates the logical equality check among `VariantMetadata` to properly handle this case.

- Closes #7952

It also fixes a bug in #7934 where we do a uniqueness check when probing an unsorted dictionary.

Co-authored-by: Andrew Lamb <[email protected]>