
Support encoding to file-like object #754


Draft · wants to merge 20 commits into main

Conversation

@NicolasHug (Member) commented Jul 5, 2025

This PR adds the to_file_like() method to the AudioEncoder. This allows users to encode samples into a custom file-like object that supports seek and write methods.

Marking this as draft because there are unresolved points (see below), but this is still ready for a solid first review round.

@facebook-github-bot added the "CLA Signed" label Jul 5, 2025
@NicolasHug marked this pull request as draft July 5, 2025 14:49

# This check is useless, but it's critical to keep it to ensure that samples
# is still alive during the call to encode_audio_to_file_like.
assert samples.is_contiguous()
@NicolasHug (Member, Author) commented Jul 6, 2025

I hate that we have to do this, but I do not see any other obvious way to keep the input samples alive for the duration of the call.
Claude is saying that we could just pass samples as a py::object. We won't be able to turn it back into a tensor (as mentioned in the code comment above), but Claude claims that passing it as a parameter will ensure that pybind11 keeps it alive. I cannot verify this.

@scotts, any thoughts?

@scotts (Contributor) commented Jul 17, 2025

On the keep-alive part, I believe Claude is right. If we pass something as a py::object, it gets properly reference-counted, which keeps the object alive. When we launder a pointer as an int, there's no reference counting.

Of course, we would ideally just pass the tensor, but we run into problems passing tensors as tensors into the pybind11 code. The next simplest option, which we probably can't afford for performance reasons, is to copy the tensor into either bytes or a list and pass those as a py::object. But since samples will be large, I don't think we want to do that.
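
For illustration, here is a minimal sketch of that keep-alive idea (module, function, and parameter names are illustrative, not the actual torchcodec bindings): the tensor is passed as a py::object purely so that pybind11 holds a reference to it while the C++ function runs, and the data is still reached through the laundered pointer.

#include <pybind11/pybind11.h>

#include <cstdint>

namespace py = pybind11;

// Sketch only: `samples` is never converted back into a tensor. It exists so
// that a reference to the Python tensor is held for the duration of the call.
void encode_audio_to_file_like_sketch(
    py::object samples, // keep-alive only
    int64_t dataPtr, // laundered pointer to the contiguous float32 samples
    int64_t sampleRate,
    py::object fileLike) {
  auto* data = reinterpret_cast<float*>(dataPtr);
  // ... wrap `data` in a non-owning tensor and encode, writing the output
  // through `fileLike` ...
  (void)samples;
  (void)data;
  (void)sampleRate;
  (void)fileLike;
}

PYBIND11_MODULE(keep_alive_sketch, m) {
  m.def(
      "encode_audio_to_file_like_sketch",
      &encode_audio_to_file_like_sketch,
      py::arg("samples"),
      py::arg("data_ptr"),
      py::arg("sample_rate"),
      py::arg("file_like"));
}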

Most workarounds I can think of are worse. One that might be just as bad, but could potentially apply to both this situation and decoder creation:

  1. On the pybind11 side, we only create the AVIOFileLikeContext. We don't create the encoder or decoder. We do still accept the file-like objects, and they are still stored in the AVIOFileLikeContext.
  2. We return an int from the C++ side to the Python side, where that int is a pointer to the AVIOFileLikeContext.
  3. On the PyTorch custom ops side, we have functions for create-from-file-like and encode-to-file-like that accept the int value and do a reinterpret_cast<AVIOFileLikeContext*> in the C++. Those are then passed to the decoder or encoder.

As it is right now, we're doing a lot of ugly pointer casting with tensors. The above may actually be better, as then the pybind11 code is only really concerned with creating AVIOFileLikeContext objects. It doesn't even need to know about encoders and decoders. A rough sketch of this split is below.
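
Something along these lines (hypothetical function names; it assumes AVIOFileLikeContext is constructible from the file-like py::object as in step 1, and that the project's headers and usual includes are available):

// pybind11 side: only build the context, then launder its address to Python.
// Ownership is released here and reclaimed by the custom op below.
int64_t create_file_like_context_sketch(py::object fileLike) {
  auto context = std::make_unique<AVIOFileLikeContext>(fileLike);
  return reinterpret_cast<int64_t>(context.release());
}

// Custom ops side: recover the pointer, take ownership back, and hand the
// context to the encoder or decoder.
void encode_audio_to_file_like_op_sketch(int64_t contextPtr /*, samples, ... */) {
  std::unique_ptr<AVIOFileLikeContext> context(
      reinterpret_cast<AVIOFileLikeContext*>(contextPtr));
  // ... construct the AudioEncoder with std::move(context) and encode ...
}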

class AVIOFileLikeContext : public AVIOContextHolder {
// TODO: explain this. We probably don't want it.
class __attribute__((visibility("default"))) AVIOFileLikeContext
: public AVIOContextHolder {
@NicolasHug (Member, Author) commented Jul 6, 2025

So, this has been causing some headaches for me.

Forget about the __attribute__((visibility("default"))) for a second.
In this PR, the AVIOFileLikeContext class is now a direct dependency of the Encoder [1]. This means that this file must be compiled along with the libtorchcodec_decoderN library. Before (i.e. in main), this file was only compiled into libtorchcodec_pybind_opsN.

And we know from https://pybind11.readthedocs.io/en/stable/faq.html#someclass-declared-with-greater-visibility-than-the-type-of-its-field-someclass-member-wattributes that this class should be compiled with "hidden" visibility: it holds a py::object fileLike_ attribute, which itself is set to "hidden" by pybind.

But if we were to set AVIOFileLikeContext to "hidden", then we'd have another problem: we'd get the same kind of warning, but for the Encoder, because the Encoder would not be "hidden":

Encoder declared with greater visibility than the type of its field Encoder::avioFileLikeContext_

We can't set the Encoder to hidden: that would make it invisible to custom ops and other things. It just wouldn't work.

[1] See comment below in this PR about the need for 2 different constructors: one for AVIOFileLikeContext, one for AVIOToTensorContext.


So here, with __attribute__((visibility("default"))), we are explicitly setting the visibility of AVIOFileLikeContext to "default".
"default" actually means public, i.e. NOT hidden. This overrides pybind11's preference for keeping the py::object fileLike_ hidden, and avoids the related warnings/errors. I do not think this is a great solution though, because IIUC we're not actually solving the issue; we're mainly silencing a warning by forcing something to be "default"/public when it shouldn't be. I think we are thus exposing ourselves to symbol conflicts:

As to why -fvisibility=hidden is necessary, because pybind modules could have been compiled under different versions of pybind itself, it is also important that the symbols defined in one module do not clash with the potentially-incompatible symbols defined in another. While Python extension modules are usually loaded with localized symbols (under POSIX systems typically using dlopen with the RTLD_LOCAL flag), this Python default can be changed, but even if it isn’t it is not always enough to guarantee complete independence of the symbols involved when not using -fvisibility=hidden.


If we don't want to do this (by "this" I mean setting AVIOFileLikeContext to "default"), then I think we'll need to avoid relying on AVIOFileLikeContext within the Encoder. That means we would only interact with the base AVIOContextHolder class, and I think that means we'll need to expose getOutputTensor as virtual, even though it will only make sense for AVIOToTensorContext (sketched below).

I hope this makes some sense. I'm still wrapping my head around all this.
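
For concreteness, here is what that alternative could look like (hypothetical signatures, not the real classes, and assuming the usual torch includes): the Encoder only ever sees the base class, so AVIOFileLikeContext never has to appear in the Encoder's header.

class AVIOContextHolder {
 public:
  virtual ~AVIOContextHolder() = default;

  // Only meaningful when encoding to a tensor; the base implementation
  // signals misuse.
  virtual torch::Tensor getOutputTensor() {
    TORCH_CHECK(false, "getOutputTensor() is only supported by AVIOToTensorContext");
  }
};

class AVIOToTensorContext : public AVIOContextHolder {
 public:
  torch::Tensor getOutputTensor() override; // returns the encoded bytes
};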

@scotts (Contributor) commented:

@NicolasHug, we might be able to get around this; see my comment below about storing and accepting an AVIOContextHolder and doing a dynamic cast to AVIOToTensorContext as needed.

${pybind_ops_library_name}
PUBLIC
"-fvisibility=hidden"
)
@NicolasHug (Member, Author) commented:

This can be removed now because the new __attribute__((visibility("default"))) takes precedence over the compilation flag. See the other big comment above.

@NicolasHug marked this pull request as ready for review July 6, 2025 14:51
@NicolasHug marked this pull request as draft July 7, 2025 09:14
"format",
"file_like",
"bit_rate",
"num_channels");
@scotts (Contributor) commented:

Oh, wow, we're not listing the parameters for create_from_file_like(). Are there benefits to doing so?

std::optional<int64_t> bit_rate = std::nullopt,
std::optional<int64_t> num_channels = std::nullopt) {
// We assume float32 *and* contiguity, this must be enforced by the caller.
auto tensor_options = torch::TensorOptions().dtype(torch::kFloat32);
@scotts (Contributor) commented:

If we keep this technique, we can probably allow all dtypes by passing the dtype in from the Python side as an int. I assume the Python and C++ enums agree on values, but even if they don't, we can figure out the mapping. Ugly, but possible.
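
For example, a sketch of an explicit mapping (the integer codes are made up for illustration; the Python side would send the agreed-upon code alongside the data):

// Map a dtype code received from Python to a torch::ScalarType, without
// relying on the Python and C++ enum values agreeing.
torch::ScalarType scalarTypeFromCode(int64_t code) {
  switch (code) {
    case 0:
      return torch::kFloat32;
    case 1:
      return torch::kFloat64;
    case 2:
      return torch::kInt16;
    default:
      TORCH_CHECK(false, "Unsupported dtype code: ", code);
  }
}

The tensor_options line above would then use .dtype(scalarTypeFromCode(dtype_code)) instead of hard-coding torch::kFloat32.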

AudioEncoder(
const torch::Tensor& samples,
int sampleRate,
std::string_view formatName,
std::unique_ptr<AVIOToTensorContext> avioContextHolder,
std::unique_ptr<AVIOToTensorContext> AVIOToTensorContext,
@scotts (Contributor) commented:

Nit: variable name should start with lower case, maybe avioToTensorContext.

// Stores the AVIOContext for the output tensor buffer.
std::unique_ptr<AVIOToTensorContext> avioContextHolder_;
std::unique_ptr<AVIOToTensorContext> avioToTensorContext_;
std::unique_ptr<AVIOFileLikeContext> avioFileLikeContext_;

@scotts (Contributor) commented:

I think a simpler approach is to only store and accept an AVIOContextHolder, but when you need to access AVIOToTensorContext-specific methods, do a dynamic_cast<AVIOToTensorContext*>. Dynamic casts are also ugly, but this localizes the ugliness to the one place we need it, and avoids having parallel variables and parallel constructors.

A dynamic cast, in C++, can be used to say "I only have a pointer to Base, but I know that this object is actually type Derived: return me a pointer to Derived." If it fails, you get back nullptr.

This might solve the linking-visibility problem above; I'm not sure. The reason I think it might is that we remove AVIOFileLikeContext from the declaration of Encoder. It will only appear inside the implementation.
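
For illustration, the one place that needs the tensor-specific output could look something like this (names follow the discussion above, but this is a sketch rather than the actual implementation):

// Somewhere inside the encoder: downcast the stored base-class holder only
// where the encoded tensor is actually needed.
torch::Tensor getEncodedOutputTensor(AVIOContextHolder* avioContextHolder) {
  auto* toTensorContext = dynamic_cast<AVIOToTensorContext*>(avioContextHolder);
  TORCH_CHECK(
      toTensorContext != nullptr,
      "The encoded output was written to a file-like object, not to a tensor.");
  return toTensorContext->getOutputTensor();
}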

Labels: CLA Signed (managed by the Meta Open Source bot)
Projects: none yet
3 participants