
"There appear to be 1 leaked semaphore objects to clean up at shutdown" while llava export #5171

@JLake310

Description

🐛 Describe the bug

Hi there,
I got the error below while exporting LLaVA.

$ python -m executorch.examples.models.llava.export_llava --pte-name llava.pte --with-artifacts
[INFO 2024-09-09 13:50:49,162 export_llava.py:296] Exporting Llava model to ExecuTorch with sdpa_with_kv_cache: True, max_seq_len: 768
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/Users/jaeyeonkim/miniconda3/envs/executorch/lib/python3.10/site-packages/transformers/models/llava/configuration_llava.py:100: FutureWarning: The `vocab_size` argument is deprecated and will be removed in v4.42, since it can be inferred from the `text_config`. Passing this argument has no effect
  warnings.warn(
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:47<00:00, 15.85s/it]
[1]    10753 killed     python -m executorch.examples.models.llava.export_llava --pte-name llava.pte 
/Users/jaeyeonkim/miniconda3/envs/executorch/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown

It seems my Mac ran out of memory, since it only has 16 GB of RAM.
(Please let me know if low memory isn't actually the problem.)
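To help confirm (or rule out) the out-of-memory hypothesis, here is a minimal standard-library sketch that runs the export as a child process and reports its peak resident memory and exit status; a return code of -9 (SIGKILL) is consistent with the macOS memory manager killing the process. The `run_with_peak_rss` helper name and the small allocation demo are illustrative, not part of the ExecuTorch tooling.

```python
# Sketch: run a command as a child process and report its peak RSS.
# Standard library only; on macOS ru_maxrss is in bytes, on Linux in KiB.
import resource
import subprocess
import sys


def run_with_peak_rss(cmd):
    """Run `cmd`; return (returncode, peak child RSS in bytes)."""
    proc = subprocess.run(cmd)
    usage = resource.getrusage(resource.RUSAGE_CHILDREN)
    scale = 1 if sys.platform == "darwin" else 1024
    return proc.returncode, usage.ru_maxrss * scale


if __name__ == "__main__":
    # Demo child that allocates ~50 MiB; swap in the real export command,
    # e.g. ["python", "-m", "executorch.examples.models.llava.export_llava", ...]
    rc, peak = run_with_peak_rss(
        [sys.executable, "-c", "b = bytearray(50 * 2**20)"]
    )
    # rc == -9 would indicate SIGKILL, consistent with an OOM kill.
    print(f"returncode={rc}, peak RSS ~ {peak / 2**20:.0f} MiB")
```

If the export's peak RSS approaches the machine's 16 GB before the kill, that would support the low-memory explanation.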

Q1) Is there a minimum hardware requirement for exporting LLaVA?
Q2) Is there any way to reduce memory usage during the export process?

Sincerely,

Versions

PyTorch version: 2.5.0.dev20240901
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: macOS 14.5 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.29.6
Libc version: N/A

Python version: 3.10.0 (default, Mar 3 2022, 03:54:28) [Clang 12.0.0 ] (64-bit runtime)
Python platform: macOS-14.5-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Apple M3

Versions of relevant libraries:
[pip3] executorch==0.4.0a0+99fbca3
[pip3] numpy==1.21.3
[pip3] torch==2.5.0.dev20240901
[pip3] torchaudio==2.5.0.dev20240901
[pip3] torchsr==1.0.4
[pip3] torchvision==0.20.0.dev20240901
[conda] executorch 0.4.0a0+99fbca3 pypi_0 pypi
[conda] numpy 1.21.3 pypi_0 pypi
[conda] torch 2.5.0.dev20240901 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240901 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240901 pypi_0 pypi

cc @mergennachin @cccclai @helunwencser @dvorjackz

Metadata

Labels

module: examples (Issues related to demos under examples/)
module: llm (Issues related to LLM examples and apps, and to the extensions/llm/ code)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Status

Done
