Hello, I downloaded the Llama 2 uncensored GGML version by TheBloke, but when I try to convert it to GGUF, the conversion fails:
~/llama.cpp $ python convert-llama-ggmlv3-to-gguf.py --input llama2_7b_chat_uncensored.ggmlv3.q5_K_S.bin --output llama2_7b_chat_uncensored.ggmlv3.q5_K_S.gguf
* Using config: Namespace(input=PosixPath('llama2_7b_chat_uncensored.ggmlv3.q5_K_S.bin'), output=PosixPath('llama2_7b_chat_uncensored.ggmlv3.q5_K_S.gguf'), name=None, desc=None, gqa=1, eps='5.0e-06', context_length=2048, model_metadata_dir=None, vocab_dir=None, vocabtype='spm')
=== WARNING === Be aware that this conversion script is best-effort. Use a native GGUF model if possible. === WARNING ===
* Scanning GGML input file
* GGML model hyperparameters: <Hyperparameters: n_vocab=32000, n_embd=4096, n_mult=256, n_head=32, n_layer=32, n_rot=128, n_ff=11008, ftype=16>
=== WARNING === Special tokens may not be converted correctly. Use --model-metadata-dir if possible === WARNING ===
* Preparing to save GGUF file
* Adding model parameters and KV items
* Adding 32000 vocab item(s)
* Adding 291 tensor(s)
Traceback (most recent call last):
  File "/data/data/com.termux/files/home/llama.cpp/convert-llama-ggmlv3-to-gguf.py", line 345, in <module>
    main()
  File "/data/data/com.termux/files/home/llama.cpp/convert-llama-ggmlv3-to-gguf.py", line 341, in main
    converter.save()
  File "/data/data/com.termux/files/home/llama.cpp/convert-llama-ggmlv3-to-gguf.py", line 165, in save
    self.add_tensors(gguf_writer)
  File "/data/data/com.termux/files/home/llama.cpp/convert-llama-ggmlv3-to-gguf.py", line 273, in add_tensors
    mapped_name = nm.get(name)
                  ^^^^^^
AttributeError: 'TensorNameMap' object has no attribute 'get'
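For context: the failing line is the script doing a dict-style nm.get(name) lookup, while the TensorNameMap object returned by the installed gguf Python package apparently no longer behaves like a plain dict. That points to a version mismatch between convert-llama-ggmlv3-to-gguf.py and the gguf package; pulling the latest llama.cpp and reinstalling its Python requirements (pip install -r requirements.txt) should bring the two back in sync. Below is a minimal sketch of the difference, assuming a recent gguf package; the tensor name, and the exact get_tensor_name_map / get_name signatures, are taken from that assumption and may differ in the version you have installed.

from gguf import MODEL_ARCH
from gguf.tensor_mapping import get_tensor_name_map

# Build the name map the way the converter does (32 layers, per the
# hyperparameters printed above).
nm = get_tensor_name_map(MODEL_ARCH.LLAMA, 32)

# Hypothetical GGML tensor name, just for illustration.
name = "layers.0.attention.wq.weight"

# Old dict-style lookup, which is what the script still does at line 273:
#   mapped_name = nm.get(name)   # AttributeError on a TensorNameMap object
# Newer object API (assumed; check the gguf version actually installed):
mapped_name = nm.get_name(name, try_suffixes=(".weight", ".bias"))
print(mapped_name)

In other words, the script copy and the gguf module on disk come from different revisions; updating both together is usually enough, without patching anything by hand.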