
#3746 introduces error in convert-mpt-hf-to-gguf.py #3783

Closed
@maddes8cht

Description


Expected Behavior

In release b1412, I can successfully run the convert-mpt-hf-to-gguf.py script.

Current Behavior

With the changes introduced in #3746, I get an error with every MPT model:

python convert-mpt-hf-to-gguf.py e:\hf\mpt-7b-storywriter\
gguf: loading model mpt-7b-storywriter
gguf: found 2 model parts
This gguf file is for Little Endian only
gguf: get model metadata
gguf: get tokenizer metadata
gguf: get gpt2 tokenizer vocab
Traceback (most recent call last):
  File "e:\hf\llama.cpp\convert-mpt-hf-to-gguf.py", line 140, in <module>
    if tokenizer.added_tokens_decoder[i].special:
AttributeError: 'GPTNeoXTokenizerFast' object has no attribute 'added_tokens_decoder'

Environment and Context

Windows 10, running convert scripts in a conda environment with python 3.10.13


Labels

bug, stale
