
Intel CPU and graphics MacBook Pro: failed to create context with model './models/model.q4_k_s.gguf' #3129

Closed
@Bateoriginal

Description


Issue: Error when loading model on MacBook Pro with Intel Core i7 and Intel Iris Plus

System Information:

  • Device: MacBook Pro
  • CPU: Quad-Core Intel Core i7
  • Graphics: Intel Iris Plus

Steps to Reproduce:

  • Cloned the latest version of the repository.
  • Executed make.
  • Created a models directory using mkdir models.
  • Within the models folder, downloaded the model using: wget https://huggingface.co/substratusai/Llama-2-13B-chat-GGUF/resolve/main/model.bin -O model.q4_k_s.gguf
  • In the llama.cpp folder, executed the following command: ./main -t 4 -m ./models/model.q4_k_s.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story\n### Response:"
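For reference, here are the steps above condensed into a single shell script (the repository URL is assumed to be the upstream llama.cpp repo at the time of this issue; the model URL, file name, and run flags are copied verbatim from the steps above):

    # Build llama.cpp from source
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make

    # Fetch the quantized model into ./models
    mkdir models
    wget https://huggingface.co/substratusai/Llama-2-13B-chat-GGUF/resolve/main/model.bin -O models/model.q4_k_s.gguf

    # Run inference with the same parameters as in the report
    ./main -t 4 -m ./models/model.q4_k_s.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story\n### Response:"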

Error Message:

ggml_metal_init: loaded kernel_mul_mat_q6_K_f32            0x7fb57ba06090 | th_max =  896 | th_width =   16
ggml_metal_init: loaded kernel_mul_mm_f16_f32                         0x0 | th_max =    0 | th_width =    0
ggml_metal_init: load pipeline error: Error Domain=CompilerError Code=2 "AIR builtin function was called but no definition was found." UserInfo={NSLocalizedDescription=AIR builtin function was called but no definition was found.}
llama_new_context_with_model: ggml_metal_init() failed
llama_init_from_gpt_params: error: failed to create context with model './models/model.q4_k_s.gguf'
main: error: unable to load model
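Note: the failure occurs inside ggml_metal_init, i.e. while loading the Metal GPU kernels, which the Intel Iris Plus apparently cannot compile. A possible workaround I have not tried yet (assuming the Makefile in this version supports the LLAMA_NO_METAL option) would be to rebuild without Metal so everything runs on the CPU:

    make clean
    LLAMA_NO_METAL=1 make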

I would appreciate any guidance or advice on how to resolve this issue. Thank you!
