Eval bug: Release b4524 breaks serving of granite-code models #11500

Closed
cgruver opened this issue Jan 29, 2025 · 2 comments · Fixed by #11533
Labels
bug Something isn't working

Comments


cgruver commented Jan 29, 2025

Name and Version

Changes made to chat template support in release b4524 of llama.cpp break serving of granite-code models.

./bin/llama-cli --version
version: 4524 (6171c9d2)
built with Intel(R) oneAPI DPC++/C++ Compiler 2025.0.4 (2025.0.4.20241205) for x86_64-unknown-linux-gnu

Operating systems

Linux

GGML backends

SYCL, CPU

Hardware

clinfo -l
Platform #0: Intel(R) OpenCL
 `-- Device #0: Intel(R) Core(TM) Ultra 7 155H
Platform #1: Intel(R) OpenCL Graphics
 `-- Device #0: Intel(R) Arc(TM) Graphics

Models

Granite Code 3b & 8b

granite-code:3b
granite-code:8b

Problem description & steps to reproduce

  1. Build llama.cpp from release b4523 and observe that, despite a warning message, the server works:

    Run with CPU:

    ./bin/llama-server --model ~/granite-code:3b --host 0.0.0.0 
    

    Run with GPU:

    ./bin/llama-server --model ~/granite-code:3b --host 0.0.0.0 --n-gpu-layers 999 --flash-attn --ctx-size 32768
    

    Warning message:

    The chat template that comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses
    
  2. Build llama.cpp from release b4524 or later and observe a failure and core dump:

    Error:

    main: The chat template that comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses
    terminate called after throwing an instance of 'std::runtime_error'
      what():  this custom template is not supported
    Aborted (core dumped)
    

First Bad Commit

Release b4524

Relevant log output

./bin/llama-server --model ~/granite-code:3b --host 0.0.0.0 --n-gpu-layers 999 --flash-attn --ctx-size 32768

build: 4524 (6171c9d2) with Intel(R) oneAPI DPC++/C++ Compiler 2025.0.4 (2025.0.4.20241205) for x86_64-unknown-linux-gnu
system info: n_threads = 6, n_threads_batch = 6, total_threads = 22

system_info: n_threads = 6 (n_threads_batch = 6) / 22 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | 

main: HTTP server is listening, hostname: 0.0.0.0, port: 8080, http threads: 21
main: loading model
srv    load_model: loading model '/root/granite-code:3b'
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_model_load_from_file_impl: using device SYCL0 (Intel(R) Arc(TM) Graphics) - 89909 MiB free
llama_model_loader: loaded meta data with 33 key-value pairs and 514 tensors from /root/granite-code:3b (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Granite 3b Code Instruct 128k
llama_model_loader: - kv   3:                           general.finetune str              = code-instruct-128k
llama_model_loader: - kv   4:                           general.basename str              = granite
llama_model_loader: - kv   5:                         general.size_label str              = 3B
llama_model_loader: - kv   6:                            general.license str              = apache-2.0
llama_model_loader: - kv   7:                               general.tags arr[str,3]       = ["code", "granite", "text-generation"]
llama_model_loader: - kv   8:                           general.datasets arr[str,9]       = ["bigcode/commitpackft", "TIGER-Lab/M...
llama_model_loader: - kv   9:                          llama.block_count u32              = 32
llama_model_loader: - kv  10:                       llama.context_length u32              = 128000
llama_model_loader: - kv  11:                     llama.embedding_length u32              = 2560
llama_model_loader: - kv  12:                  llama.feed_forward_length u32              = 10240
llama_model_loader: - kv  13:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  14:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv  15:                       llama.rope.freq_base f32              = 10000000.000000
llama_model_loader: - kv  16:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  17:                          general.file_type u32              = 2
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 49152
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 80
llama_model_loader: - kv  20:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  21:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  22:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  23:                         tokenizer.ggml.pre str              = refact
llama_model_loader: - kv  24:                      tokenizer.ggml.tokens arr[str,49152]   = ["<|endoftext|>", "<fim_prefix>", "<f...
llama_model_loader: - kv  25:                  tokenizer.ggml.token_type arr[i32,49152]   = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  26:                      tokenizer.ggml.merges arr[str,48891]   = ["Ġ Ġ", "ĠĠ ĠĠ", "ĠĠĠĠ ĠĠ...
llama_model_loader: - kv  27:                tokenizer.ggml.bos_token_id u32              = 0
llama_model_loader: - kv  28:                tokenizer.ggml.eos_token_id u32              = 0
llama_model_loader: - kv  29:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  30:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  31:                    tokenizer.chat_template str              = {% for message in messages %}\n{% if m...
llama_model_loader: - kv  32:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  289 tensors
llama_model_loader: - type q4_0:  224 tensors
llama_model_loader: - type q6_K:    1 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type   = Q4_0
print_info: file size   = 1.86 GiB (4.58 BPW) 
load: special tokens cache size = 19
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
load: token to piece cache size = 0.2826 MB
print_info: arch             = llama
print_info: vocab_only       = 0
print_info: n_ctx_train      = 128000
print_info: n_embd           = 2560
print_info: n_layer          = 32
print_info: n_head           = 32
print_info: n_head_kv        = 32
print_info: n_rot            = 80
print_info: n_swa            = 0
print_info: n_embd_head_k    = 80
print_info: n_embd_head_v    = 80
print_info: n_gqa            = 1
print_info: n_embd_k_gqa     = 2560
print_info: n_embd_v_gqa     = 2560
print_info: f_norm_eps       = 0.0e+00
print_info: f_norm_rms_eps   = 1.0e-05
print_info: f_clamp_kqv      = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale    = 0.0e+00
print_info: n_ff             = 10240
print_info: n_expert         = 0
print_info: n_expert_used    = 0
print_info: causal attn      = 1
print_info: pooling type     = 0
print_info: rope type        = 0
print_info: rope scaling     = linear
print_info: freq_base_train  = 10000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn  = 128000
print_info: rope_finetuned   = unknown
print_info: ssm_d_conv       = 0
print_info: ssm_d_inner      = 0
print_info: ssm_d_state      = 0
print_info: ssm_dt_rank      = 0
print_info: ssm_dt_b_c_rms   = 0
print_info: model type       = 3B
print_info: model params     = 3.48 B
print_info: general.name     = Granite 3b Code Instruct 128k
print_info: vocab type       = BPE
print_info: n_vocab          = 49152
print_info: n_merges         = 48891
print_info: BOS token        = 0 '<|endoftext|>'
print_info: EOS token        = 0 '<|endoftext|>'
print_info: EOT token        = 0 '<|endoftext|>'
print_info: UNK token        = 0 '<|endoftext|>'
print_info: PAD token        = 0 '<|endoftext|>'
print_info: LF token         = 145 'Ä'
print_info: EOG token        = 0 '<|endoftext|>'
print_info: max token length = 512
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
load_tensors: offloading 32 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 33/33 layers to GPU
load_tensors:        SYCL0 model buffer size =  1903.13 MiB
load_tensors:   CPU_Mapped model buffer size =    98.44 MiB
llama_init_from_model: n_seq_max     = 1
llama_init_from_model: n_ctx         = 32768
llama_init_from_model: n_ctx_per_seq = 32768
llama_init_from_model: n_batch       = 2048
llama_init_from_model: n_ubatch      = 512
llama_init_from_model: flash_attn    = 1
llama_init_from_model: freq_base     = 10000000.0
llama_init_from_model: freq_scale    = 1
llama_init_from_model: n_ctx_per_seq (32768) < n_ctx_train (128000) -- the full capacity of the model will not be utilized
GGML_SYCL_DEBUG: 0
GGML_SYCL_FORCE_MMQ:   no
GGML_SYCL_F16: no
Found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |         XMX  |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |          or  |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version| Tensor Cores |
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|--------------|
| 0|     [opencl:gpu:0]|                     Intel Arc Graphics|    3.0|    128|    1024|   32| 94277M|       24.35.30872.32|            no|
llama_kv_cache_init: kv_size = 32768, offload = 1, type_k = 'f16', type_v = 'f16', n_layer = 32, can_shift = 1
llama_kv_cache_init:      SYCL0 KV buffer size = 10240.00 MiB
llama_init_from_model: KV self size  = 10240.00 MiB, K (f16): 5120.00 MiB, V (f16): 5120.00 MiB
llama_init_from_model:  SYCL_Host  output buffer size =     0.19 MiB
llama_init_from_model:      SYCL0 compute buffer size =   116.00 MiB
llama_init_from_model:  SYCL_Host compute buffer size =   384.01 MiB
llama_init_from_model: graph nodes  = 1127
llama_init_from_model: graph splits = 66
common_init_from_params: setting dry_penalty_last_n to ctx_size = 32768
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
srv          init: initializing slots, n_slots = 1
slot         init: id  0 | task -1 | new slot n_ctx_slot = 32768
main: model loaded
main: The chat template that comes with this model is not yet supported, falling back to chatml. This may cause the model to output suboptimal responses
terminate called after throwing an instance of 'std::runtime_error'
  what():  this custom template is not supported
Aborted (core dumped)
Collaborator

ochafik commented Jan 31, 2025

Hey @cgruver, thanks for reporting this!

I managed to reproduce the crash; it was definitely introduced by #11016

llama-cli -hf mradermacher/granite-8b-code-instruct-128k-GGUF:Q4_K_M -fa -p Hey

I seem to have altered the default template logic for unsupported templates, working on a fix.

In the meantime, as a workaround you could use --jinja, but it's somewhat broken with llama-server until #11531 gets merged.

Author

cgruver commented Jan 31, 2025

@ochafik Thank You!
