Feature Request: Support for Gemma 3 models family #12345
Labels: enhancement (New feature or request)
Comments
There's an implementation in Ollama, which could probably(?) be ported. It's written in Go, but it might be helpful: ollama/ollama#9661. Seems to be working: Example
I just tested b4875 (7841fc7 / #12343, ❤️ @ngxson) using gemma-3-4b-it-Q8_0.gguf (GGUFs @ ggml-org), and it seems to be working now :). Example
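For anyone wanting to try a supported build, a minimal sketch of running the converted model with llama.cpp's CLI (assumes a build at b4875 or later with Gemma 3 support; the model path and prompt are placeholders):

```shell
# Illustrative only; requires a llama.cpp build that includes Gemma 3 support.
# -m: path to the GGUF model, -p: prompt, -n: max tokens to generate.
./llama-cli -m gemma-3-4b-it-Q8_0.gguf -p "Explain GGUF in one sentence." -n 64
```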
Prerequisites
Feature Description
It would be nice if we could run the Gemma 3 model family in llama.cpp.
Currently, conversion to GGUF fails with:
ERROR:hf-to-gguf:Model Gemma3ForCausalLM is not supported
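For reference, the failure can be reproduced with the repository's conversion script (the model directory and output path here are placeholders):

```shell
# Illustrative invocation; adjust paths to your llama.cpp checkout and model download.
python convert_hf_to_gguf.py /path/to/gemma-3-4b-it --outfile gemma-3-4b-it.gguf
# On builds predating Gemma 3 support this aborts with:
# ERROR:hf-to-gguf:Model Gemma3ForCausalLM is not supported
```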
Motivation
Earlier Gemma model families have been supported so far.
Possible Implementation
No response