Issues: intel/ipex-llm
Open issues
IPv6 needs to be disabled before PPA install · user issue · #13112 · opened Apr 26, 2025 by dennis-george0
[XPU] library mismatch and version issue while performing fine-tuning on B580 · user issue · #13108 · opened Apr 24, 2025 by raj-ritu17
Ollama failed to run deepseek-coder-v2, Error: unable to load model · user issue · #13107 · opened Apr 24, 2025 by weryswang
Can't set Ollama context size - seems to be fixed to 8k · user issue · #13106 · opened Apr 24, 2025 by kirel
--verbose-prompt does not print any additional information · user issue · #13090 · opened Apr 17, 2025 by HanShengGoodWay
Feature request - add support for the mojo lang and max platform · #13086 · opened Apr 17, 2025 by NewtonChutney
IPEX-LLM Slow Token Generation on Gemma 3 12B on Arc A770M · user issue · #13080 · opened Apr 15, 2025 by Sketchfellow
Unable to find docker image intelanalytics/ipex-llm-inference-cpp-xpu:2.2.0-SNAPSHOT · #13078 · opened Apr 15, 2025 by yshashix
The NPU version of llama.cpp did not return an appropriate response on Intel Core Ultra 7 268V · user issue · #13074 · opened Apr 14, 2025 by kotauchisunsun
llama-cpp-ipex-llm-2.2.0-ubuntu-xeon NOT Support 4XARC770 · user issue · #13065 · opened Apr 10, 2025 by macafeeee