
Issues: mlc-ai/mlc-llm

Project Tracking (Open)
#647 opened Aug 2, 2023 by tqchen

Model Request Tracking (Open)
#1042 opened Oct 9, 2023 by CharlieFRuan
Issues list

[Bug] Cannot auto device detect without internet (label: bug)
#3214 opened Apr 26, 2025 by Raviu56
[Question] (label: question)
#3209 opened Apr 18, 2025 by haoxuanWeng
[Bug] Trouble running mlc_llm chat with Gemma 3 models (label: bug)
#3206 opened Apr 16, 2025 by grf53
[Bug] Missing post layernorm in CLIP model (label: bug)
#3205 opened Apr 16, 2025 by vincentccc
[Bug] RoPE doesn't work for llama-3 (label: bug)
#3202 opened Apr 14, 2025 by bene-ges
Error when launching the app (null pointer exception in a class file) (label: bug)
#3199 opened Apr 12, 2025 by Myl-Ma
[Question] Does MLC-LLM support multi-node parallelism? (label: question)
#3198 opened Apr 10, 2025 by shengxinhu
[Question] Cannot convert Qwen2.5-Omni-7B (label: question)
#3193 opened Apr 1, 2025 by hlovingness
[Question] How to evaluate the accuracy of models? (label: question)
#3188 opened Mar 24, 2025 by kunxiongzhu
[Bug] gemma3 WebGPU <unnamed> panicked (label: bug)
#3182 opened Mar 18, 2025 by nico-martin
[Question] Does it support multi-GPU (Intel Arc A770)? (label: question)
#3175 opened Mar 14, 2025 by savvadesogle