* Remove PyTorch 2.3 installation option for GPU
* Remove xpu_lnl option in installation guides for docs
* Update BMG quickstart
* Remove PyTorch 2.3 dependencies for GPU examples
* Update the graphmode example to use stable version 2.2.0
* Fix based on comments
If you encounter network issues when installing IPEX, you can instead install the IPEX-LLM dependencies for Intel XPU from source archives: download and install the `torch`/`torchvision`/`intel_extension_for_pytorch` wheels listed below before installing `ipex-llm`.
- For **Intel Core™ Ultra Processors (Series 2) with processor number 2xxV (code name Lunar Lake)**:
> All the wheel packages mentioned here are for Python 3.11. If you would like to use Python 3.9 or 3.10, modify the wheel names for ``torch``, ``torchvision``, and ``intel_extension_for_pytorch`` by replacing ``cp311`` with ``cp39`` or ``cp310``, respectively.
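The renaming rule above can be sketched in a few lines of Python; the wheel filename used here is a placeholder for illustration, not one of the actual downloads:

```python
# Sketch: adapt a cp311 wheel filename for another Python version.
# The wheel name below is a placeholder, not a real download.
def retag_wheel(wheel_name: str, py_tag: str) -> str:
    """Replace the cp311 interpreter/ABI tags with the given tag."""
    return wheel_name.replace("cp311", py_tag)

print(retag_wheel("torch-x.y.z-cp311-cp311-win_amd64.whl", "cp310"))
# torch-x.y.z-cp310-cp310-win_amd64.whl
```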
We recommend using [Miniforge](https://conda-forge.org/download/) to create a python environment.
> The ``xpu`` option will install IPEX-LLM with PyTorch 2.1 by default.
> If you encounter network issues during installation, refer to the [troubleshooting guide](../Overview/install_gpu.md#install-ipex-llm-from-wheel-1) for alternative steps.
```bash
pip install --pre --upgrade ipex-llm[cpp]
```
---
If your driver version is lower than `32.0.101.6449/32.0.101.6256`, update it first.
Download and install Miniforge for Windows from the [official page](https://conda-forge.org/download/). After installation, create and activate a Python environment:
```cmd
conda create -n llm python=3.11
conda activate llm
```
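Once the environment is active, a quick sanity check confirms which interpreter it resolves to; inside the `llm` environment this should report a 3.11.x release:

```shell
# Inside the activated "llm" environment this should report Python 3.11.x.
python --version
```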
With the `llm` environment active, install the appropriate `ipex-llm` package based on your use case:
#### For PyTorch and HuggingFace:
> If you encounter network issues while installing IPEX, refer to [this guide](../Overview/install_gpu.md#install-ipex-llm-from-wheel) for troubleshooting advice.
```cmd
pip install --pre --upgrade ipex-llm[cpp]
```
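If you want to confirm the package landed in the environment, a small optional check via the standard-library `importlib.metadata` works regardless of which `ipex-llm` extra you installed:

```python
# Optional sanity check: report the installed ipex-llm version, if any.
from importlib.metadata import PackageNotFoundError, version

try:
    print("ipex-llm", version("ipex-llm"))
except PackageNotFoundError:
    print("ipex-llm is not installed in this environment")
```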
---
Run a Quick PyTorch Example:
```
torch.Size([1, 1, 40, 40])
```
> [!TIP]
> Refer to the runtime configuration guide for [Linux](./install_pytorch26_gpu.md#runtime-configurations-1) or [Windows](./install_pytorch26_gpu.md#runtime-configurations) when running PyTorch with IPEX-LLM on a B-Series GPU.
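On Linux, runtime configuration typically amounts to exporting environment variables before launching your workload. `SYCL_CACHE_PERSISTENT` is one variable commonly mentioned for Intel GPUs, but treat this as a sketch and follow the linked guides for the settings recommended for your hardware:

```shell
# Sketch only: see the linked runtime-configuration guides for the
# authoritative settings. SYCL_CACHE_PERSISTENT=1 keeps the SYCL kernel
# cache across runs, avoiding repeated JIT compilation.
export SYCL_CACHE_PERSISTENT=1
```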
For benchmarks and performance measurement, refer to the [Benchmark Quickstart guide](./benchmark_quickstart.md).
---
### 3.2 Ollama
To integrate and run with **Ollama**, follow the [Ollama Quickstart guide](./ollama_quickstart.md).
### 3.3 llama.cpp
For instructions on how to run **llama.cpp** with IPEX-LLM, refer to the [llama.cpp Quickstart guide](./llama_cpp_quickstart.md).
### 3.4 vLLM
To set up and run **vLLM**, follow the [vLLM Quickstart guide](./vLLM_quickstart.md).