* [Hardware Requirements](#HardwareRequirements)
* [Software Requirements](#SoftwareRequirements)
* [Instantiating the development container](#Instantiatingthedevelopmentcontainer)
* [Command line options](#Commandlineoptions)
* [Using the mounts file](#Usingthemountsfile)
* [Updating the base docker](#Updatingthebasedocker)
* [Build base docker](#Buildbasedocker)
* [Test the newly built base docker](#Testthenewlybuiltbasedocker)
TAO Toolkit is a Python package hosted on the NVIDIA Python Package Index. It interacts with lower-level TAO dockers available from the NVIDIA GPU Accelerated Container Registry (NGC). The TAO containers come pre-installed with all dependencies required for training. The output of the TAO workflow is a trained model that can be deployed for inference on NVIDIA devices using DeepStream, TensorRT and Triton.

This repository contains the implementation for all the deep learning components and networks using the PyTorch backend. These routines are packaged as part of the TAO Toolkit PyTorch container in the Toolkit package. The source code here is compatible with PyTorch versions > 2.0.0.
## <a name='GettingStarted'></a>Getting Started

As soon as the repository is cloned, run the `envsetup.sh` file to check
if the build environment has the necessary dependencies, and the required
environment variables are set.
```sh
source ${PATH_TO_REPO}/scripts/envsetup.sh
```
We recommend adding this command to your local `~/.bashrc` file so that every new terminal instance sources the environment automatically.
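A minimal sketch of doing this idempotently, assuming `${PATH_TO_REPO}` points at your clone of this repository (`PROFILE` is made overridable here only so the snippet is easy to try against a scratch file):

```shell
#!/bin/sh
# Idempotently append the envsetup line to a shell profile.
# PROFILE defaults to ~/.bashrc; PATH_TO_REPO is assumed to be set in your environment.
PROFILE="${PROFILE:-$HOME/.bashrc}"
LINE='source ${PATH_TO_REPO}/scripts/envsetup.sh'
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || printf '%s\n' "$LINE" >> "$PROFILE"
```

The single quotes keep `${PATH_TO_REPO}` unexpanded in the profile, so it is resolved each time a new shell starts.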
|**Software**|**Version**|
| :--- | :--- |
| Ubuntu LTS | >=18.04 |
| python | >=3.10.x |
| docker-ce | >19.03.5 |
| docker-API | 1.40 |
|`nvidia-container-toolkit`| >1.3.0-1 |
| nvidia-container-runtime | 3.4.0-1 |
| nvidia-docker2 | 2.5.0-1 |
| nvidia-driver | >535.85 |
| python-pip | >21.06 |
### <a name='Instantiatingthedevelopmentcontainer'></a>Instantiating the development container

In order to maintain a uniform development environment across all users, TAO Toolkit provides a base environment Dockerfile in `docker/Dockerfile` that contains all the required third party dependencies for the developers. To instantiate the docker, simply run the `tao_pt` CLI. The usage for the command line launcher is shown below.
--mounts_file MOUNTS_FILE Path to the mounts file.
--shm_size SHM_SIZE Shared memory size for docker
--run_as_user Flag to run as user
--tag TAG The tag value for the local dev docker.
--ulimit ULIMIT Docker ulimits for the host machine.
--port PORT Port mapping (e.g. 8889:8889).
```sh
tao_pt --gpus all \
       --env PYTHONPATH=/tao-pt
```
Running deep neural networks implies working on large datasets. These datasets are usually stored on network share drives with significantly higher storage capacity. Since the `tao_pt` CLI wrapper uses docker containers under the hood, these drives/mount points need to be mapped into the docker.

There are two ways to configure the `tao_pt` CLI wrapper:

1. Via the command line options
2. Via the mounts file (by default, at `~/.tao_mounts.json`)
#### <a name='Commandlineoptions'></a>Command line options
|**Option**|**Description**|**Default**|
| :-- | :-- | :-- |
|`gpus`| Comma separated GPU indices to be exposed to the docker | 1 |
|`volume`| Paths on the host machine to be exposed to the container. This is analogous to the `-v` option in the docker CLI. You may define multiple mount points by using the `--volume` option multiple times. | None |
|`env`| Environment variables to be defined inside the interactive container. You may set them as `--env VAR=<value>`. Multiple environment variables can be set by repeatedly defining the `--env` option. | None |
|`mounts_file`| Path to the mounts file, explained more in the next section. |`~/.tao_mounts.json`|
|`shm_size`| Shared memory size for docker in Bytes. | 16G |
|`run_as_user`| Flag to run as the default user account on the host machine. This helps with maintaining permissions for all directories and artifacts created by the container. | |
|`tag`| The tag value for the local dev docker. | None |
|`ulimit`| Docker ulimits for the host machine. | |
|`port`| Port mapping (e.g. 8889:8889) | None |
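Putting several of these options together, a hypothetical invocation mounting a dataset drive and an experiments directory might look like the following (all host paths here are placeholders, not defaults):

```sh
tao_pt --gpus all \
       --volume /mnt/datasets:/workspace/data \
       --volume /home/$USER/experiments:/workspace/results \
       --env PYTHONPATH=/tao-pt \
       --shm_size 16G \
       --run_as_user
```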
#### <a name='Usingthemountsfile'></a>Using the mounts file
The `tao_pt` CLI wrapper instance can be configured by using a mounts file. By default, the wrapper expects the mounts file to be at `~/.tao_mounts.json`. However, you may point the wrapper to a different file via the `--mounts_file` option.
The launcher config file consists of three sections:

* `Mounts`

The `Mounts` parameter defines the paths in the local machine that should be mapped to the docker. This is a list of `json` dictionaries containing the source path in the local machine and the destination path that is mapped for the CLI wrapper.
A sample config file containing two mount points and no docker options is shown below.
```json
{
    "Mounts": [
        {
            "source": "/path/to/your/experiments",
            "destination": "/workspace/tao-experiments"
        },
        {
            "source": "/path/to/config/files",
            "destination": "/workspace/tao-experiments/specs"
        }
    ]
}
```
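As a sketch, a file of this shape can be generated and sanity-checked from the shell before launching `tao_pt`. The paths are placeholders, and `MOUNTS_FILE` is made overridable here only so the snippet is easy to try without touching your real config:

```shell
#!/bin/sh
# Write a minimal mounts file and confirm it is valid JSON.
MOUNTS_FILE="${MOUNTS_FILE:-$HOME/.tao_mounts.json}"
cat > "$MOUNTS_FILE" <<'EOF'
{
    "Mounts": [
        {
            "source": "/path/to/your/experiments",
            "destination": "/workspace/tao-experiments"
        }
    ]
}
EOF
# python3 -m json.tool exits non-zero on malformed JSON, so this catches typos early.
python3 -m json.tool "$MOUNTS_FILE" > /dev/null && echo "mounts file OK"
```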
### <a name='Updatingthebasedocker'></a>Updating the base docker

There will be situations where developers would be required to update the third party dependencies to newer versions, or upgrade CUDA etc. In such a case, please follow the steps below:
```sh
cd $NV_TAO_PYTORCH_TOP/docker
```
#### <a name='Testthenewlybuiltbasedocker'></a>Test the newly built base docker

The build script tags the newly built base docker with the username of the account on the user's local machine. Therefore, developers may test their new docker by using the `tao_pt` command with the `--tag` option.
```sh
tao_pt --tag $USER -- script args
```
#### <a name='Updatethenewdocker'></a>Update the new docker
The TAO docker is built on top of the TAO PyTorch base dev docker, by building a python wheel for the `nvidia_tao_pyt` module in this repository and installing the wheel in the Dockerfile defined in `release/docker/Dockerfile`. The whole build process is captured in a single shell script which may be run as follows:
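A hypothetical sketch of that invocation is below; the directory and script name are guesses for illustration only, so consult `release/docker/` in the repository for the actual entry point and its options:

```sh
# Hypothetical sketch: the real build script and its flags live under
# release/docker/ in this repository.
cd $NV_TAO_PYTORCH_TOP/release/docker
./build.sh
```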