Table of Contents
- Objective
- Installation
- Install the TVM
Objective
The objective of this post can be summarized as follows:
- Get TVM successfully installed and running.
Installation
Pre-install configuration
Since TVM may need the GPU, CUDA and cuDNN should be installed under the /usr/local/cuda path. A potential issue is that the user may have installed CUDA together with the GPU driver (i.e., via the runfile installer), in which case the CUDA path may not be /usr/local/cuda, making it difficult for TVM's config.cmake file to locate the CUDA and cuDNN paths.
Double-check the compatibility between the GPU driver, CUDA, and cuDNN versions on the NVIDIA website.
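If CUDA did end up outside /usr/local/cuda, one common workaround is to point a symlink at the actual install directory. A minimal sketch, assuming the runfile placed the toolkit in /usr/local/cuda-11.0 (adjust to your install):
ls /usr/local | grep cuda                        # see which cuda-* directories exist
sudo ln -sfn /usr/local/cuda-11.0 /usr/local/cuda  # hypothetical path; change to match your system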
CUDA
Download CUDA (e.g., v11) following the instructions on the NVIDIA site. Remember to choose deb (local)
if the OS is Ubuntu.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
However, the above wget command may fail due to a routing issue on the website's side: the connection to developer.download.nvidia.com gets redirected to developer.download.nvidia.cn, which then fails.
(base) elliot@HomeWS:~/Documents/CUDA$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin --no-check-certificate
--2021-03-09 21:44:14-- https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
Resolving developer.download.nvidia.com (developer.download.nvidia.com)... 45.43.38.237, 129.227.6.190, 129.227.6.189, ...
Connecting to developer.download.nvidia.com (developer.download.nvidia.com)|45.43.38.237|:443... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin [following]
--2021-03-09 21:44:16-- https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
Resolving developer.download.nvidia.cn (developer.download.nvidia.cn)... 192.229.211.70
Connecting to developer.download.nvidia.cn (developer.download.nvidia.cn)|192.229.211.70|:443... connected.
WARNING: no certificate subject alternative name matches
requested host name ‘developer.download.nvidia.cn’.
HTTP request sent, awaiting response... 404 Not Found
2021-03-09 21:44:17 ERROR 404: Not Found.
An alternative approach is to use a computer with a VPN to download the CUDA packages, then copy them to the target host:
$ scp cuda-repo-ubuntu2004-11-0-local_11.0.2-450.51.05-1_amd64.deb user@<IP-addr>:<target directory>
$ scp cuda-ubuntu2004.pin user@<IP-addr>:<target directory>
Then run the remaining commands:
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo dpkg -i cuda-repo-ubuntu2004-11-0-local_11.0.2-450.51.05-1_amd64.deb
sudo apt-key add /var/cuda-repo-ubuntu2004-11-0-local/7fa2af80.pub
sudo apt-get update
sudo apt-get -y install cuda
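After the install finishes, a quick sanity check is to query the driver and the toolkit (the exact version strings will depend on what was installed):
nvidia-smi                            # driver and GPU visibility
/usr/local/cuda/bin/nvcc --version    # CUDA toolkit version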
cuDNN
Download cuDNN from the NVIDIA website. As the download issue persists, use a computer with a VPN to download the cuDNN package, then copy it to the target host:
$ scp cudnn-9.0-linux-x64-v7.tgz user@<IP-addr>:<target directory>
Configuration
Follow the commands described on the NVIDIA website. Note that the command in the NVIDIA doc is wrong; the error and its solution can be found on GitHub:
$ tar -xzvf cudnn-9.0-linux-x64-v7.tgz
$ sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
$ sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
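To confirm the headers were copied correctly, the cuDNN version macros can be read back from the installed header (a quick optional check, not required by the install):
$ grep -A 2 CUDNN_MAJOR /usr/local/cuda/include/cudnn.h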
Insert the following lines into ~/.bashrc:
export CUDA_HOME=/usr/local/cuda
export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
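Then reload the shell configuration and check that the variables resolve as expected:
source ~/.bashrc
echo $CUDA_HOME        # should print /usr/local/cuda
which nvcc             # should resolve to /usr/local/cuda/bin/nvcc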
Install the TVM
Here I choose to install TVM from source, using a conda environment for the Python dependencies.
git clone --recursive https://github.com/apache/tvm tvm
cd tvm
# Install the dependencies
sudo apt-get update
sudo apt-get install -y python3 python3-dev python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake
# Install the LLVM
sudo apt-get install llvm-11 # check the llvm version w.r.t linux dist
dpkg -L llvm-11 # get the llvm-11 installed path
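# (optional) filter for the llvm-config binary directly; on Ubuntu the llvm-11 package
# typically places it under /usr/lib/llvm-11/bin/
dpkg -L llvm-11 | grep bin/llvm-config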
cd <path-to-tvm>/tvm
mkdir build
cp cmake/config.cmake build
Update the configurations in config.cmake, for example the LLVM path:
# set the path of llvm-config
set(USE_LLVM /usr/lib/llvm-11/bin/llvm-config)
set(USE_MICRO ON) # turn this on otherwise google_test unitest will fail
# https://discuss.tvm.apache.org/t/crttest-failed-no-rules-to-make-the-target-crttest/8450
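# Since TVM is intended to use the GPU here, also enable CUDA and cuDNN.
# These are standard config.cmake flags; they pick up the /usr/local/cuda path set up above.
set(USE_CUDA ON)
set(USE_CUDNN ON)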
Then build TVM:
cd build
cmake ..
make -j4
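If the build succeeds, the shared libraries should appear in the build directory (names based on the standard TVM build output):
ls libtvm.so libtvm_runtime.so   # run inside the build directory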
Create a conda environment for the TVM
# Create a conda environment with the dependencies specified by the yaml
conda env create --file conda/build-environment.yaml
# Activate the created environment
conda activate tvm-build
In order to use the TVM Python package after the conda environment is configured, add the following lines to ~/.bashrc:
export TVM_HOME=/path/to/tvm
export PYTHONPATH=$TVM_HOME/python:${PYTHONPATH}
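A quick way to verify that the Python package is on the path (run inside the tvm-build environment; the exact version string depends on the checkout):
source ~/.bashrc
python -c "import tvm; print(tvm.__version__)"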
Install some other packages
conda activate tvm-build
# pip can install under conda env
# Necessary dependencies:
pip3 install --user numpy decorator attrs # --user is not necessary if using conda or virtualenv
pip3 install numpy decorator attrs
# if you want to use the RPC feature
pip3 install --user tornado
pip3 install tornado
# if you want to use the auto-tuning module
pip3 install --user tornado psutil xgboost cloudpickle
pip3 install tornado psutil xgboost cloudpickle
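For reference, the RPC support installed above is typically exercised by starting a tracker, roughly as follows (the host and port here are arbitrary examples):
python -m tvm.exec.rpc_tracker --host=0.0.0.0 --port=9190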
Install the testing framework (googletest)
# make sure you are not inside the tvm directory
git clone https://github.com/google/googletest
cd googletest
mkdir build
cd build
cmake ..
make
make install # may need sudo permission
Go back to the tvm directory and run the C++ unit tests:
cd tvm
./tests/scripts/task_cpp_unittest.sh
Webpages to check
- https://tvm.apache.org/docs//vta/tutorials/index.html
- https://krantz-xrf.github.io/2019/10/24/tvm-workflow.html
- Compile and install TVM (编译安装TVM)