Software Installations#

Set Up Host (Developer) Machine#

This section explains how to install the Vitis AI tools on the host PC using Docker and then set up the cross-compiler environment for building applications. The Quickstart tutorial provides separate instructions for installing Vitis AI on the target platform. This release provides a pre-built Docker image for ease of use.

Download Source Code And Pre-Builts#

Follow these steps to download the source code and the pre-built packages:

  1. Download the source code vitis-ai-5.1.tar.

  2. Change the directory to the location of the downloaded file and extract it:

    $ cd <path-to-downloaded-vitis-ai-5.1.tar>
    $ tar xf vitis-ai-5.1.tar
    

    The extracted source code contains the following directory structure and folders:

    Vitis-AI
    ├── bin
    ├── docker
    ├── examples
    ├── npu_ip
    ├── src
    ├── tools
    └── README
    
    • bin: Includes the download.bash file for downloading the NPU IP and software package.

    • docker: Contains a Dockerfile and a script to download the Docker image from Docker Hub.

    • examples: Includes examples such as:

      • Reference Design: Builds the platform, creates the Vitis project, and generates an SD Card image for VEK280.

      • X+ML Application: Provides the source code for the X+ML application.

      • Python Examples: Contains Python scripts to download models like ResNet50 and generate snapshots.

    • npu_ip: Contains the settings.sh file. Sourcing this file starts the bin/download.bash script, which downloads all necessary packages and the NPU IP.

    • src: Contains the source code for VART ML and VART X components.

    • tools: Contains the scripts that install the NPU software packages and download the bitstream file.

    • README: Contains instructions to build the VEK280 platform and Vitis project.

  3. Navigate to the Vitis-AI directory and export the VITIS_AI_REPO variable:

    $ cd Vitis-AI
    $ export VITIS_AI_REPO=$PWD
    
  4. Select the NPU IP configuration by using the following commands:

    $ source npu_ip/settings.sh LIST
    

    This command shows the following list of supported NPU IPs:

    VE2202_NPU_IP_O05_A024_M1
    VE2302_NPU_IP_O00_A016_M1
    VE2302_NPU_IP_O00_A032_M1
    VE2602_NPU_IP_O00_A064_M1
    VE2802_NPU_IP_O00_A128_M3
    VE2802_NPU_IP_O00_A192_M3
    VE2802_NPU_IP_O00_A304_M1
    VE2802_NPU_IP_O00_A304_M3    (default)
    VE2802_NPU_IP_O16_A080_M3
    

    Note

    The VE2802_NPU_IP_O00_A304_M3 is the default NPU IP.

    The naming convention of the IPs is as follows:

    • [Target]_NPU_IP_O[AIE_Offset]_A[AIE_number]_M[DDRS_number]

    where:

    • Target: Chip for which the IP (AIE/RTL code) was generated

    • AIE_Offset: Leftmost AIE column from which the partition allocation starts

    • AIE_number: Number of AIEs used

    • DDRS_number: Number of external memories required by the IP
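
    For example, the default VE2802_NPU_IP_O00_A304_M3 targets the VE2802 device, starts the partition allocation at AIE column 0, uses 304 AIEs, and requires 3 external memories.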

  • For VE2802 NPU (38 columns) IP:

    $ source npu_ip/settings.sh VE2802_NPU_IP_O00_A304_M3
    

    This command enables the VE2802_NPU_IP_O00_A304_M3 performance IP.

    Note

    Run the following commands to enable snapshot generation for other NPU IP configurations or to disable the NPU software stack if required.

  • For VE2802 NPU (38 columns) IP using either 1 or 2 DDRs:

    $ source npu_ip/settings.sh VE2802_NPU_IP_O00_A304_M1
    

    Note

    The VE2802_NPU_IP_O00_A304_M1 NPU IP can be used for a design with the NPU connected to either 1 or 2 DDRs (used in interleaving mode).

  • For VE2802 NPU (24 columns) IP:

    $ source npu_ip/settings.sh VE2802_NPU_IP_O00_A192_M3
    
  • For VE2802 NPU (16 columns) IP:

    $ source npu_ip/settings.sh VE2802_NPU_IP_O00_A128_M3
    
  • For VE2602 performance NPU IP:

    $ source npu_ip/settings.sh VE2602_NPU_IP_O00_A064_M1
    
  • For VE2302 performance NPU IP (full configuration):

    $ source npu_ip/settings.sh VE2302_NPU_IP_O00_A032_M1
    
  • For VE2302 performance NPU IP (half configuration):

    $ source npu_ip/settings.sh VE2302_NPU_IP_O00_A016_M1
    
  • For VE2202 performance NPU IP:

    $ source npu_ip/settings.sh VE2202_NPU_IP_O05_A024_M1
    
  • To disable the NPU software stack when running inference on the CPU or GPU:

    $ source tools/VAI_NPU_SW/unset.sh
    

Note

  1. After completing this step, the NPU IP (either the default or the one selected) is downloaded to the Vitis-AI/npu_ip directory, and the NPU software package is downloaded to the Vitis-AI/tools/ directory. Both are required for the container environment.

  2. All NPU IP variants support INT8 and BF16 data types.
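
After this step, a quick sanity check is to list the two download locations (the exact file names depend on the NPU IP selected):

$ ls $VITIS_AI_REPO/npu_ip
$ ls $VITIS_AI_REPO/tools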

Set Up/Flash SD Card#

The following SD Card images can be downloaded for a Quick Start with the Vitis AI tool:

Note

  1. The SD Card images are shared as compressed (.gz) files because this saves time in tasks such as uploading, downloading, and copying. Uncompress the image before flashing the SD Card.

  2. The SD Card image with the highest performance configuration IP (V5.1_VE2802_NPU_IP_O00_A304_M3_sd_card.img.gz) is used as the reference for the SD card setup and the Quick Start Guide for Versal™ AI Edge VEK280 sections.

The SD card can be flashed using either a Windows or Linux machine.

On a Linux Machine#

Flash V5.1_VE2802_NPU_IP_O00_A304_M3_sd_card.img.gz with the following Linux commands. Insert the SD card into the Linux host machine and follow these steps:

Note

Use the sudo dmesg | tail command to find the SD card partition.

$ gunzip V5.1_VE2802_NPU_IP_O00_A304_M3_sd_card.img.gz
$ sudo dd bs=256M status=progress if=V5.1_VE2802_NPU_IP_O00_A304_M3_sd_card.img of=/dev/<sdcard_device>
$ sudo parted /dev/<sdcard_device> resizepart 2 100%
$ sudo e2fsck -f /dev/<sdcard_device_partition2>
$ sudo resize2fs /dev/<sdcard_device_partition2>

Note

The required version of the e2fsck tool is 1.47.2, and it can be downloaded from here.

These commands are used to write an image file to an SD card and then resize the filesystem to use all available space on the card. Here is a breakdown of what each command does:

Write the image to the SD card

$ sudo dd bs=256M status=progress if=V5.1_VE2802_NPU_IP_O00_A304_M3_sd_card.img of=/dev/<sdcard_device>

Resize partition 2 of the SD card to use all available space

$ sudo parted /dev/<sdcard_device> resizepart 2 100%

Check the filesystem

$ sudo e2fsck -f /dev/<sdcard_device_partition2>

Resize the filesystem

$ sudo resize2fs /dev/<sdcard_device_partition2>
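
To confirm the result, you can list the partitions on the card and their sizes (a quick check, assuming the card is still available as /dev/<sdcard_device>):

$ lsblk /dev/<sdcard_device>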

On a Windows Machine#

On a Windows machine, flash V5.1_VE2802_NPU_IP_O00_A304_M3_sd_card.img.gz with Windows utilities such as Raspberry Pi Imager, Balena Etcher, or Win32DiskImager.

Note

If the SD card flash fails, try formatting and re-flashing the SD card.

Install And Verify Docker#

If you are trying the Vitis AI tool for the first time, it is recommended to start with the Quick Start section and then revisit this section later.

Docker Installation and Verification#

  1. After ensuring that your host adheres to the prerequisites, refer to the official Docker documentation to install the Docker engine.

    Important

    The Docker daemon always runs as the root user. Non-root users should be added to the Docker group.

  2. Next, quickly test your Docker installation by running the following command. It downloads a test image from Docker Hub and runs it in a container. When the container runs successfully, it prints a “Hello World” message and exits.

    docker run hello-world
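
    # If the command above fails with a permission error, the current user is
    # probably not yet in the docker group. A common fix (assuming a standard
    # Docker installation) is:
    sudo usermod -aG docker $USER
    # Then log out and back in for the group change to take effect.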
    

Leverage the Pre-Built Docker Image#

  1. Run the following script; the Docker image is automatically fetched from Docker Hub:

    $ docker/run.bash
    

    After the license agreement is accepted, the script loads the image from Docker Hub.

    Note

    Loading the Docker image takes a few minutes on the first execution.

    The script launches an Ubuntu 24.04 container with all the necessary Python packages installed and mounts the Vitis-AI directory inside the container. The user inside the container has the same write access as the user outside of it and can therefore write to the Vitis-AI directory from within Docker.

    Inside the Docker container, the VITIS_AI_REPO environment variable points to the root directory of the Vitis AI repository.
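
    As a quick check from inside the container shell (assuming the default mount described above), you can confirm the variable and the mounted directory:

    $ echo $VITIS_AI_REPO
    $ ls $VITIS_AI_REPO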

    Note

    The Vitis AI repository must be located in a directory that the root user of the machine launching the Docker container can access; this is necessary to mount the Vitis AI repository inside the container. When the repository is stored on a local path of the server, there are no issues because the root user has access to it. However, if the repository is located on a remote drive, such as NFS, where not all users are granted access, this can lead to problems.

    The following command shows the usage information for the run script:

    $ ./docker/run.bash --help
    
    ######################################################################
    #   ./docker/run.bash - build and launch a VAISW image using docker
    ######################################################################
    
    Usage:
    
    ./docker/run.bash [docker_run_argument] -- [commands]
       -h,--help     this help
       --runOnly     skip steps and only run the image
       --fromImg     use a tgz file to build the docker image
       --noFromImg   build all the image from the dockerFile
       --noFromHub   don't use docker hub. This can be used to rebuild the base part of the docker image.
       --noFpga      do not check for FPGA
       --dockerfile  specify another docker file to build and run
       --useApptainer  use Apptainer instead of docker
       --acceptLicense do not prompt for the license agreement.
       Where:
          docker_run_argument are the arguments to pass to docker run command (like -v /path:/mount)
          commands            are the commands to run instead of having an interactive shell
    
       For instance:
    
       # to mount the local my_imagepath directory in /IMAGE
       ./docker/run.bash -v /home/my_user/DATA/my_image_path:/IMAGE
    
       # to run a command within the docker
       ./docker/run.bash -- /bin/bash '. npu_ip/settings.sh && make -C examples/python_examples/ssdResnet34 '
    
       # to run an httpd docker
       ./docker/run.bash --runOnly --name httpd httpd
    

    Note

    It is recommended to use venv to install new Python packages and to inherit from the Docker venv in order to keep access to the pre-installed packages.

    Use the following instructions to install new Python packages:

    # Create new VENV
    python3 -m venv myenv
    
    # Then add docker venv into the base created venv
    echo /usr/base_venv/lib/python3.12/site-packages > myenv/lib/python3.12/site-packages/base_venv.pth
    echo $( dirname $( dirname $( which python3 )))/lib/python3.12/site-packages >> myenv/lib/python3.12/site-packages/base_venv.pth
    
    # Activate the new venv
    . myenv/bin/activate
    
    # Install new Python packages:
    python3 -m pip install <custom_packages>
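
    # Optional check (not part of the original flow): confirm that the base
    # venv site-packages path is now visible on the Python path
    python3 -c "import sys; print('\n'.join(sys.path))"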
    

Set Up Cross-Compiler#

If you are trying the Vitis AI tool for the first time, it is recommended to start with the Quick Start section and then revisit this section later.

  1. Download the pre-built cross-compiler install script sdk-vai-5.1.sh.

  2. Run the following command to provide execution permissions to the sdk-vai-5.1.sh script:

    $ chmod a+x sdk-vai-5.1.sh
    
  3. Run the following command to install the cross-compilation system environment. Install it on your local host Linux system and not in the Docker container.

    $ ./sdk-vai-5.1.sh -d <path-to-your-sdk-installation>
    
  4. When the installation is complete, follow the prompts and execute this command:

    $ source <path-to-your-sdk-installation>/environment-setup-cortexa72-cortexa53-amd-linux
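
    # Optional check (assuming the SDK environment script sets $CC, as
    # Yocto-based SDKs typically do):
    $ echo $CC
    $ $CC --version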
    

Note

If you close the current terminal, you need to re-run the command in step 4 in the new terminal to set up the environment for the SDK.