Namespace vart#

namespace vart#

Typedefs

typedef struct vart::npu_tensor npu_tensor_t#

Structure of a tensor.

This structure contains all the parameters needed to define a tensor.

Enums

enum DataType#

Enum listing the different data types supported by the VART API.

Values:

enumerator INT8#
enumerator FLOAT32#
enumerator UINT8#
enumerator BF16#
enumerator INT64#
enumerator UNKNOWN#
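When sizing tensor buffers, it is useful to map each data type to its element width. The helper below is an illustration only, not part of the VART API; it mirrors the enumerators listed above in a local enum so the example is self-contained (application code would include the VART header and use `vart::DataType` directly).

```cpp
#include <cassert>
#include <cstddef>

// Local mirror of the vart::DataType enumerators listed above, so this
// example compiles on its own. Not part of the VART API.
enum class DataType { INT8, FLOAT32, UINT8, BF16, INT64, UNKNOWN };

// Returns the size in bytes of one element of the given type (0 for UNKNOWN).
std::size_t element_size(DataType t) {
    switch (t) {
        case DataType::INT8:
        case DataType::UINT8:   return 1;
        case DataType::BF16:    return 2;  // bfloat16: 16-bit brain float
        case DataType::FLOAT32: return 4;
        case DataType::INT64:   return 8;
        default:                return 0;  // UNKNOWN
    }
}
```

A buffer for an N-element tensor then needs `N * element_size(type)` bytes.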

Variables

static const int sys_log_level[] = {0, LOG_ERR, LOG_WARNING, LOG_NOTICE, LOG_INFO, LOG_DEBUG}#
static Logger &obj = Logger::get_instance()#
template<typename InputType, typename OutputType>
class BaseRunner

Subclassed by Runner

class Device#
#include <vart_device.hpp>

This module manages the hardware context and the loading of the xclbin onto the device.

Any module utilizing hardware acceleration requires an instance of the Device class. Please check the API documentation for more information.

class InferResult : public std::enable_shared_from_this<InferResult>#
#include <vart_inferresult.hpp>

This module is used to represent inference results.

Presently, the default supported types include classification and detection. Users can integrate new types by overriding base class methods to incorporate custom inference results. For additional information, please check the API documentation.
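The subclass-and-override pattern described above can be sketched as follows. The actual vart::InferResult interface is not reproduced here; the `Result` base class, `to_string()` hook, and `KeypointResult` type are hypothetical stand-ins that show only the shape of a custom result type.

```cpp
#include <cassert>
#include <memory>
#include <string>

// Hypothetical minimal base mirroring the pattern described above: a result
// type with shared ownership and a virtual hook to override. The real
// vart::InferResult API may differ; consult the API documentation.
class Result : public std::enable_shared_from_this<Result> {
public:
    virtual ~Result() = default;
    virtual std::string to_string() const { return "generic result"; }
};

// A custom result type integrated by overriding the base class method.
class KeypointResult : public Result {
public:
    explicit KeypointResult(int n) : num_keypoints_(n) {}
    std::string to_string() const override {
        return "keypoints: " + std::to_string(num_keypoints_);
    }
private:
    int num_keypoints_;
};
```

Usage: `auto r = std::make_shared<KeypointResult>(17);` then treat `r` as a `std::shared_ptr<Result>` anywhere results are consumed.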

class Logger
#include <vart_logger.hpp>

The logger module provides the logging support for VART modules.

Supports logging to console, file, and syslog.

class Memory#
#include <vart_memory.hpp>

This module is responsible for allocating and managing memory on the device.

class MetaConvert#
#include <vart_metaconvert.hpp>

This module facilitates the conversion of Infer metadata into a format compatible with the overlay module.

MetaConvert also accepts configuration parameters as a JSON string, which provides further flexibility in configuring overlay information such as line thickness, font size, font type, etc. Please check the API documentation for more information. Additionally, if users have custom metadata, they can convert it into a format suitable for processing by the overlay module by overriding the base class.
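A configuration string of the kind described might look like the fragment below. The key names are illustrative assumptions only; the keys actually accepted by MetaConvert are listed in the API documentation.

```json
{
    "line_thickness": 2,
    "font_size": 0.5,
    "font_type": 0
}
```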

struct npu_tensor#
#include <runner.h>

Structure of a tensor.

This structure contains all the parameters needed to define a tensor.

class Overlay#
#include <vart_overlay.hpp>

This module facilitates the overlay of annotations onto the video frame. Currently, Overlay uses the OpenCV library to draw on frames, which is software based.

Overlay supports drawing bounding boxes, text, lines, arrows, circles, and polygons on frames. Applications can also incorporate a custom implementation using the base class.

class PLKernel#
class PostProcess#
#include <vart_postprocess.hpp>

This module performs additional computations on output tensor data from the NPU to produce a more meaningful interpretation.

Post-processing supports YOLOv2, ResNet50, and SSD-ResNet34 by default; please check the API documentation for usage and additional information. If an application requires custom post-processing, it can override the base class methods.
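For a classification model such as ResNet50, post-processing typically amounts to a softmax over the output logits followed by a top-k selection. The sketch below shows that computation in isolation; it is a generic illustration of the technique, not the PostProcess module's actual implementation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Softmax over raw logits, numerically stabilized by subtracting the max
// logit before exponentiation.
std::vector<float> softmax(const std::vector<float>& logits) {
    const float max_logit = *std::max_element(logits.begin(), logits.end());
    std::vector<float> probs(logits.size());
    float sum = 0.0f;
    for (std::size_t i = 0; i < logits.size(); ++i) {
        probs[i] = std::exp(logits[i] - max_logit);
        sum += probs[i];
    }
    for (auto& p : probs) p /= sum;  // normalize so probabilities sum to 1
    return probs;
}

// Index of the most probable class (top-1).
std::size_t top1(const std::vector<float>& probs) {
    return std::distance(probs.begin(),
                         std::max_element(probs.begin(), probs.end()));
}
```

The top-1 index is then mapped to a class label via the model's label file.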

class PreProcess#
#include <vart_preprocess.hpp>

The preprocessing module handles data preparation tasks such as normalization, scaling, and video format conversion.

This module supports software-based pre-processing as well as hardware-accelerated pre-processing for optimized performance. It ensures that input data is appropriately formatted for inference. Applications can also incorporate custom pre-processing by overriding base class methods.
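As an illustration of the normalization step mentioned above, the following converts 8-bit pixel values to floats using per-channel mean and scale. This is a generic sketch, not the PreProcess implementation; a model's actual mean and scale values come from its training configuration.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <vector>

// Normalize interleaved RGB uint8 pixels: out[i] = (in[i] - mean[c]) * scale[c],
// where c is the channel index. Mean and scale values are model-specific.
std::vector<float> normalize_rgb(const std::vector<std::uint8_t>& pixels,
                                 const float mean[3], const float scale[3]) {
    std::vector<float> out(pixels.size());
    for (std::size_t i = 0; i < pixels.size(); ++i) {
        const std::size_t c = i % 3;  // channel index for interleaved RGB
        out[i] = (static_cast<float>(pixels[i]) - mean[c]) * scale[c];
    }
    return out;
}
```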

class Runner : public vart::BaseRunner<const void**, void**>
#include <runner.h>

Runner class; provides the API to use the runner.

The runner instance provides member functions to control execution and to access its input and output tensors.

Sample code:

// This example assumes that you have a snapshot stored in the model_path.
// The way to create a runner to run the snapshot is shown below.

// create runner
auto runner = vart::Runner::create_runner(model_path, in_shape_format, out_shape_format);
// get input tensors
auto input_tensors = runner->get_input_tensors();
// get output tensors
auto output_tensors = runner->get_output_tensors();
// run runner
auto v = runner->execute_async(input, output);
auto status = runner->wait((int)v.first, 1000000000);

class VideoFrame#
#include <vart_videoframe.hpp>

This module simplifies frame memory management and provides APIs for reading and writing a frame.

The VideoFrame class also allows applications to wrap their own memory in a VideoFrame. In such instances, the application bears the responsibility for deallocating the frame memory.
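The ownership arrangement described above, where the application remains responsible for memory it hands to a frame wrapper, is commonly encoded with a custom deleter. The sketch below shows that generic C++ pattern in isolation; it does not reproduce the VideoFrame API.

```cpp
#include <cassert>
#include <cstdlib>
#include <memory>

// Allocate an application-owned buffer whose lifetime is tracked by a
// shared_ptr. The custom deleter frees the buffer when the last reference
// is dropped. This is a generic illustration, not the VideoFrame API.
std::shared_ptr<unsigned char> make_app_owned_buffer(std::size_t size) {
    auto* raw = static_cast<unsigned char*>(std::malloc(size));
    return std::shared_ptr<unsigned char>(raw, [](unsigned char* p) {
        std::free(p);  // application-side deallocation
    });
}
```

With VART, the same principle applies: memory the application wraps into a frame must be released by the application once the frame no longer uses it.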