Class vart::PostProcess#

class PostProcess

This module performs additional computations on the output tensor data from the NPU to generate a more meaningful interpretation of the results.

Post-processing supports YOLOv2, ResNet50, and SSD-ResNet34 by default; check the API documentation for usage and additional information. If an application requires custom post-processing, it can override the base class methods.

Public Functions

PostProcess() = delete#
PostProcess(PostProcessType postprocess_type, std::string &json_data, std::shared_ptr<Device> device)#

PostProcess() - Constructor for using an existing post-process implementation.

Parameters:
  • postprocess_type – Enum class to specify which implementation to instantiate

  • json_data – JSON config string based on the implementation class

  • device – Device handle to be used by implementations
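
A minimal usage sketch for this constructor follows. The enum value, the JSON keys, and the way the Device handle is obtained are illustrative assumptions, not confirmed API; consult the API documentation for the actual values.

    // Illustrative sketch: construct a built-in post-processor.
    // PostProcessType::YOLOV2 and the JSON keys below are assumed
    // names for illustration only.
    #include <memory>
    #include <string>

    std::shared_ptr<vart::Device> device = get_device(); // hypothetical helper
    std::string json_data =
        R"({"conf_threshold": 0.5, "nms_threshold": 0.45})"; // assumed config keys
    vart::PostProcess post(vart::PostProcessType::YOLOV2, json_data, device);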

PostProcess(std::shared_ptr<PostProcessImplBase> ptr)#

PostProcess() - Constructor for using a user-defined implementation.

Parameters:
  • ptr – Pointer to the user's implementation instance
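
A sketch of supplying a user-defined implementation is shown below. The exact virtual interface of PostProcessImplBase is not documented in this section, so the overridden methods here mirror the public PostProcess methods and are assumptions.

    // Illustrative sketch: a user-defined post-process implementation.
    // The overridden signatures mirror the public PostProcess interface;
    // the real PostProcessImplBase interface may differ.
    class MyPostProcess : public vart::PostProcessImplBase {
    public:
      void set_config(std::vector<vart::TensorInfo> &info,
                      uint32_t batch_size) override { /* cache tensor info */ }
      std::vector<std::vector<std::shared_ptr<vart::InferResult>>>
      process(std::vector<int8_t*> data, uint32_t current_batch_size) override {
        /* parse raw tensors into InferResult objects */
        return {};
      }
    };

    auto impl = std::make_shared<MyPostProcess>();
    vart::PostProcess post(impl);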

const std::shared_ptr<PostProcessImplBase> &get_pimpl_handle() const#

get_pimpl_handle() - Gets the pointer to the implementation class.

Returns:

A constant reference to the shared pointer holding the implementation instance.

void set_config(std::vector<TensorInfo> &info, uint32_t batch_size)#

set_config() - Set the PostProcessInfo configuration data before starting post-processing.

Use this method to set the batch size and the per-tensor information required to parse/process the ML network output. Call this method before the first call to the process method.

Parameters:
  • info – Vector of TensorInfo entries to be set.

  • batch_size – Supported batch size.
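
A sketch of a typical configuration call follows; how the TensorInfo entries are populated depends on your model and runtime, and the helper shown is hypothetical.

    // Illustrative sketch: configure before the first process() call.
    // query_output_tensor_info() is a hypothetical helper standing in
    // for however your runtime exposes output tensor metadata.
    std::vector<vart::TensorInfo> info = query_output_tensor_info();
    uint32_t batch_size = 4; // maximum batch size the pipeline will use
    post.set_config(info, batch_size);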

std::vector<std::vector<std::shared_ptr<InferResult>>> process(std::vector<int8_t*> data, uint32_t current_batch_size)#

process() - Process/parse the tensor data from the ML network output to create inference results.

Parameters:
  • data – Vector of tensor data pointers. Each tensor holds the data for the entire batch of images.

  • current_batch_size – Number of inputs in the current batch

Returns:

Vector of inference result objects for every image in the batch.
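
A sketch of invoking this overload on raw output buffers; the buffer names are placeholders, and the InferResult accessors depend on the concrete implementation.

    // Illustrative sketch: post-process raw NPU output buffers.
    // One int8_t* per output tensor; each buffer holds the whole batch.
    std::vector<int8_t*> data = { tensor0_buf, tensor1_buf }; // placeholder buffers
    uint32_t current_batch_size = 2;
    auto results = post.process(data, current_batch_size);
    // results[i] holds the InferResult objects for image i of the batch.
    for (const auto &image_results : results) {
      for (const auto &res : image_results) {
        // Inspect each InferResult (accessors are implementation-specific).
      }
    }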

std::vector<std::vector<std::shared_ptr<InferResult>>> process(std::vector<std::vector<std::shared_ptr<vart::Memory>>> tensor_memory, uint32_t current_batch_size)#

process() - Process/parse the tensor data from the ML network output to create inference results.

Parameters:
  • tensor_memory – Vector of vart::Memory pointers. Each vart::Memory contains one tensor; the total number of tensors equals current_batch_size times the number of tensors in each batch.

  • current_batch_size – Number of inputs in the current batch

Returns:

Vector of inference result objects for every image in the batch.
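
A sketch of the vart::Memory overload; how the Memory handles are collected from the inference runtime, and the exact nesting order of the two vectors, should be confirmed against the API documentation.

    // Illustrative sketch: post-process device memory handles.
    // collect_output_memory() is a hypothetical helper standing in for
    // however your runtime hands back output buffers; each vart::Memory
    // wraps one tensor (see the tensor_memory parameter notes above).
    std::vector<std::vector<std::shared_ptr<vart::Memory>>> tensor_memory =
        collect_output_memory();
    auto results = post.process(tensor_memory, current_batch_size);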

Private Members

std::shared_ptr<PostProcessImplBase> pimpl#