Before reading this developer guide for bolt, we strongly recommend reading the code architecture document first. It gives you a deep understanding of the overall design of bolt, which helps you develop bolt more efficiently. If you want to verify your model quickly, you can use the out-of-the-box C API or Java API to run inference and check the result. If your model works on time series data, you can use Flow to accelerate the inference. Finally, if you encounter unsupported operators while converting or running your model, you can add them step by step as described in detail below.

Contents


    Use the out-of-the-box API to infer your model
        C API
        Java API
    Accelerate time series models with Flow
    Customize models with unsupported operators step by step
        model conversion customization
        tensor computing customization
        inference engine customization
    How to contribute
        submit issue
        pull request

Use the out-of-the-box API to infer your model


C API

Bolt provides a C API document generated by doxygen to help you use the C API, together with an image classification example and a Chinese input method example. You can compile bolt and link the libbolt.so library into your C/C++ project.
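
For orientation, here is a minimal sketch of the typical C API call sequence. The function names follow the doxygen document, but the argument lists are abbreviated and the model path and affinity constant are placeholders; check the generated API document and the image classification example for the exact signatures.

    // Minimal sketch of the C API call order (argument lists abbreviated;
    // see the doxygen C API document for the exact signatures).
    #include "bolt.h"    // C API header, link against libbolt.so

    int main()
    {
        // 1. Create the model from a converted .bolt file (path is a placeholder).
        ModelHandle model = CreateModel("./model_f16.bolt", CPU_HIGH_PERFORMANCE, NULL);

        // 2. Describe the input tensors (names, dimensions, data type/format):
        //    PrepareModel(model, num_inputs, names, n, c, h, w, dt, df);

        // 3. Allocate a result handle, then feed the input buffers and run:
        ResultHandle result = AllocAllResultHandle(model);
        //    RunModel(model, result, num_inputs, names, input_buffers);

        // 4. Read the outputs through the result handle, then release everything.
        FreeResultHandle(result);
        DestroyModel(model);
        return 0;
    }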

Java API

Bolt provides a Java API document generated by doxygen, together with a detailed example. You can compile bolt and load libBoltModel.so through the Java Native Interface (JNI) in your Java project.

Accelerate time series models with Flow


Flow provides an API document generated by doxygen to help you use the Flow C++ header, together with examples (tinybert, faceSR, ASR). You can also use the Java API; there is a simple GSR test.

Here are the steps to use Flow:

  • Use the predefined Flow protobuf standard to define a graph

    Here is an example graph file for the CV application faceSR: flow_facesr.prototxt. This graph has one input, one input node, one inference node and one output. An input node must be marked as Input, and an inference node must be marked as Inference. Each node can have multiple input or output tensors, and each node type has its own typical fields.

  • Add an output tensor size inference function for each node, and register the function with the Flow function manager (optional)

    facesr does not need any custom handling of the final tensor, so the node's output tensor can be used directly.

    If you do need such a function, refer to flow_tinybert, which defines an output size inference function (tinybertInferOutputSize) and registers it with the flowRegisterFunction API.

  • Add an input tensor pre-processing function for each node, and register the function with the Flow function manager (optional)

    (same procedure as for the output tensor size inference function)

  • Add an output tensor post-processing function for each node, and register the function with the Flow function manager (optional)

    (same procedure as for the output tensor size inference function)

  • Define a Flow object and add tasks

    Declare a Flow object and set the CPU cores and GPU usage. Describe the task in the Task format and use the enqueue API to add the task to the Flow heterogeneous executor.

  • Get the Flow processing results

    Use the dequeue API to get the results in FIFO order. You can choose to dequeue in blocking mode to get all enqueued task results at the same time. The size function can be used to query the number of unfinished tasks. A minimal end-to-end sketch of these steps follows.
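
Below is a minimal sketch of the whole sequence, assuming the Flow C++ API as described in the doxygen document (flowRegisterFunction, init, enqueue, dequeue, size); the signatures are abbreviated and the graph path, function name and function body are placeholders, so consult the document and the tinybert/faceSR examples for the exact declarations.

    // Minimal Flow usage sketch (signatures abbreviated; names marked as
    // placeholders are illustrative, not taken from the real examples).
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>
    #include "flow.h"

    // Output tensor size inference function for one node (placeholder body);
    // the graph prototxt refers to it by the registered name.
    EE exampleInferOutputSize(std::map<std::string, std::shared_ptr<Tensor>> &inputs,
        std::shared_ptr<Tensor> &tmp,
        std::map<std::string, std::shared_ptr<Tensor>> &outputs,
        std::vector<std::string> parameter)
    {
        // resize the output tensors here from the input descriptions
        return SUCCESS;
    }

    int main()
    {
        // Register the function with the Flow function manager.
        flowRegisterFunction("exampleInferOutputSize", exampleInferOutputSize);

        // Declare a Flow object and set CPU cores and GPU usage.
        Flow flow;
        std::vector<std::string> graphs = {"./flow_example.prototxt"};  // placeholder
        flow.init(graphs, DT_F32, AFFINITY_CPU_HIGH_PERFORMANCE, 2, false);

        // Describe the task and add it to the heterogeneous executor.
        std::map<std::string, std::shared_ptr<Tensor>> data;  // fill input tensors
        Task task("./flow_example.prototxt", data);
        flow.enqueue(task);

        // Get the results in FIFO order; blocking mode waits for all results.
        std::vector<Task> results = flow.dequeue(true);
        // flow.size() reports the number of unfinished tasks.
        return 0;
    }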

Customize models with unsupported operators step by step


model conversion customization

In model_tools, you can define any operator for model conversion.

  1. Switch to the code of the specific framework (caffe/onnx/tflite) you are working on;
  2. Decide whether the operator is a weight-op or a non-weight-op;
  3. Define the Operator parameter format;
  4. Extract the meta information of the operator;
  5. Extract the weight data if the operator is a weight-op, otherwise skip this step.

  6. Example: support pooling in caffe converter

    1. Switch to model_tools/src/caffe, which is the caffe converter for bolt;

    2. Judgment: pooling is non-weight-op.

    3. Define pooling parameter format.

      3.1 Modify OperatorType data structure in common/uni/include/operator_type.h

      typedef enum {
      ...
          OT_Pooling,    // Addition
      ...
      } OperatorType;
      

      3.2 Modify the inline const char* const* OperatorTypeName() function in common/uni/include/operator_type.h

      inline const char* const* OperatorTypeName() {
          static const char* const names[] = {
              ...
              "OT_Pooling",    // Addition; keep the same order as OperatorType
              ...
          };
          return names;
      }
      

      3.3 Add the bolt pooling parameter definition in common/uni/include/parameter_spec.h

      // Addition ======>
      typedef struct {
          unsigned int kernel_t;
          unsigned int kernel_h;
          unsigned int kernel_w;
          unsigned int stride_t;
          unsigned int stride_h;
          unsigned int stride_w;
          unsigned int padding_before;
          unsigned int padding_after;
          unsigned int padding_top;
          unsigned int padding_bottom;
          unsigned int padding_left;
          unsigned int padding_right;
          RoundMode rm;
          PoolingMode mode;
      } PoolingParamSpec;
      // <====== Addition
      

      3.4 Modify int get_operator_parameter_size(OperatorType operatorType) function in common/uni/include/parameter_spec.h

      std::map<OperatorType, int> operatorParameterSizeMap = {
          ...
          {OT_Pooling, sizeof(PoolingParamSpec)},    // Addition
      };
      
    4. Extract the meta information of the pooling operator in caffe.

      4.1 Modify OperatorType convert_caffe_type(std::string inputType) function in model_tools/src/caffe/caffe_adaptee.h.

      Add the caffe type mapping code as follows:

      OperatorType convert_caffe_type(std::string inputType) {
          // Addition ======>
          if (inputType == "Pooling") {
              return OT_Pooling;
          }    // <====== Addition
          else if (inputType == "Convolution") {
              ...
          }
      }
      

      4.2 Register the abstract adapt_Pooling() function in class ModelAdaptee in model_tools/src/model_adaptee.h if it has not been registered yet; otherwise, skip this step.

      virtual EE adapt_operator(OperatorType type, ParameterSpec *ps) {
          ...
          // Addition ======>
          else if (type == OT_Pooling) {
              *ps = adapt_Pooling();
          }
          // <====== Addition
          ...
      }
      
      // Addition ======>
      REGISTER_EMPTY_ADAPT_OPERATOR(adapt_Pooling)
      // <====== Addition
      

      4.3 Extract the meta information of the pooling operator from the caffe model: add a ParameterSpec adapt_Pooling() override function in model_tools/src/caffe/caffe_adaptee.h.

      // Addition ======>
      ParameterSpec adapt_Pooling() override
      {
          ParameterSpec curPs;
          memset(&curPs, 0, sizeof(curPs));
          PoolingParamSpec pps;
          memset(&pps, 0, sizeof(pps));
          pps.kernel_t = 1;
          pps.stride_t = 1;
          pps.padding_before = 0;
          pps.padding_after = 0;
          if (layer.pooling_param().has_kernel_w() && layer.pooling_param().has_kernel_h()) {
              pps.kernel_w = layer.pooling_param().kernel_w();
              pps.kernel_h = layer.pooling_param().kernel_h();
          } else {
              pps.kernel_h = layer.pooling_param().kernel_size();
              pps.kernel_w = pps.kernel_h;
          }
          if (layer.pooling_param().has_stride_w() && layer.pooling_param().has_stride_h()) {
              pps.stride_w = layer.pooling_param().stride_w();
              pps.stride_h = layer.pooling_param().stride_h();
          } else {
              pps.stride_h = layer.pooling_param().stride();
              pps.stride_w = pps.stride_h;
          }
          bool global_pooling = layer.pooling_param().global_pooling();
          if (global_pooling) {
              pps.kernel_h = 0;
              pps.kernel_w = 0;
              pps.stride_h = 1;
              pps.stride_w = 1;
          } else {
              CHECK_REQUIREMENT(pps.kernel_h > 0);
          }
          if (layer.pooling_param().has_pad_w() && layer.pooling_param().has_pad_h()) {
              pps.padding_left = layer.pooling_param().pad_w();
              pps.padding_right = pps.padding_left;
              pps.padding_top = layer.pooling_param().pad_h();
              pps.padding_bottom = pps.padding_top;
          } else {
              pps.padding_top = layer.pooling_param().has_pad() ? layer.pooling_param().pad() : 0;
              pps.padding_bottom = pps.padding_top;
              pps.padding_left = pps.padding_top;
              pps.padding_right = pps.padding_top;
          }
      
          if (layer.pooling_param().has_round_mode() && layer.pooling_param().round_mode() == 1) {
              pps.rm = FLOOR;
          } else {
              pps.rm = CEIL;
          }
          auto op = layer.pooling_param().pool();
          switch (op) {
              case caffe::PoolingParameter_PoolMethod_MAX: {
                  pps.mode = POOLING_MAX;
                  break;
              }
              case caffe::PoolingParameter_PoolMethod_AVE: {
                  pps.mode = POOLING_MEAN;
                  break;
              }
              default: {
                  const google::protobuf::EnumDescriptor *descriptor =
                      caffe::PoolingParameter::PoolMethod_descriptor();
                  UNI_ERROR_LOG("can not map operator name:%s %s to Pooling.\n",
                      this->layer.name().c_str(), descriptor->FindValueByNumber(op)->name().c_str());
              }
          }
          curPs.pooling_spec = pps;
          return curPs;
      }     
      // <====== Addition
      
    5. Pooling is a non-weight op, so this step is skipped.

  7. Example: support pooling in onnx converter

    1. Switch to model_tools/src/onnx, which is the onnx converter for bolt;

    2. Judgment: pooling is non-weight-op;

    3. Define pooling parameter format.

      Note: The parameter definition is the same as step 3 of the caffe converter example above; refer to that section.

    4. Extract the meta information of the pooling operator in onnx.

      4.1 Modify the OperatorType convert_onnx_type(std::string inputType) function in model_tools/src/onnx/onnx_adaptee.h.

      Add the onnx type mapping code as follows:

      OperatorType convert_onnx_type(std::string inputType) {
          // Addition ======>
          if (inputType == "AveragePool" || inputType == "MaxPool" || inputType == "GlobalAveragePool") {
              return OT_Pooling;
          } // <====== Addition
          else if (inputType == "Conv") {
              ...
          }
      }
      

      4.2 Register the abstract adapt_Pooling() function in class ModelAdaptee in model_tools/src/model_adaptee.h if it has not been registered yet; otherwise, skip this step.

      virtual EE adapt_operator(OperatorType type, ParameterSpec *ps) {
          ...
          // Addition ======>
          else if (type == OT_Pooling) {
              *ps = adapt_Pooling();
          }
          // <====== Addition
          ...
      }
      
      // Addition ======>
      REGISTER_EMPTY_ADAPT_OPERATOR(adapt_Pooling)
      // <====== Addition
      

      4.3 Extract the meta information of the pooling operator from the onnx model: add a ParameterSpec adapt_Pooling() override function in model_tools/src/onnx/onnx_adaptee.h.

      // Addition ======>
      ParameterSpec adapt_Pooling() override
      {
          ParameterSpec curPs;
          memset(&curPs, 0, sizeof(curPs));
          PoolingParamSpec pps;
          memset(&pps, 0, sizeof(pps));
          std::string autoPad = get_node_str_attribute_by_name(node, "auto_pad");  // deprecated
          std::vector<int> kernelShape = get_node_vector_ints_attribute_by_name(node, "kernel_shape");
          std::vector<int> strides = get_node_vector_ints_attribute_by_name(node, "strides");
          std::vector<int> pads = get_node_vector_ints_attribute_by_name(node, "pads");
      
          if (op == "AveragePool" || op == "ReduceMean" || op == "GlobalAveragePool") {
              pps.mode = POOLING_MEAN;
          } else {
              pps.mode = POOLING_MAX;
          }
      
          if (autoPad == "SAME_UPPER") {
              pps.rm = CEIL;
          } else {
              pps.rm = FLOOR;
          }
      
          pps.kernel_t = 0;
          pps.kernel_h = 0;
          pps.kernel_w = 0;
          if (kernelShape.size() == 3) {
              pps.kernel_t = kernelShape[0];
              pps.kernel_h = kernelShape[1];
              pps.kernel_w = kernelShape[2];
          } else if (kernelShape.size() == 2) {
              pps.kernel_t = 1;
              pps.kernel_h = kernelShape[0];
              pps.kernel_w = kernelShape[1];
          } else if (kernelShape.size() == 1) {
              pps.kernel_t = 1;
              pps.kernel_h = kernelShape[0];
              pps.kernel_w = 1;
          }
      
          pps.stride_t = 1;
          pps.stride_h = 1;
          pps.stride_w = 1;
          if (strides.size() == 3) {
              pps.stride_t = strides[0];
              pps.stride_h = strides[1];
              pps.stride_w = strides[2];
          } else if (strides.size() == 2) {
              pps.stride_h = strides[0];
              pps.stride_w = strides[1];
          } else if (strides.size() == 1) {
              pps.stride_h = strides[0];
          }
      
          pps.padding_before = 0;
          pps.padding_top = 0;
          pps.padding_left = 0;
          pps.padding_after = 0;
          pps.padding_bottom = 0;
          pps.padding_right = 0;
          if (pads.size() == 6) {
              pps.padding_before = pads[0];
              pps.padding_top = pads[1];
              pps.padding_left = pads[2];
              pps.padding_after = pads[3];
              pps.padding_bottom = pads[4];
              pps.padding_right = pads[5];
          } else if (pads.size() == 4) {
              pps.padding_top = pads[0];
              pps.padding_left = pads[1];
              pps.padding_bottom = pads[2];
              pps.padding_right = pads[3];
          } else if (pads.size() == 2) {
              pps.padding_top = pads[0];
              pps.padding_bottom = pads[1];
          }
          curPs.pooling_spec = pps;
          return curPs;
      }
      // <====== Addition
      
    5. Pooling is a non-weight op, so this step is skipped.

  8. Example: support pooling in tflite converter

    1. Switch to model_tools/src/tflite, which is the tflite converter for bolt;

    2. Judgment: pooling is non-weight-op;

    3. Define pooling parameter format;

      Note: The parameter definition is the same as step 3 of the caffe converter example above; refer to that section.

    4. Extract the meta information of the pooling operator in tflite.

      4.1 Modify the OperatorType convert_tflite_type(tflite::BuiltinOperator tfliteType) function in model_tools/src/tflite/tflite_adaptee.h.

      Add the tflite type mapping code as follows:

      OperatorType convert_tflite_type(tflite::BuiltinOperator tfliteType) {
          // Addition ======>
          if (tfliteType == tflite::BuiltinOperator_MAX_POOL_2D ||
              tfliteType == tflite::BuiltinOperator_AVERAGE_POOL_2D) {
              return OT_Pooling;
          }    // <====== Addition
          else if (tfliteType == tflite::BuiltinOperator_CONCATENATION) {
              ...
          }
      }
      

      4.2 Register the abstract adapt_Pooling() function in class ModelAdaptee in model_tools/src/model_adaptee.h if it has not been registered yet; otherwise, skip this step.

      virtual EE adapt_operator(OperatorType type, ParameterSpec *ps) {
          ...
          // Addition ======>
          else if (type == OT_Pooling) {
              *ps = adapt_Pooling();
          }
          // <====== Addition
          ...
      }
      
      // Addition ======>
      REGISTER_EMPTY_ADAPT_OPERATOR(adapt_Pooling)
      // <====== Addition
      

      4.3 Extract the meta information of the pooling operator from the tflite model: add a ParameterSpec adapt_Pooling() override function in model_tools/src/tflite/tflite_adaptee.h.

      // Addition ======>
      ParameterSpec adapt_Pooling() override
      {
          ParameterSpec curPs;
          memset(&curPs, 0, sizeof(curPs));
          PoolingParamSpec poolingPs;
          memset(&poolingPs, 0, sizeof(poolingPs));
          poolingPs.kernel_t = 1;
          poolingPs.stride_t = 1;
          poolingPs.padding_before = 0;
          poolingPs.padding_after = 0;
          poolingPs.padding_top = 0;
          poolingPs.padding_bottom = 0;
          poolingPs.padding_left = 0;
          poolingPs.padding_right = 0;
          poolingPs.rm = CEIL;
      
          const auto &inputTensor =
              this->tfliteTensors[this->tfliteOperators[this->tfliteOperatorIndex]->inputs[0]];
          const auto &inputShape = inputTensor->shape;
          CHECK_REQUIREMENT(inputShape.size() == 4);
          if (opCode == tflite::BuiltinOperator_MEAN) {  // Interpret as global pooling
              const auto &axisTensor =
                  this->tfliteTensors[this->tfliteOperators[this->tfliteOperatorIndex]->inputs[1]];
              const auto &axisData = tfliteModelBuffer[axisTensor->buffer]->data;
              auto axisPtr = reinterpret_cast<const int32_t *>(axisData.data());
              CHECK_REQUIREMENT(1 == axisPtr[0] && 2 == axisPtr[1]);
              poolingPs.mode = POOLING_MEAN;
              poolingPs.kernel_h = 0;
              poolingPs.kernel_w = 0;
              poolingPs.stride_h = 1;
              poolingPs.stride_w = 1;
          } else {
              const auto &tflitePoolOption =
                  this->tfliteOperators[this->tfliteOperatorIndex]->builtin_options.AsPool2DOptions();
              poolingPs.kernel_h = tflitePoolOption->filter_height;
              poolingPs.kernel_w = tflitePoolOption->filter_width;
              poolingPs.stride_h = tflitePoolOption->stride_h;
              poolingPs.stride_w = tflitePoolOption->stride_w;
              int tfPaddingRoundMode = tflitePoolOption->padding;
              if (tfPaddingRoundMode == 0) {
                  poolingPs.rm = TF_SAME;
      
                  int oLength = (inputShape[2] + poolingPs.stride_w - 1) / poolingPs.stride_w;
                  int padLength = UNI_MAX(
                      (oLength - 1) * poolingPs.stride_w + poolingPs.kernel_w - inputShape[2], 0);
                  poolingPs.padding_left = padLength / 2;
                  poolingPs.padding_right = padLength - poolingPs.padding_left;
      
                  oLength = (inputShape[1] + poolingPs.stride_h - 1) / poolingPs.stride_h;
                  padLength = UNI_MAX(
                      (oLength - 1) * poolingPs.stride_h + poolingPs.kernel_h - inputShape[1], 0);
                  poolingPs.padding_top = padLength / 2;
                  poolingPs.padding_bottom = padLength - poolingPs.padding_top;
              } else if (tfPaddingRoundMode == 1) {
                  poolingPs.rm = TF_VALID;
              } else {
                  UNI_ERROR_LOG("can not process operator location:%d Pooling round mode.\n",
                      this->tfliteOperatorIndex);
              }
              if (opCode == tflite::BuiltinOperator_MAX_POOL_2D) {
                  poolingPs.mode = POOLING_MAX;
              } else if (opCode == tflite::BuiltinOperator_AVERAGE_POOL_2D) {
                  poolingPs.mode = POOLING_MEAN;
              }
              insertActivationOperator(
                  getActivationOperatorType(tflitePoolOption->fused_activation_function));
          }
          curPs.pooling_spec = poolingPs;
          return curPs;
      }
      // <====== Addition
      
    5. Pooling is a non-weight op, so this step is skipped. A sketch of what step 5 involves for a weight-op follows.
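
Since pooling never exercises step 5, here is a self-contained sketch of what weight extraction means for a weight-op. Every structure and name below is a simplified stand-in, not the real model_tools API: a weight-op adaptor fills the operator's ParameterSpec as shown above and additionally copies the framework's weight and bias blobs into the model spec.

    // Self-contained sketch of step 5 for a weight-op (all types are
    // simplified stand-ins for bolt's real ModelSpec/WeightSpec structures).
    #include <vector>

    struct WeightSpec {             // stand-in: the real spec stores raw bytes plus a data type
        std::vector<float> weight;
        std::vector<float> vec;     // bias
    };

    struct FrameworkLayer {         // stand-in for a caffe/onnx/tflite layer
        std::vector<float> blob0;   // weights as stored by the source framework
        std::vector<float> blob1;   // bias
    };

    // A weight-op adaptor copies the weight and bias data into the model spec
    // in addition to filling the operator's ParameterSpec.
    WeightSpec extractWeights(const FrameworkLayer &layer)
    {
        WeightSpec ws;
        ws.weight = layer.blob0;
        ws.vec = layer.blob1;
        return ws;
    }

    int main() { return 0; }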

tensor computing customization

In compute/tensor, you can define any operator for computing.

  1. Create a new operator file in compute/tensor/src;
  2. The computing implementations on various backends (x86 CPU, ARM CPU, GPU) are usually different. Add the corresponding operator implementation to the specific folder in compute/tensor/src for your target backend.

  3. Example: add pooling operator in tensor

    1. Create pooling.cpp in compute/tensor/src; for the complete implementation, refer to compute/tensor/src/pooling.cpp

    2. For ARM CPU, create compute/tensor/src/cpu/arm/pooling.cpp and dispatch to the implementations for the different data types (bnn/fp16/fp32/int8).

    3. For ARM GPU, create compute/tensor/src/gpu/mali/pooling.cpp; only fp16 is supported for now (compute/tensor/src/gpu/mali/fp16/pooling_mali_fp16.cpp). Put your OpenCL kernel file in compute/tensor/src/gpu/mali/cl (for example pooling_max.cl); the .cl file name must be the same as the kernel name. If your kernel has compile options, create a .sh file in common/gcl/tools/kernel_lib_compile/sh/compile; the .sh file name must also be the same as the kernel name. A condensed sketch of the backend dispatch pattern follows.
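
To make the layering concrete, here is a self-contained sketch of the dispatch pattern an operator file in compute/tensor/src follows. The types and helper names are simplified stand-ins; see compute/tensor/src/pooling.cpp for the real code.

    // Self-contained sketch of the backend dispatch pattern (all names are
    // simplified stand-ins for the real bolt types and helpers).
    #include <cstdio>

    typedef int EE;                          // stand-in for bolt's error-code type
    const EE SUCCESS = 0, NOT_SUPPORTED = 1;
    enum Arch { ARM, X86, MALI_GPU };        // stand-in for the target backend

    // The per-backend kernels live in the per-backend folders.
    EE pooling_arm()  { std::printf("cpu/arm kernel (bnn/fp16/fp32/int8 inside)\n"); return SUCCESS; }
    EE pooling_x86()  { std::printf("cpu/x86 kernel\n"); return SUCCESS; }
    EE pooling_mali() { std::printf("gpu/mali kernel (fp16 only)\n"); return SUCCESS; }

    // The operator file in compute/tensor/src only routes to the backend folder.
    EE pooling(Arch arch)
    {
        EE ret = NOT_SUPPORTED;
        switch (arch) {
            case ARM:      ret = pooling_arm();  break;  // compute/tensor/src/cpu/arm
            case X86:      ret = pooling_x86();  break;  // compute/tensor/src/cpu/x86
            case MALI_GPU: ret = pooling_mali(); break;  // compute/tensor/src/gpu/mali
        }
        return ret;
    }

    int main() { return pooling(ARM); }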

inference engine customization

In engine, you can define any operator for the inference of your model.

  1. Add the definition of the specific operator in inference/engine/include;
  2. If the operator's CPU implementation differs from its GPU implementation, split the implementation into a CPU version and a GPU version; if they are the same, skip this step.

  3. Example: add pooling operator in inference/engine

    1. Create pooling.hpp in inference/engine/include and add the definition of the pooling operator; for the complete implementation, refer to inference/engine/include/pooling.hpp

    2. The CPU implementation of the pooling operator differs from its GPU implementation, so pooling needs two versions: CPU and GPU. A self-contained sketch of this split follows the list.

      (1) Create pooling_cpu.hpp and add the pooling CPU implementation in inference/engine/include/cpu; for the complete implementation, refer to inference/engine/include/cpu/pooling_cpu.hpp

      (2) Create pooling_ocl.hpp and add the pooling GPU implementation in inference/engine/include/ocl; for the complete implementation, refer to inference/engine/include/ocl/pooling_ocl.hpp
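
The CPU/GPU split is a plain base-class/derived-class pattern: pooling.hpp declares the shared interface, and the cpu/ocl headers override the execution. The sketch below is self-contained with simplified stand-in names; the real classes live in inference/engine/include and its cpu and ocl subfolders.

    // Self-contained sketch of the CPU/GPU split (names are simplified
    // stand-ins for the real classes in inference/engine/include).
    #include <cstdio>
    #include <memory>

    struct PoolingParamSpec { int kernel_h, kernel_w; };  // stand-in parameters

    class Pooling {                      // cf. inference/engine/include/pooling.hpp
    public:
        explicit Pooling(PoolingParamSpec p) : p(p) {}
        virtual ~Pooling() = default;
        virtual void run() = 0;          // backend-specific execution
    protected:
        PoolingParamSpec p;
    };

    class PoolingCPU : public Pooling {  // cf. inference/engine/include/cpu/pooling_cpu.hpp
    public:
        using Pooling::Pooling;
        void run() override { std::printf("CPU pooling\n"); }
    };

    class PoolingOCL : public Pooling {  // cf. inference/engine/include/ocl/pooling_ocl.hpp
    public:
        using Pooling::Pooling;
        void run() override { std::printf("GPU (OpenCL) pooling\n"); }
    };

    // The engine's factory picks the implementation from the schedule target.
    std::unique_ptr<Pooling> createPooling(bool useGPU, PoolingParamSpec p)
    {
        if (useGPU) {
            return std::make_unique<PoolingOCL>(p);
        }
        return std::make_unique<PoolingCPU>(p);
    }

    int main()
    {
        auto op = createPooling(false, {3, 3});
        op->run();
        return 0;
    }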

How to contribute


submit issue

  • question

    Submit any question you encounter when using Bolt. You can give us feedback by creating issues: go to https://github.com/huawei-noah/bolt/issues, create a new issue and submit it. An issue can be a bug in Bolt, a suggestion for Bolt, or anything you don't understand about Bolt.

  • feature request

    Submit any feature that you want but that has not yet been implemented in Bolt. We have created a special issue for feature requests; you can leave a comment under that issue. We seriously consider the needs of all users and will continue to enrich the functions of Bolt.

pull request

  • add MIT license

    For consistency, please add the MIT license header at the top of your source files, indicating that your code is open to all.

  • provide an executable unit test

    Fork Bolt to your GitHub account, modify your code, and make sure it passes all test cases. Then commit the code and open a pull request on GitHub.
