| Operator | Description |
| --- | --- |
| Convolution | standard 2D & 3D convolution, dilated 2D & 3D convolution, group 2D & 3D convolution, depthwise 2D convolution |
| Deconvolution | 2D deconvolution, onnx ConvTranspose |
| FullyConnected | onnx Gemm, Linear |
| RNN | LSTM, PLSTM, GRU, onnx LBR GRU, onnx Scan; also supports bi-directional mode |
| Matmul | matrix multiplication |
| Resize | linear, nearest, cubic resize modes, same as onnx Resize and Upsample |
| BilateralSliceApply | HDRNet BilateralSliceApply function |
| Pooling | max and mean pooling |
| Scale | y = alpha * x + beta, per channel |
| Prelu | PReLU activation |
| BatchNorm | y = (x - mean) / sqrt(variance + eps), per channel |
| LayerNorm | layer normalization |
| L2Normalization | L2 normalization |
| Reduction | sum, min, max, mean reduction |
| ArgMax | index of the maximum value |
| Softmax | y = exp(x - max(x)) / sum(exp(x - max(x))) |
| SoftmaxWithLoss | softmax with loss (not implemented) |
| LogSoftmax | log softmax |
| Clip | y = clip(x, min, max) |
| Power | y = (scale * x + shift) ^ pow |
| Sigmoid | sigmoid activation |
| Relu | relu (negative slope = 0 when x < 0) |
| LeakyRelu | leaky relu (non-zero negative slope when x < 0) |
| Relu6 | y = relu6(x) = min(max(x, 0), 6) |
| HSwish | y = x * relu6(x + 3) / 6 |
| HSigmoid | hard sigmoid, y = clip((x + 1) / 2, 0, 1) |
| Gelu | GELU activation |
| TanH | y = tanh(x) |
| Mish | y = x * tanh(log(1 + e ^ x)) |
| Erf | erf(x) = 2 / sqrt(pi) * integral from 0 to x of exp(-t^2) dt |
| Gather | onnx Gather, GatherElements, GatherND; also same as embedding |
| Embedding | caffe embedding |
| Pad | constant(0), reflect, edge, symmetric padding |
| Eltwise | sum, min, max, mul (prod), sub, div elementwise operations |
| Concat | concatenate multiple tensors along a given axis |
| Slice | caffe slice |
| TfSlice | onnx or tflite slice, strided slice |
| Cast | change tensor data type |
| Shape | get tensor shape |
| ConstantOfShape | allocate memory (not implemented) |
| Transpose | transpose data, same as caffe permute |
| Reshape | change tensor dimensions |
| Squeeze | remove dimensions of size 1 |
| Unsqueeze | insert dimensions of size 1 |
| Space2Depth | tensorflow space_to_depth function |
| Depth2Space | tensorflow depth_to_space function |
| Constant | onnx constant |
| ChannelResize | channel padding or channel truncation |
| PreAllocatedMemory | allocate memory |
| SharedWeight | represents an onnx/tflite operator input that is not produced by another operator |
| Copy | memory copy |
| Check | tensor-level comparison; the result is used by Jump |
| Repeat | do-while loop for dynamic control flow |
| Jump | if statement for dynamic control flow |
| Attention | transformer global attention mask |
| AttentionMask | transformer local attention mask |
| RelativePositionEmbedding | relative position embedding |
| RelativeShift | relative shift |
| PriorBox | SSD caffe PriorBox |
| DetectionOutput | SSD caffe DetectionOutput |
| Yolov3DetectionOutput | Yolov3 caffe DetectionOutput |
| MultiHeadAttention | transformer multi-head attention |
| SqDiff | tflite squared difference |
| Tile | onnx tile |
| Splice | Kaldi feature extraction (splice) function, same as Gather |
| Neg | y = -x |
| Greater | elementwise tensor comparison, same as onnx greater |
| Where | onnx where |
| SoftPlus | y = log(1 + e ^ x) |
| Exp | y = exp(x) |
| Split | y = x |
| Tdnn | Kaldi TDNN operator (Splice + Linear) |
| Dropout | dropout function |
| TopK | same as onnx TopK |
| SpaceToBatchNd | tensorflow space_to_batch function |
| BatchToSpaceNd | tensorflow batch_to_space function |
| Abs | y = (x > 0) ? x : -x |
| Equal | elementwise tensor comparison, same as onnx equal; also supports tflite NOT_EQUAL |
| Sign | y = sign(x) |
| HSwishNoDiv | y = x * relu6(x + 3) |
| InstanceNorm | instance normalization |
| Expand | onnx expand |
| Scatter | onnx Scatter, ScatterElements, ScatterND |
| Log | y = log(x) |
| Select | y = choice ? a : b, same as tflite select |
| Not | y = !(x), same as onnx not |
| RoIAlign | same as onnx RoiAlign |
| GenerateProposals | same as tensorflow tf.image.generate_bounding_box_proposals |
| Reciprocal | same as onnx reciprocal |
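
Several of the elementwise operators above are fully specified by the formulas in the table. The sketch below evaluates a few of them (Clip, Power, Relu6, HSwish, HSigmoid, SoftPlus, Mish, Softmax) exactly as the table defines them; it uses NumPy, and the function names here are illustrative only, not the framework's actual API. It can serve as a quick reference for checking an implementation against expected values.

```python
import numpy as np

def clip(x, lo, hi):            # Clip:     y = clip(x, min, max)
    return np.minimum(np.maximum(x, lo), hi)

def power(x, scale, shift, p):  # Power:    y = (scale * x + shift) ^ pow
    return np.power(scale * x + shift, p)

def relu6(x):                   # Relu6:    y = min(max(x, 0), 6)
    return clip(x, 0.0, 6.0)

def hswish(x):                  # HSwish:   y = x * relu6(x + 3) / 6
    return x * relu6(x + 3.0) / 6.0

def hsigmoid(x):                # HSigmoid: y = clip((x + 1) / 2, 0, 1)
    return clip((x + 1.0) / 2.0, 0.0, 1.0)

def softplus(x):                # SoftPlus: y = log(1 + e ^ x)
    return np.log1p(np.exp(x))  # note: can overflow for very large x

def mish(x):                    # Mish:     y = x * tanh(log(1 + e ^ x))
    return x * np.tanh(softplus(x))

def softmax(x, axis=-1):        # Softmax:  y = exp(x - max(x)) / sum(exp(x - max(x)))
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

x = np.array([-4.0, -1.0, 0.0, 2.0, 5.0])
print(hswish(x))    # [-0.  -0.3333  0.  1.6667  5.] (approx.)
print(softmax(x))   # values sum to 1 along the last axis
```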