
readNetFromONNX: Can't create layer of type 'Exp' in getLayerInstance #18088


Closed
skaldesh opened this issue Aug 13, 2020 · 11 comments


skaldesh commented Aug 13, 2020

System information

  • OpenCV => 4.4.0-dev (master branch)
  • Operating System / Platform => Arch Linux x86_64
  • Compiler => gcc
  • PyTorch => 1.6.0 (latest Docker container with GPU)

Detailed description

Very similar issue: #15244 (cc @dkurt)

Basically, I exported an ssdlite-mobilenetv3 model from PyTorch to ONNX format. Loading it in OpenCV leads to this error:
create detection model: OpenCV(4.4.0-dev) /tmp/opencv/opencv/modules/dnn/src/dnn.cpp:604: error: (-2:Unspecified error) Can't create layer "1117" of type "Exp" in function 'getLayerInstance'

Enabling fusion did not solve my issue.

Here is the code I used to export the model:

```python
net.load(model_path)  # load the trained .pth weights
net.to('cuda')
net.eval()

onnx_path = "models/model.onnx"
dummy_input = torch.randn(1, 3, 300, 300).cuda()

torch.onnx.export(net, dummy_input, onnx_path, verbose=False,
                  output_names=['scores', 'boxes'])
```

Here are my PyTorch model (mb3-ssd-lite.pth.txt) and the converted ONNX model (mb3-ssd-lite.onnx.txt).

l-bat (Contributor) commented Aug 14, 2020

Exp subgraph:
[screenshot: visualization of the Exp subgraph in the ONNX model]

l-bat (Contributor) commented Aug 14, 2020

You can try to create a custom layer like this:

```python
import cv2 as cv
import numpy as np

class ExpLayer(object):
    def __init__(self, params, blobs):
        super(ExpLayer, self).__init__()

    def getMemoryShapes(self, inputs):
        # Element-wise op: each output has the same shape as its input.
        return inputs

    def forward(self, inputs):
        return [np.exp(inputs[0])]

# Register the handler for every layer of type 'Exp'.
cv.dnn_registerLayer('Exp', ExpLayer)
```

skaldesh (Author) commented:

@l-bat Thanks for the suggestion. I am programming in C++; do you have a hint on how I could do the same there?

dkurt (Member) commented Aug 27, 2020

skaldesh (Author) commented:

We are executing the model on the GPU. Am I right in assuming that with such a custom layer written in plain C++, execution would fall back to the CPU for that layer?

dkurt (Member) commented Aug 27, 2020

That's right. For a more efficient implementation of SSD, use PyTorch @symbolic aliases to create layers such as PriorBox and DetectionOutput. See amdegroot/ssd.pytorch#462 for an example.

fengyuentau (Member) commented:

@dkurt Hi, since you suggested using PyTorch's @symbolic, I checked your PR (amdegroot/ssd.pytorch#462) and saw that you defined the decorator, but I cannot find where it is actually used. Could you explain how this works?

dkurt (Member) commented Mar 22, 2021

@fengyuentau, see https://pytorch.org/docs/stable/onnx.html#custom-operators. It simply creates a node with the specified inputs and attributes in the ONNX graph; you can then parse the ONNX model however you want. You can find some other simple examples here: https://github.com/dkurt/openvino_pytorch_layers/
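To illustrate the mechanism, here is a minimal hypothetical sketch of such a custom operator; the op name, attributes, and output shape below are invented for illustration and are not the exact ssd.pytorch code:

```python
import torch

class DetectionOutput(torch.autograd.Function):
    """Hypothetical sketch of a Function that emits a custom ONNX node."""

    @staticmethod
    def forward(ctx, loc, conf, priors):
        # ONNX export traces the model once, so forward() must return a
        # tensor of the right shape; the values themselves do not matter.
        return torch.zeros(loc.size(0), 1, 100, 7)

    @staticmethod
    def symbolic(g, loc, conf, priors):
        # g.op() emits a node named 'DetectionOutput' into the ONNX graph.
        # Attribute suffixes follow torch.onnx conventions:
        # _i = int, _f = float, _s = string.
        return g.op('DetectionOutput', loc, conf, priors,
                    num_classes_i=21, nms_threshold_f=0.45)
```

During torch.onnx.export, the tracer replaces calls to DetectionOutput.apply(...) with the node produced by symbolic, and the consumer (here, OpenCV's ONNX importer) is then responsible for interpreting that node.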

fengyuentau (Member) commented:

@dkurt thanks very much! I'll give it a try.

fengyuentau (Member) commented:

@dkurt I have some questions about your solution for adding the 'DetectionOutput' node to the ONNX model:

  1. Does the symbolic function in Detect override forward when building the 'DetectionOutput' node? If so, can I leave forward unimplemented and provide only the symbolic function?
  2. Is it necessary for the Detect class to inherit from torch.autograd.Function? I usually use torch.nn.Module rather than torch.autograd.Function, so I am not familiar with the latter.

BTW, I tried to add the 'DetectionOutput' layer to my ONNX model following this guide (https://pytorch.org/docs/stable/onnx.html#custom-operators), but failed because I assumed I could leave the actual op unimplemented and only add the node, which turned out to be wrong (https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html).

dkurt (Member) commented Mar 24, 2021

@fengyuentau, a Function with a symbolic must also have a forward method, because ONNX conversion runs the model once to obtain the tensors.

> Is it necessary for the Detect class to inherit from torch.autograd.Function? Usually I use torch.nn.Module instead of torch.autograd.Function, so I am not familiar with the latter one.

I've only tried torch.autograd.Function. You can then wrap it into a Module, but the symbolic op must be invoked via .apply().
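A minimal sketch of that wrapping, using the thread's Exp op as a stand-in (hypothetical names, not the ssd.pytorch code):

```python
import torch

class ExpOp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Real computation, so the export trace sees correct tensors.
        return torch.exp(x)

    @staticmethod
    def symbolic(g, x):
        # Emit a plain ONNX 'Exp' node during export.
        return g.op('Exp', x)

class ExpModule(torch.nn.Module):
    def forward(self, x):
        # The Function must be called through .apply() so that
        # torch.onnx picks up its symbolic() during export.
        return ExpOp.apply(x)
```

An instance of ExpModule can then be composed into a larger model like any other Module and exported with torch.onnx.export.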
