This patch tries to resolve multiple issues related to parameter
configuration:
Firstly, each DNN filter duplicates DNN_COMMON_OPTIONS, which should
be the backend's common options.
Secondly, backend options are hidden behind the scenes. They are exposed
to the user as a single AV_OPT_TYPE_STRING backend_configs option and
parsed by each backend, so the help message doesn't show which options
each backend supports.
Thirdly, DNN backends duplicate DNN_BACKEND_COMMON_OPTIONS.
Last but not least, passing backend options via AV_OPT_TYPE_STRING makes
it hard, if not impossible, to pass AV_OPT_TYPE_BINARY options to a backend.
This patch puts the backend common options and each backend's own options
inside DnnContext to reduce code duplication, make options user friendly,
and make them easy to extend for future use cases.
For example,
./ffmpeg -h filter=dnn_processing
dnn_processing AVOptions:
dnn_backend <int> ..FV....... DNN backend (from INT_MIN to INT_MAX) (default tensorflow)
tensorflow 1 ..FV....... tensorflow backend flag
openvino 2 ..FV....... openvino backend flag
torch 3 ..FV....... torch backend flag
dnn_base AVOptions:
model <string> ..F........ path to model file
input <string> ..F........ input name of the model
output <string> ..F........ output name of the model
backend_configs <string> ..F.......P backend configs (deprecated)
options <string> ..F.......P backend configs (deprecated)
nireq <int> ..F........ number of request (from 0 to INT_MAX) (default 0)
async <boolean> ..F........ use DNN async inference (default true)
device <string> ..F........ device to run model
dnn_tensorflow AVOptions:
sess_config <string> ..F........ config for SessionOptions
dnn_openvino AVOptions:
batch_size <int> ..F........ batch size per request (from 1 to 1000) (default 1)
input_resizable <boolean> ..F........ can input be resizable or not (default false)
layout <int> ..F........ input layout of model (from 0 to 2) (default none)
none 0 ..F........ none
nchw 1 ..F........ nchw
nhwc 2 ..F........ nhwc
scale <float> ..F........ Add scale preprocess operation. Divide each element of input by specified value. (from INT_MIN to INT_MAX) (default 0)
mean <float> ..F........ Add mean preprocess operation. Subtract specified value from each element of input. (from INT_MIN to INT_MAX) (default 0)
dnn_th AVOptions:
optimize <int> ..F........ turn on graph executor optimization (from 0 to 1) (default 0)
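Roughly, the idea is that DnnContext carries the backend-common options
directly and nests each backend's private options inside it, so a single
options hierarchy can expose everything shown in the help output above.
A minimal sketch of that layout (type and field names here are
illustrative, derived from the options listed above, not copied from the
patch):

    typedef struct DnnContext {
        /* backend-common options (dnn_base) */
        char *model_filename;
        char *model_inputname;
        char *model_outputname;
        int   nireq;
        int   async;
        char *device;

        /* per-backend option blocks, each with its own AVOption table */
        struct { char *sess_config; } tf_option;         /* dnn_tensorflow */
        struct { int batch_size, input_resizable, layout;
                 float scale, mean; } ov_option;          /* dnn_openvino */
        struct { int optimize; } torch_option;            /* dnn_th */
    } DnnContext;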
Signed-off-by: Zhao Zhili <zhilizhao@tencent.com>
Reviewed-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
There are lots of files that don't need it: The number of object
files that actually need it went down from 2011 to 884 here.
Keep it for external users in order to not cause breakages.
Also improve the other headers a bit while at it.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
PyTorch is an open source machine learning framework that accelerates
the path from research prototyping to production deployment. Official
website: https://pytorch.org/. We refer to the C++ library of PyTorch
as LibTorch below.
To build FFmpeg with LibTorch, please take the following steps as
a reference:
1. Download the LibTorch C++ library from
https://pytorch.org/get-started/locally/;
select C++/Java as the language and the other options as needed.
Please download the cxx11 ABI version
(libtorch-cxx11-abi-shared-with-deps-*.zip).
2. Unzip the file to a directory of your choice, with the command
unzip libtorch-shared-with-deps-latest.zip -d your_dir
3. Add libtorch_root/libtorch/include and
libtorch_root/libtorch/include/torch/csrc/api/include to $PATH,
and add libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH.
4. Configure FFmpeg with ../configure --enable-libtorch \
--extra-cflags=-I/libtorch_root/libtorch/include \
--extra-cflags=-I/libtorch_root/libtorch/include/torch/csrc/api/include \
--extra-ldflags=-L/libtorch_root/libtorch/lib/
5. make
To run FFmpeg DNN inference with LibTorch backend:
./ffmpeg -i input.jpg -vf \
dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg
The LibTorch_model.pt can be generated with Python's torch.jit.script()
API; see https://pytorch.org/tutorials/advanced/cpp_export.html, the
official PyTorch guide on how to convert and load a TorchScript model.
Please note that torch.jit.trace() is not recommended, since it does
not support varying input sizes.
Signed-off-by: Ting Fu <ting.fu@intel.com>
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
Now, when using the openvino backend, the user doesn't need to set
input/output names on the command line; model ports are detected
automatically.
For example:
ffmpeg -i input.png -vf \
dnn_detect=dnn_backend=openvino:model=model.xml:input=image:\
output=detection_out -y output.png
can be simplified to:
ffmpeg -i input.png -vf dnn_detect=dnn_backend=openvino:model=model.xml\
-y output.png
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
Reviewed-by: Guo Yejun <yejun.guo@intel.com>
This commit prepares the filter side to handle specific error codes
from the DNN backends instead of the current generic DNN_ERROR.
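As an illustration of what this enables on the filter side, a call site
can propagate the backend's error code instead of mapping every failure
to a generic one; a rough sketch, assuming the ff_dnn_execute_model()
helper used by the DNN filters:

    /* before: backends only report generic success/failure */
    if (ff_dnn_execute_model(&ctx->dnnctx, in, out) != DNN_SUCCESS) {
        av_frame_free(&in);
        return AVERROR(EIO);
    }

    /* after: the specific error code from the backend is propagated */
    ret = ff_dnn_execute_model(&ctx->dnnctx, in, out);
    if (ret < 0) {
        av_frame_free(&in);
        return ret;
    }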
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Remove the async flag from the filters' perspective after the
unification of async and sync modes in the DNN backend.
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
This commit unifies the async and sync modes from the DNN filters'
perspective. As of this commit, the Native backend only supports the
synchronous execution mode.
The user can now switch between async and sync mode by using the
'async' option in backend_configs. The value 1 selects async and 0
selects sync execution.
This commit affects the following filters:
1. vf_dnn_classify
2. vf_dnn_detect
3. vf_dnn_processing
4. vf_sr
5. vf_derain
This commit also updates the filters vf_dnn_detect and vf_dnn_classify
to send only the input frame to the DNN backends, passing NULL as the
output frame instead of the input frame.
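A rough sketch of the resulting filter-side flow (assuming helpers along
the lines of ff_dnn_execute_model() and ff_dnn_get_result() from the
common DNN filter code; names and error handling are illustrative):

    /* queue the input frame; dnn_detect/dnn_classify pass NULL as the
     * output frame, since they only attach side data to the input */
    if (ff_dnn_execute_model(&ctx->dnnctx, in, NULL) != DNN_SUCCESS)
        return AVERROR(EIO);

    /* drain completed inferences; the same loop serves sync and async */
    do {
        AVFrame *in_frame = NULL, *out_frame = NULL;
        async_state = ff_dnn_get_result(&ctx->dnnctx, &in_frame, &out_frame);
        if (async_state == DAST_SUCCESS) {
            ret = ff_filter_frame(outlink, in_frame);
            if (ret < 0)
                return ret;
        }
    } while (async_state == DAST_SUCCESS);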
Signed-off-by: Shubhanshu Saxena <shubhanshu.e01@gmail.com>
Different model function types require different parameters; for
example, object detection detects many objects (cat/dog/...) in the
frame, while classification needs to know which object (cat or dog)
it is going to classify.
With the current interface, supporting a new requirement means adding
a new function with more parameters. With this change, we can instead
add a new struct (for example DNNExecClassifyParams) based on
DNNExecBaseParams, and keep using the current execute_model interface
with only the params changed.
This way, the backend knows whether the model is used for frame
processing, detection, classification, etc. Each function type has
different behavior in the backend when handling the model's
input/output data.
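A minimal sketch of that layout (field names are illustrative, following
the description above): the base params hold what every function type
needs, and a classify-specific struct simply embeds them and adds its
extra fields, so execute_model keeps its signature.

    typedef struct DNNExecBaseParams {
        const char  *input_name;
        const char **output_names;
        uint32_t     nb_output;
        AVFrame     *in_frame;
        AVFrame     *out_frame;
    } DNNExecBaseParams;

    /* classification additionally needs to know which detected object
     * category it should classify, so it extends the base params instead
     * of introducing a new entry point */
    typedef struct DNNExecClassifyParams {
        DNNExecBaseParams base;
        const char *target;
    } DNNExecClassifyParams;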
Signed-off-by: Guo, Yejun <yejun.guo@intel.com>