WEAVER is a high-performance inference engine for machine vision. It executes your deep neural networks on both NVIDIA GPUs and Intel CPUs with maximum performance. As a fully commercial product, it assures industrial-grade quality and long-term support.
Want to know technical details?
WEAVER is a library that comes with both C and modern C++ interfaces. You convert your data structure to WEAVER's tensor (a multi-dimensional array) and then invoke its two functions: deploy and run.
Most other inference engines require Python programming and extensive tweaking. WEAVER is different. It does only two things: (1) model optimization and (2) execution. All you need to deliver is your H5 network file (Keras output). Moreover, when you create your solution with Adaptive Vision Studio, WEAVER is already employed in its ready-made deep learning tools.
The fast pace of development in the field of deep learning is not particularly helpful in developing real-world systems. Most of our customers require long-term support and stable availability of the components they use. WEAVER solves that: it is independent of open-source frameworks, eliminates Python code, and provides long-term support for commercial projects.
We specialize in porting machine learning research results to production and other real-life environments. Our team will help you optimize your neural networks for the given hardware and integrate them with applications written in C, C++, or C#.
Typically, WEAVER provides 3–10× faster execution than an open-source framework without optimizations. Here you can see our internal results achieved for one of our ready-made tools, showing competitive performance also against other available inference engines.
DISCLAIMER: These are results for one particular network type. For other network architectures or runtime environments results may differ significantly.