Deep Learning Inference Engine for Machine Vision


Overview

WEAVER is a high-performance inference engine for machine vision. It executes your deep neural networks on both NVIDIA GPUs and Intel CPUs with minimal overhead. As a fully commercial product, it offers industrial-grade quality and long-term support.

 

Want to know technical details?

WEAVER is a library with both C and modern C++ interfaces. You convert your data structure to WEAVER's tensor (a multi-dimensional array), and then you can invoke its two functions: deploy and run.
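The conversion step can be pictured with a minimal sketch. The Tensor type below is purely illustrative, not WEAVER's actual tensor type; it only demonstrates the general idea of flattening image data into a shape-annotated contiguous buffer, which is the kind of structure the text describes:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative multi-dimensional tensor: a flat, row-major float buffer
// plus a shape vector (here NHWC: batch, height, width, channels).
// This is NOT WEAVER's real tensor type, only a sketch of the concept.
struct Tensor {
    std::vector<std::size_t> shape;  // e.g. {batch, height, width, channels}
    std::vector<float> data;         // contiguous row-major storage

    // Wrap a single image (already stored as interleaved HWC floats)
    // as a batch-of-one tensor.
    static Tensor fromImage(const std::vector<float>& pixels,
                            std::size_t height, std::size_t width,
                            std::size_t channels) {
        assert(pixels.size() == height * width * channels);
        Tensor t;
        t.shape = {1, height, width, channels};
        t.data = pixels;
        return t;
    }

    // Total number of elements implied by the shape.
    std::size_t elementCount() const {
        std::size_t n = 1;
        for (std::size_t d : shape) n *= d;
        return n;
    }
};
```

The point of such a structure is that the engine only needs a pointer to contiguous data plus the shape; the caller keeps ownership of the original image representation.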

See also: Documentation · Minimal example · Supported Keras layers

Features

Ease of use

Most other inference engines require Python programming and extensive manual tweaking. WEAVER is different. It does only two things: (1) model optimization and (2) execution. All you need to deliver is your H5 network file (the Keras output format). Moreover, when you create your solution with Adaptive Vision Studio, WEAVER is already employed in its ready-made deep learning tools.
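As a rough sketch of that two-step workflow, the snippet below mimics the deploy/run pair named earlier. The Model type and both signatures are assumptions made for illustration, stubbed out so the example is self-contained; they are not WEAVER's actual API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// ASSUMED types and signatures, for illustration only.
struct Model {
    std::string path;       // source H5 file the model was built from
    bool optimized = false; // whether optimization has been applied
};

// Step 1 (sketch): load and optimize a network from a Keras H5 file.
// A real engine would parse the file and optimize for the target device;
// this stub only records that the step happened.
Model deploy(const std::string& h5Path) {
    return Model{h5Path, true};
}

// Step 2 (sketch): execute the deployed network on an input tensor.
// This stub simply echoes the input instead of running inference.
std::vector<float> run(const Model& model, const std::vector<float>& input) {
    assert(model.optimized);  // deploy must precede run
    return input;
}
```

The design point is the split itself: deploy is called once at startup (where optimization cost is acceptable), while run is called per frame on the hot path.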

 

Long-term support

The fast pace of development in deep learning is not particularly helpful when building real-world systems. Most of our customers require long-term support and stable availability of the components they use. WEAVER solves that: it is independent of the open-source frameworks, eliminates Python code, and provides long-term support for commercial projects.

 

Bespoke service

We specialize in porting machine learning research results to production and other real-life environments. Our team will help you optimize your neural networks for your target hardware and integrate them with applications written in C, C++, or C#.

Performance

Typically, WEAVER provides 3-10 times faster execution than an open-source framework without optimizations. The charts referenced below show internal results achieved for one of our ready-made tools, with competitive performance also against other available inference engines.

DISCLAIMER: These are results for one particular network type. For other network architectures or runtime environments results may differ significantly.

 


 
 
Special notice: This website is copyrighted by Fadracer Technology Inc. (汎叡有限公司). Please respect intellectual property rights; do not reproduce, copy, or use its content commercially without permission.
All trademark names used belong to their respective registered owners.
Copyright © 2009 Fadracer Technology Inc. All Rights Reserved.
Fadracer Technology Inc. TEL: +886-2-2585-8592 FAX: +886-2-2598-8802 E-MAIL: sales.tp@fadracer.com