FPGA-based Engine for DNNs

The Omnitek DPU (Deep Learning Processing Unit) is a configurable IP core built from a suite of FPGA IP blocks that provide the key components needed to construct inference engines for running DNNs across a wide range of machine learning applications. It is accompanied by an SDK that supports the development of applications integrating the DPU functionality.

(Image: Omnitek)

The resulting inference engines can target a range of devices, from small FPGAs with embedded processor control for edge applications to PCI Express cards with large FPGAs for data centre deployments. The DPU is programmed by creating a model of the chosen neural network in C/C++ or Python using standard frameworks such as TensorFlow. The DPU SDK compiler converts the model into microcode for execution on the DPU, while a quantizer converts the weights and biases into the selected reduced-precision processing format.
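To make the quantization step more concrete, below is a minimal, illustrative Python sketch of symmetric post-training quantization of a weight tensor to signed INT8. It is not the Omnitek quantizer itself: the function name quantize_tensor, the per-tensor scaling scheme, and the INT8 target are assumptions chosen for illustration, since the article does not specify the DPU's exact quantization algorithm or supported formats.

```python
import numpy as np

def quantize_tensor(weights, num_bits=8):
    """Illustrative symmetric quantization of a float tensor to a
    reduced-precision signed integer format (INT8 by default).

    A generic sketch of post-training quantization, not the Omnitek
    DPU quantizer; algorithm and formats are assumptions.
    """
    qmax = 2 ** (num_bits - 1) - 1             # e.g. 127 for INT8
    scale = np.max(np.abs(weights)) / qmax     # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

# Example: quantize a small weight matrix and check the reconstruction error
w = np.random.randn(4, 4).astype(np.float32)
w_q, s = quantize_tensor(w)
print("max abs error:", np.max(np.abs(w - w_q.astype(np.float32) * s)))
```

In a flow like the one described, such reduced-precision weights and biases would then be packed alongside the compiler-generated microcode for execution on the DPU.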
