Jul. 18, 2018 – Machine learning-based applications have become prevalent across consumer, medical, and automotive markets. Still, the underlying architectures and implementations are evolving rapidly to best fit the throughput, latency, and power-efficiency requirements of an ever-increasing application space. Although ML is often associated with the massively parallel compute engines of GPU hardware, the opportunities for ML designs extend to cost-sensitive, low-power markets. Implementing an ML inference engine on an SoC is a great fit for these applications – this article briefly reviews ML basics, then highlights what the embedded FPGA team at Flex Logix is pursuing in this area.
Introduction
Machine learning refers to the capability of an electronic system to: