AI processing at the edge

Lattice Semiconductor has launched the latest version of its complete solutions stack for on-device AI processing at the Edge, Lattice sensAI™ 3.0.

The latest version of the stack adds support for the CrossLink-NX™ family of FPGAs for low power smart vision applications and features a customized convolutional neural network (CNN) accelerator IP, a flexible IP core that simplifies implementation of common CNNs and is optimized to further leverage the parallel processing capabilities of FPGAs. With the addition of support for CrossLink-NX FPGAs, Lattice sensAI brings new levels of power and performance to smart vision applications in the surveillance/security, robotics, automotive, and computing markets.


To address data security, latency, and privacy issues, developers want to move the AI processing that powers their smart vision and other AI applications from the cloud to the Edge. Most Edge devices are battery-powered or otherwise sensitive to power consumption, so developers need hardware and software solutions that deliver the processing capabilities needed for AI applications, while keeping power consumption as low as possible. By enhancing the sensAI stack, Lattice is widening the range of power and performance options available to customers. For applications like smart vision that require higher Edge AI performance, CrossLink-NX FPGAs running sensAI software deliver twice the performance at half the power when compared to prior releases of the solutions stack.


“With enhancements optimized for our new low power CrossLink-NX family of FPGAs, the latest release of our Lattice sensAI stack provides a compelling blend of performance and power consumption,” said Hussein Osman, Market Segment Manager, Lattice Semiconductor. “Lattice is building on the success we’ve had to date with sensAI by continually improving the stack’s capabilities and ease-of-use for customers, and by releasing new, ready-to-implement application demos and reference designs for popular AI applications.”


New and updated features of the sensAI solutions stack include:


• New CNN engine IP and compiler support for CrossLink-NX – the stack now supports a customized CNN accelerator IP running on a CrossLink-NX FPGA that takes advantage of the underlying parallel processing architecture of the FPGA. Updates to the NN compiler software tool let developers easily compile a trained NN model and download it to a CrossLink-NX FPGA; a rough sketch of the model-training step that precedes the compiler appears after this list.

• CrossLink-NX-based object counting demo – a VGG-based object counting demo operating on a CrossLink-NX FPGA delivers 10 frames per second while consuming only 200 mW. Object counting is a common smart vision application used in the surveillance/security, industrial, automotive, and robotics markets.

• Optimized FPGA architecture for CrossLink-NX – when running on a CrossLink-NX FPGA, the sensAI solutions stack offers up to 2.5 Mb of distributed memory and block RAM and additional DSP resources for efficient on-chip implementation of AI workloads to reduce the need for cloud-based analytics.

• Up to 75 percent lower power consumption – CrossLink-NX FPGAs are manufactured in a 28 nm FD-SOI process that delivers a 75 percent reduction in power compared to similar competing FPGAs.

• High-performance I/O – many components (image sensors, application processors, etc.) used in smart vision systems require support for the MIPI I/O standard. Smart vision is one of the target applications for sensAI, and CrossLink-NX devices are currently the only low-power FPGAs to deliver MIPI I/O speeds of up to 2.5 Gbps, making them an ideal hardware platform for sensAI applications requiring MIPI support. CrossLink-NX FPGA I/Os offer instant-on performance and configure themselves in less than 3 ms, with full-device configuration in as little as 8 ms.

• Increased neural network architecture support – previous versions of sensAI supported the VGG and MobileNet v1 neural network models. The latest version of the stack adds support for the MobileNet v2, SSD, and ResNet models on the Lattice ECP5™ family of general-purpose FPGAs.
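The NN compiler step mentioned above starts from a conventionally trained model. As a rough illustration of that front end only, the sketch below trains and quantizes a small CNN in TensorFlow/Keras; the architecture, input size, class count, and output file name are illustrative assumptions rather than anything taken from Lattice's documentation, and the hand-off format and command line of the Lattice NN compiler itself are defined by the sensAI tooling, so they are not mocked up here.

```python
# Minimal sketch of the model-preparation step that would precede the Lattice
# NN compiler. Assumptions: a toy separable-convolution CNN, 64x64 RGB input,
# 10 classes, and a quantized TFLite export -- none of these come from the
# sensAI documentation.
import tensorflow as tf

# A small MobileNet-style classifier, sized with an eye to the roughly 2.5 Mb
# of on-chip memory noted above for CrossLink-NX.
model = tf.keras.Sequential([
    tf.keras.layers.SeparableConv2D(16, 3, strides=2, activation="relu",
                                    input_shape=(64, 64, 3)),
    tf.keras.layers.SeparableConv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on your own dataset here, e.g.:
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)

# Post-training weight quantization keeps the model compact for an edge target.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("object_counter_quantized.tflite", "wb") as f:
    f.write(converter.convert())
```

A trained model along these lines is the kind of artifact the NN compiler then maps onto the CNN accelerator IP and downloads to the FPGA; consult the sensAI documentation for the exact model formats and tool options it supports.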
