Avnet, a major distributor, recently announced its decision to put considerable weight behind Mipsology’s Zebra software—specifically in selling the software to its customers based in Asia. Zebra software is said to break down the complexity of FPGAs and help designers accelerate deep learning inference.

FPGAs offer low-latency performance in AI inference operations. But FPGAs have historically posed a major drawback: they require the expertise of highly specialized (and scarce) engineers who are trained to work with FPGAs. Mipsology claims that its proven Zebra software enables OEMs to incorporate FPGAs into their designs without the need for this specialized FPGA expertise. 

Like Avnet, several other major suppliers and distributors are developing solutions that make FPGAs more accessible to the everyday designer. Let’s start with a few more details about Zebra.

Zebra: A Plug-and-Play FPGA Solution?

Mipsology explains that Zebra removes the roadblock of FPGA expertise by offering plug-and-play solutions for AI applications. According to the AI-software company, the performance of FPGA-based acceleration far outstrips that of the CPUs and GPUs it replaces. Mipsology says that with Zebra, designers can run neural networks defined in frameworks such as PyTorch, Caffe, or TensorFlow.

Diagram of Mipsology’s Zebra. Image used courtesy of Xilinx and Mipsology
 

In addition to removing the technical barriers that formerly impeded the deployment of FPGAs, Zebra makes plug-and-play replacement of existing solutions possible. OEMs aiming to exploit FPGA-based acceleration will be able to do so with no code changes.
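The "no code changes" claim means the application's inference code stays the same while the compute engine underneath is swapped. As a rough illustration in plain Python (this is not Mipsology's API; the backend names and functions here are hypothetical), the idea is that the same call site can be served by interchangeable backends:

```python
# Illustrative sketch only: backend names are hypothetical, not Mipsology's API.
# The point: the inference call site is identical no matter which engine runs it.

def cpu_matmul(a, b):
    """Reference matrix multiply executed on the CPU."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def fpga_matmul(a, b):
    """Stand-in for an accelerated kernel; a real deployment would dispatch
    to the FPGA here without the caller changing a line of code."""
    return cpu_matmul(a, b)  # same math, different (mock) execution engine

def infer(weights, activations, backend):
    # Application code is backend-agnostic: swapping CPU inference for FPGA
    # acceleration requires no change at this call site.
    return backend(activations, weights)

w = [[1, 0], [0, 1]]             # identity weights
x = [[3, 4]]                     # one input vector
print(infer(w, x, cpu_matmul))   # [[3, 4]]
print(infer(w, x, fpga_matmul))  # [[3, 4]] -- identical result, different backend
```

The design choice this sketches is dependency injection of the compute engine: because the caller never names the hardware, the accelerated path can be dropped in transparently.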

Xilinx’s Alveo data center accelerator cards will be Avnet’s first offering featuring Zebra. According to Alan Chui, president of supplier business management at Avnet Asia, “the combination of Avnet’s comprehensive design services, Alveo’s best-in-class FPGA acceleration, and Mipsology’s Zebra technology empowers our customers with a low-cost, high-performance and long lifespan solution for AI neural networking inference.”

Xilinx Gives FPGAs a Facelift with “ACAPs”

No one computational architecture can best fit all applications. As Xilinx describes in its white paper on Versal Premium, there were, until now, three choices:

  1. Scalar processing elements, such as CPUs, are effective at complex algorithms with diverse decision trees and are supported by a broad set of libraries, but they offer only limited scalability.
  2. Vector processing elements, such as DSPs and GPUs, are more efficient at a narrower set of parallelizable compute functions, but they suffer from high latency.
  3. FPGAs feature programmable logic that can be customized for latency-critical, real-time applications, but algorithmic changes are time-consuming to implement.

Versal Premium integrates three types of programmable engines. Image used courtesy of Xilinx

Xilinx’s ACAP, or adaptive compute acceleration platform, is the first of its kind to deliver all three processing element types (CPUs, DSPs/GPUs, and FPGA-style programmable logic) in one package. In Versal Premium, the three engine types are coupled together by a high-bandwidth network-on-chip (NoC), which provides memory-mapped access to each of them. This architecture allows more focused customization and greater performance than any of the three original architectures could achieve on its own.

Xilinx’s new ACAP, Versal Premium, is designed for network and cloud applications.

Xilinx’s Versal Premium. Image used courtesy of Xilinx

According to Xilinx, Versal Premium offers highly integrated, networked, and power-optimized cores. The FPGA inventor claims that Versal Premium has the highest bandwidth and compute density available on an adaptable platform. It is designed for cloud applications requiring scalable and adaptable application acceleration. 

Xilinx also offers the Versal AI Core, an ACAP aimed primarily at AI applications. The company touts it as offering a hundredfold increase in performance over today’s server-class CPUs.

TI Brings the Power of Stackable ICs to FPGAs

TI’s TPS546D24A SWIFT gives engineers a way to stack four ICs for four times the output current, which is especially useful for power-dense FPGAs. The 40 A DC-DC buck converter is available in a 7 mm × 5 mm LQFN-CLIP(40) package, and four of the devices can be interconnected to provide a full 160 A output.
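The stacking arithmetic is straightforward: rated output current scales with the number of paralleled devices. A quick sanity check in Python (the 40 A per-device rating comes from TI's figures above; the 0.85 V core-rail voltage is an assumption for illustration, not a TI spec):

```python
# Current scaling when stacking TPS546D24A converters.
PER_DEVICE_A = 40    # rated output current of one device, amps (from the text)
V_CORE = 0.85        # hypothetical FPGA core-rail voltage (assumption)

for n in (1, 2, 3, 4):
    total_a = n * PER_DEVICE_A
    print(f"{n} device(s): {total_a} A total output")

# Four stacked devices reach TI's stated 160 A full output; at an assumed
# 0.85 V core rail that corresponds to roughly 136 W of deliverable power.
print(f"Power at {V_CORE} V: {4 * PER_DEVICE_A * V_CORE:.0f} W")
```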

Functional diagram of TPS546D24A

Functional diagram of TPS546D24A. Image used courtesy of Texas Instruments
 

With its unique stackability, the TPS546D24A buck converter offers both the small size and the thermal performance required for FPGA-based applications. High efficiency is attained in part through switching frequencies of up to 1.5 MHz, and a 0.9-mΩ low-side MOSFET allows the unit to achieve what TI claims is 3.5% higher efficiency than competing DC-DC buck converters.
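One way to see why the 0.9-mΩ low-side FET matters: in a synchronous buck converter the low-side MOSFET conducts for roughly (1 − D) of each switching cycle, so its conduction loss is about (1 − D)·I²·R_DS(on). A back-of-envelope estimate in Python (only the 0.9 mΩ and 40 A figures come from the text; the 12 V input and 1.0 V output rails are assumptions for illustration):

```python
# Back-of-envelope low-side conduction loss for a synchronous buck converter.
R_ON = 0.9e-3   # low-side MOSFET on-resistance, ohms (from TI's figure)
I_OUT = 40.0    # full rated output current of one device, amps (from the text)
V_IN = 12.0     # assumed input rail, volts
V_OUT = 1.0     # assumed FPGA core rail, volts

duty = V_OUT / V_IN                        # high-side duty cycle, D ~ Vout/Vin
p_low_side = (1 - duty) * I_OUT**2 * R_ON  # conduction loss ~ (1 - D) * I^2 * R
print(f"Duty cycle D ~ {duty:.3f}")
print(f"Low-side conduction loss ~ {p_low_side:.2f} W at {I_OUT:.0f} A")
```

Under these assumptions the low-side FET dissipates only about 1.3 W at full 40 A load, which illustrates why sub-milliohm on-resistance is significant on a low-voltage, high-current FPGA rail where the low-side device conducts for most of the cycle.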

The SWIFT device features a PMBus interface with a selectable internal compensation network. This should allow designers to eliminate up to six external compensation components, shrinking the overall size of the power supply by more than 10% in high-current FPGA applications.

One of the device’s more important features is a maximum output error of <10%. This is a necessary virtue when powering FPGAs, which can tolerate little variation in operating power supply voltage.

What’s Your Take? 

If you’re a designer who specializes in FPGAs, what are your thoughts on these new developments? Will they change the way you work with FPGAs? What about engineers with little to no FPGA experience? Do you see these new products (or others) bringing FPGAs down to a level that makes sense for your designs? Share your feedback in the comments below.

Source: All About Circuits News