A mid-size manufacturer needed to add cybersecurity and network monitoring workloads to its general-purpose data center, creating a spike in demand for compute power. Deploying a mainstream network interface card would have resulted in critical (and costly) packet loss and degraded performance.

The better solution: an Intel® FPGA Programmable Acceleration Card (Intel® FPGA PAC), customized for the cybersecurity workload at hand. Adding the PAC relieved the server CPU of the heaviest compute tasks, freeing up processing resources for essential business applications and services.

Field-programmable gate arrays (FPGAs) are exactly what their name implies: acceleration devices that can be reconfigured after manufacture to suit a given environment or workload. Their array of logic gates, memory blocks and input/output wiring can be quickly interconnected to create a hardware acceleration circuit tailored to almost any operation.

FPGAs were introduced decades ago as a highly flexible, low-latency acceleration option. Although application-specific integrated circuits (ASICs) remain faster and more efficient, their rigidity increasingly disqualifies them as a viable choice in the high-performance computing (HPC) arena. On the other end of the spectrum, graphics processing units (GPUs) deliver brute-force throughput but are often inefficient and come with significant power and cooling demands. The programmability of FPGAs proved highly appealing for workflow acceleration, but it was also a double-edged sword: FPGAs demanded specialized, hard-to-find coding expertise, earning them a reputation for being difficult and time-consuming to program.

That's all changed now. Modern manufacturing methods have closed the performance and efficiency gaps between FPGAs and ASICs, and modern programming methods often make FPGAs a drop-in accelerator solution.

New Intel® FPGA technologies offer fast deployment and standardization through Intel® Programmable Acceleration Cards (PACs) and the Intel® Acceleration Stack. Independent solution vendors have pre-designed FPGA accelerator solutions that integrate seamlessly into shared libraries, software frameworks and custom software applications.

Another technology worth noting is the Open Programmable Acceleration Engine (OPAE), which simplifies and streamlines the integration of FPGA acceleration devices into software applications and environments. Providing users with a consistent API across FPGA products and platforms, the OPAE abstracts the FPGA hardware to ease resource access and management without significantly affecting performance — and in so doing, removes earlier barriers to entry for FPGA workflow acceleration.
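
To make that concrete, the sketch below shows roughly what the consistent OPAE interface looks like from an application's point of view: it uses the publicly documented OPAE C API to discover and open an accelerator resource. Treat it as a minimal illustration; exact headers, flags and error handling depend on the installed opae-sdk version, and a real application would go on to map MMIO registers and share buffers with the accelerator.

```cpp
// Minimal OPAE sketch (assumes the OPAE C API from the opae-sdk package;
// error checking is abbreviated for clarity).
#include <opae/fpga.h>
#include <cstdio>

int main() {
    fpga_properties filter = nullptr;
    fpga_token token;
    fpga_handle handle;
    uint32_t num_matches = 0;

    // Build a filter that matches accelerator (AFU) resources.
    fpgaGetProperties(nullptr, &filter);
    fpgaPropertiesSetObjectType(filter, FPGA_ACCELERATOR);

    // Discover matching FPGA resources on this system.
    fpgaEnumerate(&filter, 1, &token, 1, &num_matches);
    if (num_matches < 1) {
        std::fprintf(stderr, "No FPGA accelerator found\n");
        fpgaDestroyProperties(&filter);
        return 1;
    }

    // Open the accelerator; the handle would then be used for MMIO
    // access and shared-buffer management.
    fpgaOpen(token, &handle, 0);
    std::printf("FPGA accelerator opened\n");

    fpgaClose(handle);
    fpgaDestroyToken(&token);
    fpgaDestroyProperties(&filter);
    return 0;
}
```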

Creating a custom FPGA acceleration solution once took days or weeks to burn in, but that's in the past, thanks to modern tooling like Intel® oneAPI, a unified programming model that simplifies development across multiple accelerators, and the OpenVINO™ toolkit for optimizing neural networks.
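
To illustrate that unified model, here is a minimal DPC++/SYCL sketch of the kind of kernel oneAPI lets you write once and retarget across CPUs, GPUs and FPGAs. It is written against SYCL 2020 as shipped with recent oneAPI toolkits and uses the default device selector; an actual FPGA build would add an FPGA selector and an offline-compilation step, so treat this as an outline under those assumptions rather than a complete flow.

```cpp
// Vector addition offloaded through a SYCL queue: the same source can
// target different accelerators by changing the device selector.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // Pick the most capable device available (CPU, GPU or FPGA emulator).
    sycl::queue q{sycl::default_selector_v};

    {
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler &h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            h.parallel_for(sycl::range<1>(N),
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffers fall out of scope; results are copied back to the host

    std::cout << "c[0] = " << c[0] << "\n";  // expect 3
    return 0;
}
```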

Today there are many use cases requiring a flexible, power-efficient accelerator — and in those use cases, FPGAs are fast becoming the preferred choice over GPUs and ASICs. 

FPGAs offer many benefits in many environments

In a "suitability matrix" of accelerator choices, FPGAs are a stand out option, especially when measured against other accelerator options across a range of metrics, including relatively low price, performance per-watt per-volume, ruggedness, security, time to market, increased lifecycle and total cost of ownership (TCO). FPGAs are the only solution in which the hardware can be tailored repeatedly to fit the software exactly.

In short, FPGAs win in spaces where the workflow is highly dynamic and flexible, where power consumption must be minimized, where low latency is a priority or where the accelerator must function independently of the CPU.

The challenge is determining when FPGAs make sense. What follows are a few use cases in which we'll compare the three options and apply a suitability matrix to identify the logical acceleration choice.

Use case 1: Edge to Core – when you need a flexible, hardened edge solution

Edge computing will play a critical role in the emerging 5G universe. But the edge also presents unique challenges around the need for flexibility and ruggedness. What's more, many 5G technologies are being rolled out even while their standards are still being drafted.

From smartphones to in-home personal assistants, the right acceleration solution will play a crucial role in realizing the possibilities of edge AI. So, let's apply our suitability matrix to the choices:

  • GPUs would need to be scaled down for edge applications, and even then, they would run too hot and would not be rugged enough.
  • ASICs are hardened, purpose-built, solid-state devices that appear well suited to the edge. But if the edge environment ever changes, the ASIC would need to be re-engineered. An ASIC therefore requires rigid standards to be in place, yet standards are almost certain to evolve.
  • FPGAs offer energy-efficient operation at a relatively low unit cost, and their flexibility reduces time to market because a workflow can be accelerated while its standard is still in draft. As the 5G standard evolves, the easy programmability of FPGAs delivers nearly double the usable lifecycle of ASICs.

In one real-world example, Cassandra users can accelerate their databases most effectively by using Intel® FPGA Programmable Acceleration Cards powered by the rENIAC Cassandra accelerator stack. Thanks to FPGAs, heavy data handling can be offloaded from the CPU, increasing data flow and throughput while reducing latency and TCO.

Use case 2: Supercomputing – overcoming power and cooling demands

The fast-growing technologies that are shaping the future — IoT, AI, machine learning, data analytics — are advancing in tandem with the capabilities of HPC and supercomputing. Thanks to FPGAs' modest power requirements as well as modern programming innovations like Intel® oneAPI, their use is becoming more mainstream in the supercomputing world.

By comparison, GPUs are ill-suited for general-purpose supercomputing deployments, owing to their electrical consumption and SIMD programming model. Furthermore, the power and cooling requirements of GPUs become cost-prohibitive across the entire compute portfolio. Next-generation GPUs will require more than 400 watts, and that power consumption is expected to grow, whereas FPGAs need at most 137 watts, and their future power requirements are trending downward.

With their low energy consumption, affordability and programming flexibility, FPGAs are the logical choice for use cases that demand supercomputing performance and fast time to solution.

Use case 3: In-store image recognition analytics

Everyone has experienced that "red flag" moment in the supermarket checkout line when an item refuses to scan correctly. One component of scanner technology involves image recognition, in which an analytics camera matches the scanned bar code against a stored image of the product.
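
For a sense of what that analytics workload looks like in code, here is a minimal inference sketch assuming the OpenVINO 2.0 C++ API; the model path and the device name are placeholders rather than references to any specific deployment, and an FPGA-backed or heterogeneous device plugin would be named at the compile step where one is available.

```cpp
// Product image matching sketch (assumes the OpenVINO 2.0 C++ API;
// "product_matcher.xml" and the "CPU" device string are placeholders).
#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    ov::Core core;

    // Load a classification/matching model exported to OpenVINO IR format.
    std::shared_ptr<ov::Model> model = core.read_model("product_matcher.xml");

    // Compile for the target device; an FPGA or heterogeneous device
    // string would be supplied here where such a plugin is available.
    ov::CompiledModel compiled = core.compile_model(model, "CPU");
    ov::InferRequest request = compiled.create_infer_request();

    // In a real pipeline the camera frame would be copied into this
    // input tensor before running inference.
    ov::Tensor input = request.get_input_tensor();
    std::cout << "Input tensor elements: " << input.get_size() << "\n";

    request.infer();
    ov::Tensor output = request.get_output_tensor();
    std::cout << "Output tensor elements: " << output.get_size() << "\n";
    return 0;
}
```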

Stores with more than a dozen checkout scanners running multiple image recognition analytics at once need significant electricity to support that workload. At the same time, they must be conscious of performance per square inch: retailers don't design their stores to accommodate the multiple racks, power and cooling that GPUs demand in a full-scale data center.

In a head-to-head matchup for this use case, the minimal power requirements of FPGAs once again make them the most suitable accelerator solution.

Watch FPGAs accelerate a world of workloads in diverse ATC labs

It took years for FPGAs to shed their reputation for being difficult to program, but they have now come into their own, thanks to modern programming innovations. When the options are compared side by side in a multifactored suitability matrix, WWT finds that FPGAs are often the superior choice over GPUs and ASICs for compute and data acceleration.

In our Advanced Technology Center (ATC), we are actively testing and demonstrating use case labs ranging from accelerated analytics and database management to cybersecurity and supercomputing. We invite you to schedule a demonstration in the ATC or visit WWT for a personalized workshop, where we can explore the places in your operation that would benefit from the cost-effective flexibility and rugged reliability of next-gen FPGAs.

Don't fear the programming issues of yesteryear — the broad applicability of fast, programmable FPGAs from edge to core to cloud, for compute and data, is undeniable.
