Primer Series: Field Programmable Gate Arrays (FPGAs)
This was originally published in June 2020
Have you ever sat in a meeting, and the conversation turns to a technology with which you are unfamiliar? Suddenly, a bunch of acronyms are being thrown around. You have no idea what they mean, while everyone else is nodding their heads and seems to know precisely what is being discussed.
We've all been there, and to help our valued customers, we've decided to write a series of 'primer articles' to give the reader essential information on various products and technologies. This article is going to cover the basics of Field Programmable Gate Arrays, or FPGAs.
Workflow acceleration is an increasingly important topic of conversation these days. While a general-purpose chip such as a Central Processing Unit (CPU) or Graphics Processing Unit (GPU) can run nearly any code, modern use cases involve repetitive, compute-intensive functions that are slow and inefficient to execute in software.
Hardware acceleration is a better way forward. Rather than force-feeding an ill-fitting workflow to a general-purpose chip through software, the chip hardware is tailored to that workflow's exact needs. This creates a heterogeneous computing environment where workflows are executed on best-fit hardware, removing bottlenecks and creating a positively compounding effect for performance and efficiency.
There are many different types of hardware accelerators on the market and each has benefits and drawbacks to consider. This article aims to clear up the mystery around Field Programmable Gate Arrays (FPGAs), as they are a unique type of integrated circuit with inherent benefits for this application.
To understand what makes an FPGA special, some knowledge of chip manufacturing is required.
At the highest level, a processor is the component of a computer that executes instructions, and its most elementary building block is the logic gate. Logic gates are the circuitry that performs operations. The logic gate designs, the interconnections between those gates, and a chip's input/output paths cannot be modified once the chip is manufactured. This restriction creates an inverse relationship between flexibility and efficiency, evidenced by a market full of chips that are optimized for only one of those qualities.
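To make the idea concrete, here is a minimal sketch in Python of how fixed logic gates compose into an operation. The gate and function names are ours, chosen for illustration; in silicon, these gates and their wiring are etched permanently, and only the input values can change.

```python
# Illustrative only: modeling fixed logic gates in software.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """A 1-bit full adder built purely from logic gates.

    On a manufactured chip, this arrangement of gates and the
    wires between them are permanent; only the inputs vary.
    """
    s1 = XOR(a, b)
    total = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return total, carry_out

# Adding 1 + 1 with no carry-in yields sum bit 0, carry bit 1 (binary 10).
print(full_adder(1, 1, 0))  # (0, 1)
```

Chaining such adders bit by bit is how hardware performs arithmetic, which is why the gate layout fixed at manufacture determines what a chip can do efficiently.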
Application-Specific Integrated Circuits (ASICs) are the fastest, most efficient and least flexible chip option. They are designed for workflows that are static in nature and narrow in scope. Cisco uses ASICs for operations like low-level switching inside their devices, as an example.
CPUs and GPUs are the flexible general-purpose option, designed around supporting a broad array of operations. While CPUs and GPUs are slower and less efficient than an ASIC, higher flexibility affords them a larger share of the data center. An important distinction of GPUs is they are optimized for executing operations in parallel and are more efficient than CPUs for certain workflows.
Today's computational needs are at an inflection point. Some workflows are constantly evolving, others have massive compute requirements for a small subset of operations, and still others chase peak performance and efficiency. The FPGA occupies the space between ASICs and CPU/GPUs, providing a unique solution to these problems.
FPGAs' claim to fame is their ability to be reconfigured after manufacturing, hence the term "field-programmable." An "array" of logic gates, memory stores and input/output wires can be quickly configured and interconnected to perform any given operation in hardware as efficiently as possible.
Imagine the concept of a workflow in terms of drawing a picture, where the chips are the writing utensil choices. An ASIC can be viewed as a printed photo (exact solution for one use case), and a CPU/GPU as an etch-a-sketch (solves more use cases less exactly). An FPGA is the deluxe box of erasable colored pencils: it allows for an exact hardware design while remaining flexible. Modern software development methods have enabled FPGAs to become drop-in solutions to workflows in need of hardware acceleration.
The following are common types of workflows that benefit from FPGA hardware acceleration, where a GPU is too inefficient but the use case doesn't justify creating a new ASIC.
- Data Analytics – FPGAs placed as inline accelerators between databases and their clients enable higher performance at lower latencies.
- AI and Machine Learning – FPGAs programmed with deeply pipelined logic greatly increase server throughput and reduce total cost of ownership (TCO).
- Risk Management – FPGAs can return results on financial model backtesting workflows two to eight times faster than conventional architectures.
No matter where your organization is in its progression to a next-generation software-defined infrastructure (SDI), WWT has resources to help. As we continue this primer series, look for additional articles in the coming months related to other world-class technologies and solutions.
If you're interested in getting your hands on the technology, schedule some time in WWT's FPGA labs hosted in our Advanced Technology Center (ATC).
- rENIAC FPGA Cassandra Acceleration Lab – Intel FPGA-based database proxy lab for accelerating Cassandra queries.
- Swarm64 FPGA Analytics Lab – Intel FPGA-based data analytics lab for accelerating PostgreSQL analytic queries and data insertion performance.
- CTAccel CIP Image Processing Acceleration – Intel FPGA-based image processing lab for accelerating processing functions and workflows.
- Azure Stack Edge Hands-On Lab – Intelligent Edge lab featuring Intel FPGAs for accelerating video inferencing workflows prior to uploading to the Azure cloud.
- Levyx Risk Analytics Acceleration Framework – Intel FPGA-based Apache Spark lab for accelerating financial risk analysis and backtesting models.
Looking for something else? Please let us know.