Partner POV | How to Speak Like a Data Center Geek: Compute and Storage
Article written and contributed by Equinix.
These days, people are paying more attention to data centers and the role they play in our modern digital society. They're asking questions about how data centers use resources like energy and water, and how they can do so more efficiently. But with so much focus on how data centers work, it's easy to overlook why they exist.
Simply put, data centers exist to host and connect compute and storage hardware. The capabilities they provide help keep that hardware running to its full potential. In turn, digital enterprises rely on this hardware to help them innovate and serve customers better.
Rather than offering compute and storage hardware directly to customers, colocation providers like Equinix help businesses optimize the privately owned hardware they deploy inside our facilities. We can also help them access compute and storage capabilities from service providers in our partner ecosystem.
What our customers do with their hardware is ultimately up to them, and they have to pick the right systems for their unique business requirements. It's important to consider different varieties of hardware and how they support different kinds of workloads. With that in mind, let's define some of the key compute and storage terms commonly heard in the data center industry.
Server
The fundamental hardware component of a data center. Servers contain processors that perform the act of computing: executing programs and performing computations using data. They also include the memory and storage those processors need in order to access and work with data. Each server within a data center has dedicated power, cooling, and network connectivity.
Data center servers are mounted horizontally in racks or cabinets to save space and ensure ease of access for technicians. In turn, cages provide a secure storage area for racks and cabinets. In a colocation facility, a customer can acquire a private cage that contains only their hardware and is only accessible to their authorized visitors.
Memory
The temporary workspace where a computer executes programs and processes data. Also known as random access memory (RAM). Unlike storage, memory is volatile, meaning that programs and data only stay there long enough for the system to complete a given task. The amount of RAM in a computer system impacts its performance, but there are ways to make better use of the available memory.
Storage
In contrast to memory, storage is a non-volatile environment for data and programs. This means that anything moved into storage is intended to stay there until a user intentionally deletes it.
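The memory/storage distinction can be sketched in a few lines of Python. The file path below is purely illustrative; the point is that a value held in a variable lives in volatile RAM, while a value written to disk survives until someone deletes it:

```python
import os
import tempfile

# In memory (RAM): this value is volatile. It vanishes when the process exits.
result = 21 * 2

# In storage: writing to disk persists the value until it's deliberately deleted.
path = os.path.join(tempfile.gettempdir(), "result.txt")
with open(path, "w") as f:
    f.write(str(result))

# A later process (or this one) can read it back from non-volatile storage.
with open(path) as f:
    restored = int(f.read())

print(restored)  # 42
```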
While cloud storage services are available, using a private storage environment may help businesses maintain privacy and control over their data. They can still access cloud services when the situation calls for it, but since they maintain copies of their data outside the cloud, they lower the risk of getting locked into one particular cloud provider.
CPU
Central processing unit. The primary type of processor found in any computer.
CPUs have been the workhorse of IT for decades. Although today's chips are far more powerful, the fundamental design hasn't changed much: a CPU core still executes its instructions sequentially, one after another, much like processors from the early days of computing.
Enterprises are increasingly using CPUs alongside co-processors such as GPUs, particularly for use cases that require a lot of processing power.
GPU
Graphics processing unit. Unlike CPUs, GPUs perform parallel processing, executing many computations simultaneously across thousands of cores.
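The sequential-versus-parallel contrast can be simulated in ordinary Python. This sketch uses threads to stand in for parallel hardware; real GPUs parallelize across thousands of cores via frameworks like CUDA, but the principle, many independent tasks running at once instead of one after another, is the same:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(n):
    """Simulate one independent unit of work (e.g., shading one pixel)."""
    time.sleep(0.1)
    return n * n

items = range(8)

# CPU-style sequential execution: one task at a time (~0.8s total here).
start = time.perf_counter()
sequential = [task(n) for n in items]
seq_time = time.perf_counter() - start

# GPU-style parallel execution, simulated with threads: all tasks at once (~0.1s).
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(task, items))
par_time = time.perf_counter() - start

assert sequential == parallel  # same results, very different wall-clock time
print(f"sequential: {seq_time:.2f}s, parallel: {par_time:.2f}s")
```

This speedup only materializes when the tasks are independent of one another, which is exactly why graphics rendering and neural network math map so well onto GPUs.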
GPUs were originally developed to render video game graphics, but they've since been applied to other use cases, including training large language models (LLMs). That said, GPUs aren't right for every AI workload. Inference typically requires less processing power than training, so dedicating high-end GPUs to inference can be excessive and inefficient.
AI accelerators
Any processor that helps perform AI and machine learning tasks faster. This includes GPUs, but also emerging processors like language processing units (LPUs), neural processing units (NPUs), and tensor processing units (TPUs).
Each of these has a different specialty, so businesses may choose a combination of AI accelerators. For instance, NPUs support neural network workloads. Since they're energy efficient, they're a good fit for AI in edge devices.
Cloud
Hardware or software capabilities offered as a service, without the customer having to own or manage the underlying physical infrastructure. In some cases, acquiring cloud compute and storage services can be quicker and easier than deploying infrastructure on-premises. New providers, commonly called neoclouds, have emerged to provide the specialized compute infrastructure needed for enterprise AI.
HPC
High-performance computing. Deploying many advanced computers in an interconnected cluster to work concurrently on the same task.
HPC clusters can solve complex, data-intensive problems faster than individual computers. HPC has existed for decades now, but recently, incorporating GPUs into HPC systems has further amplified their abilities. Much of the recent progress we've seen in AI and machine learning has been thanks to GPU-equipped HPC systems.
Edge computing
Any computing that happens at the digital edge: the outer limits of an enterprise network, where the digital world meets the physical world. Deploying at the edge is about getting close to end users and data sources to reduce latency and ensure a better user experience.
Edge computing is a concept, not a specific technology. However, there are certain devices that are well-suited for life at the edge: those that can be deployed quickly and efficiently and operate in places with limited power and connectivity. Examples include sensors inside manufacturing facilities, connected vehicles, and even the smartphone in your pocket.
Quantum computing
An emerging form of computing based on quantum bits, or "qubits." While traditional bits represent data in binary form, as either 0 or 1, a qubit can exist in a superposition of states, representing both 0 and 1 at the same time.
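The superposition idea can be illustrated with a toy NumPy sketch. This is not how quantum hardware works, but it shows the underlying math: a qubit's state is a pair of amplitudes, and the squared magnitude of each amplitude gives the probability of measuring 0 or 1:

```python
import numpy as np

# A qubit state is a 2-component vector of amplitudes (alpha, beta)
# satisfying |alpha|^2 + |beta|^2 = 1.
# Here: an equal superposition of 0 and 1.
qubit = np.array([1.0, 1.0]) / np.sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0, p1 = np.abs(qubit) ** 2
print(p0, p1)  # 0.5 0.5 (a fair coin flip between measuring 0 and 1)
```

A classical bit, by contrast, would put all of its "probability" on one component: state 0 is the vector (1, 0), and state 1 is (0, 1).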
Quantum computers can solve certain complex computational problems in minutes that would take a classical computer thousands of years. However, they're not practical for every use case and should be considered complementary to classical computers, rather than a direct replacement.
Enterprises now face the challenge of how to integrate quantum capabilities alongside their existing IT infrastructure. Quantum Computing as a Service (QCaaS) providers have emerged to help them achieve this.
Now that you're familiar with the compute and storage hardware that powers our digital world, learn more about how the right data center can take that hardware to the next level.