NetApp AIPod Mini Environment
Explore the NetApp AIPod Mini integrated with Intel® AI for Enterprise RAG. This lab automates the deployment of a secure, scalable ChatQ&A pipeline on Kubernetes. Leverage Intel® Xeon® and Gaudi® accelerators to transform enterprise data into insights, featuring one-click deployment, hardware optimization, and comprehensive observability for production-ready AI workloads.
Guided Demonstration Lab
•Intermediate
•5 launches
Components of Compute
A server is a crucial piece of hardware in every organization's data center. Its primary role is to support the organization's web-tier, mid-tier, database, and AI application stack from a networking, processing, and storage standpoint, regardless of whether the application is deployed on a natively installed operating system, in a virtual machine, or as a cloud-native or edge-native application.
The components that make up a server can vary depending on the organization's specific needs and requirements. The main components commonly found in a server include the CPU (central processing unit), memory, network interface cards (NICs), storage devices, and, in some cases, accelerators such as DPUs (data processing units) and GPUs (graphics processing units).
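The components listed above can also be inspected from software. As a minimal illustration (not part of the Learning Path itself), this standard-library Python sketch reports a machine's CPU and storage details; the root volume path `/` assumes a Unix-like host:

```python
# Illustrative sketch: probing a server's core compute resources
# using only the Python standard library.
import os
import platform
import shutil

cpu_count = os.cpu_count()                  # logical CPUs visible to the OS
arch = platform.machine()                   # CPU architecture, e.g. x86_64
total, used, free = shutil.disk_usage("/")  # storage on the root volume

print(f"architecture : {arch}")
print(f"logical CPUs : {cpu_count}")
print(f"root storage : {total // 2**30} GiB total, {free // 2**30} GiB free")
```

Memory, NIC, and accelerator details are exposed through OS-specific interfaces (for example, `/proc` and `sysfs` on Linux) rather than the standard library.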
In this Learning Path, engineers can learn about the latest solutions on the market today and how to integrate them into their infrastructure to drive their business forward most effectively and efficiently.
Learning Path
•Fundamentals
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and the Milvus vector database. Deployed via Kubernetes or Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection models to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•Advanced
•12 launches
Virtual Cable Modem Termination Systems (vCMTS)
This learning path examines the key technologies shaping today's cable broadband networks. It covers the basics of DOCSIS infrastructure and RF spectrum splits, explains how networks are evolving with virtual CMTS (vCMTS) and Distributed Access Architecture (DAA), and introduces the basics of Precision Time Protocol (PTP). It wraps up with a hands-on lab showcasing a vCMTS deployment on Red Hat OpenShift.
Learning Path
•Fundamentals
Intel vCMTS on Red Hat OpenShift Lab
Virtual CMTS (vCMTS) revolutionizes bandwidth management by virtualizing DOCSIS processing on x86 servers, paving the way for DOCSIS 4.0. Intel Xeon 6 processors enhance encryption efficiency, while Red Hat OpenShift Container Platform unifies workload management. This lab explores a vCMTS deployment on OpenShift, showcasing performance insights via Grafana.
Foundations Lab
•Fundamentals
•44 launches
Introduction into OpenShift AI with Intel and Dell Infrastructure
Red Hat OpenShift AI, formerly known as Red Hat OpenShift Data Science, is a platform designed to streamline the process of building and deploying machine learning (ML) models. It caters to both data scientists and developers by providing a collaborative environment for the entire lifecycle of AI/ML projects, from experimentation to production.
In this lab, you will explore the features of OpenShift AI by building and deploying a fraud detection model. The environment is built on top of Dell PowerEdge R660 servers with 5th Gen Intel Xeon processors.
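The lab's own notebooks and dataset live inside OpenShift AI; as a rough, standalone illustration of the kind of model involved, here is a minimal fraud-detection sketch with scikit-learn. The synthetic features and the labeling rule are invented for this example and do not reflect the lab's data:

```python
# Minimal fraud-detection sketch on synthetic transaction data.
# Feature names and the fraud rule below are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic features: amount, hour of day, distance from home (scaled 0-1)
X = rng.random((1000, 3))
# Invented rule: flag a transaction when amount and distance are both high
y = ((X[:, 0] > 0.8) & (X[:, 2] > 0.8)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

acc = accuracy_score(y_test, model.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

In the lab itself, a model like this is trained in an OpenShift AI workbench and then served from the platform rather than run as a local script.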
Foundations Lab
•Fundamentals
•337 launches