NetApp AIPod Mini Environment
Explore the NetApp AIPod Mini integrated with Intel® AI for Enterprise RAG. This lab automates the deployment of a secure, scalable ChatQ&A pipeline on Kubernetes. Leverage Intel® Xeon® processors and Intel® Gaudi® accelerators to turn enterprise data into insights, with one-click deployment, hardware optimization, and comprehensive observability for production-ready AI workloads.
Guided Demonstration Lab • Intermediate • 5 launches
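To make the ChatQ&A pipeline concrete, here is a minimal sketch of its "retrieve" step: rank stored documents by similarity to the query embedding, then ground the prompt in the top match. The documents, embeddings, and function names below are illustrative stand-ins, not the lab's actual code; a real deployment would call an embedding model and a vector database instead.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical pre-computed document embeddings (normally produced by an embedding model).
docs = {
    "Q3 revenue grew 12% year over year.": [0.9, 0.1, 0.0],
    "The VPN requires multi-factor authentication.": [0.1, 0.8, 0.3],
    "Cafeteria hours are 8am to 3pm.": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

def build_prompt(query_text, query_vec):
    """Assemble a prompt grounded in the retrieved context."""
    context = "\n".join(retrieve(query_vec, k=1))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query_text}"

prompt = build_prompt("How did revenue change?", [0.95, 0.05, 0.0])
```

The grounded prompt is then handed to the LLM serving layer (running on Gaudi accelerators in this lab) for generation.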
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs Intel Xeon CPUs and Gaudi 3 accelerators with Pure Storage FlashBlade//S and the Milvus vector database. Deployed via Kubernetes or Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection models to accelerate AI training, inference, and search for healthcare, life sciences, and finance workloads.
Advanced Configuration Lab • Advanced • 12 launches
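Before documents can be searched in a vector database such as Milvus, they are typically split into overlapping chunks so that facts straddling a boundary still appear whole in at least one chunk. A minimal sketch of that ingestion step, with illustrative sizes (real pipelines tune chunk size to the embedding model's context window):

```python
def chunk(text, size=200, overlap=50):
    """Split text into fixed-size character chunks that overlap by `overlap`
    characters, so content near a boundary is preserved in two chunks."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

pieces = chunk("A" * 500, size=200, overlap=50)
```

Each chunk would then be embedded and inserted into a Milvus collection alongside its source metadata.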
Intel vCMTS on Red Hat OpenShift Lab
Virtual CMTS (vCMTS) revolutionizes bandwidth management by virtualizing DOCSIS processing on x86 servers, paving the way for DOCSIS 4.0. Intel Xeon 6 processors enhance encryption efficiency, while Red Hat OpenShift Container Platform unifies workload management. This lab explores a deployment of vCMTS on OpenShift, showcasing performance insights via Grafana.
Foundations Lab • Fundamentals • 47 launches
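The Grafana dashboards in the vCMTS lab boil down to aggregations over per-channel counters. As an illustration only (the channel names, byte counts, and sampling interval below are made up, not the lab's actual metrics), throughput can be derived from two counter snapshots like this:

```python
# Hypothetical downstream byte counters sampled one second apart, per channel.
prev = {"ds-ch0": 1_000_000, "ds-ch1": 2_500_000}
curr = {"ds-ch0": 1_800_000, "ds-ch1": 3_100_000}

def throughput_mbps(before, after, interval_s=1.0):
    """Per-channel throughput in Mbit/s from two byte-counter snapshots."""
    return {ch: (after[ch] - before[ch]) * 8 / interval_s / 1e6 for ch in after}

rates = throughput_mbps(prev, curr)
total_mbps = sum(rates.values())
```

In the lab itself, equivalent rate calculations are performed by the monitoring stack and visualized in Grafana panels rather than computed by hand.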