Drone Landing Identification: An Intel AI Reference Kit Lab
This lab walks you through one of Intel's AI Reference Kits to develop an optimized semantic segmentation solution based on the Visual Geometry Group (VGG)-UNET architecture, aimed at helping drones land safely by identifying and segmenting paved areas. The system uses Intel® oneDNN-optimized TensorFlow to accelerate training and inference on drones equipped with Intel hardware. Additionally, Intel® Neural Compressor is applied to compress the trained segmentation model and further enhance inference speed; a brief quantization sketch follows this entry. Explore the Developer Catalog for information on various use cases.
Advanced Configuration Lab
•34 launches
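As a companion to the compression step mentioned in this entry, here is a minimal, illustrative sketch of post-training INT8 quantization with Intel® Neural Compressor. It assumes the 2.x `quantization.fit` API; the SavedModel path and random calibration data are placeholders, not the reference kit's actual pipeline.

```python
# Hypothetical sketch: post-training quantization of a trained VGG-UNET
# TensorFlow SavedModel with Intel Neural Compressor (2.x API assumed).
import numpy as np
from neural_compressor import PostTrainingQuantConfig, quantization

class CalibDataLoader:
    """Minimal calibration loader yielding (image_batch, label_batch) pairs.
    Replace the random arrays with a handful of real drone images."""
    def __init__(self, batch_size=1, num_batches=20, shape=(256, 256, 3)):
        self.batch_size = batch_size      # attribute Neural Compressor expects
        self.num_batches = num_batches
        self.shape = shape
    def __iter__(self):
        for _ in range(self.num_batches):
            images = np.random.rand(self.batch_size, *self.shape).astype("float32")
            labels = np.zeros((self.batch_size,), dtype="int64")  # unused for PTQ
            yield images, labels

# "vgg_unet_saved_model" is a placeholder path to the trained segmentation model.
q_model = quantization.fit(
    model="vgg_unet_saved_model",
    conf=PostTrainingQuantConfig(approach="static"),
    calib_dataloader=CalibDataLoader(),
)
q_model.save("vgg_unet_int8")  # INT8 model for faster inference
```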
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs with Intel Xeon processors, Intel Gaudi 3 accelerators, Pure Storage FlashBlade//S, and the Milvus vector database. Deployed via Kubernetes or Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection models to accelerate AI training, inference, and search for healthcare, life sciences, and finance workloads.
Advanced Configuration Lab
•4 launches
Virtual Cable Modem Termination Systems (vCMTS)
This learning path examines the key technologies shaping today's cable broadband networks. It covers the fundamentals of DOCSIS infrastructure and RF spectrum splits, explains how networks are evolving with virtual CMTS (vCMTS) and Distributed Access Architecture (DAA), and introduces Precision Time Protocol (PTP). It wraps up with a hands-on lab showcasing a vCMTS deployment on Red Hat OpenShift.
Learning Path
Intel vCMTS on Red Hat OpenShift Lab
Virtual CMTS (vCMTS) revolutionizes bandwidth management by virtualizing DOCSIS processing on x86 servers, paving the way for DOCSIS 4.0. Intel Xeon 6 processors enhance encryption efficiency, while Red Hat OpenShift Container Platform unifies workload management. This lab explores a deployment of vCMTS on OpenShift, showcasing performance insights via Grafana.
Foundations Lab
•33 launches
Person Tracking with Intel's AI Reference Kit
This lab focuses on implementing live person tracking using Intel's OpenVINO™, a toolkit for high-performance deep learning inference. The objective is to read frames from a video sequence, detect people within each frame, assign a unique identifier to each person, and track them as they move across frames. The tracking algorithm used here is Deep SORT, an extension of SORT (Simple Online and Realtime Tracking) that incorporates appearance information along with motion for improved tracking accuracy; a brief detection-loop sketch follows this entry.
Advanced Configuration Lab
•39 launches
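To illustrate the detection half of the workflow described in this entry, here is a hedged sketch using the OpenVINO Runtime Python API. The model name (person-detection-0202), its 512x512 input size, and the SSD-style output layout are assumptions, and the simple ID assignment at the end merely stands in for the lab's Deep SORT tracker.

```python
# Illustrative sketch only: read video frames and run a person detector with
# the OpenVINO Runtime Python API. Model, input size, and output layout are
# assumptions; Deep SORT is represented by a trivial placeholder.
import cv2
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("person-detection-0202.xml")   # hypothetical model path
compiled = core.compile_model(model, "CPU")
output_layer = compiled.output(0)

cap = cv2.VideoCapture("people.mp4")                   # placeholder video file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Resize and transpose to the model's assumed NCHW 512x512 input layout.
    blob = cv2.resize(frame, (512, 512)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    detections = compiled([blob])[output_layer].reshape(-1, 7)
    boxes = []
    for _, _, conf, x0, y0, x1, y1 in detections:      # assumed SSD-style rows
        if conf > 0.5:
            boxes.append((int(x0 * w), int(y0 * h), int(x1 * w), int(y1 * h)))
    # In the lab, Deep SORT associates boxes with track IDs across frames using
    # motion (Kalman filter) plus an appearance embedding; a real tracker would
    # replace this simple enumeration.
    tracks = list(enumerate(boxes))
cap.release()
```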
Introduction to OpenShift AI with Intel and Dell Infrastructure
Red Hat OpenShift AI, formerly known as Red Hat OpenShift Data Science, is a platform designed to streamline the process of building and deploying machine learning (ML) models. It caters to both data scientists and developers by providing a collaborative environment for the entire lifecycle of AI/ML projects, from experimentation to production.
In this lab, you will explore the features of OpenShift AI by building and deploying a fraud detection model; an illustrative training sketch follows this entry. The environment is built on top of Dell PowerEdge R660 servers with 5th Gen Intel Xeon processors.
Foundations Lab
•313 launches
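For orientation only, the sketch below shows the kind of fraud detection model one might train in an OpenShift AI workbench notebook. The synthetic features, random data, and RandomForest choice are placeholders and not necessarily the lab's actual pipeline.

```python
# Illustrative sketch only: a simple fraud-detection classifier trained on
# synthetic transaction data, then saved so it could be uploaded to
# S3-compatible storage and served from an OpenShift AI model server.
import numpy as np
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 5000
# Synthetic transactions: amount, distance from home, and an online-order flag.
X = np.column_stack([
    rng.exponential(scale=80.0, size=n),   # transaction amount
    rng.exponential(scale=10.0, size=n),   # distance from home (km)
    rng.integers(0, 2, size=n),            # online order flag
])
# Label a transaction as fraud more often when it is large, distant, and online.
p_fraud = 1 / (1 + np.exp(-(0.01 * X[:, 0] + 0.05 * X[:, 1] + X[:, 2] - 6)))
y = (rng.random(n) < p_fraud).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

joblib.dump(model, "fraud-model.joblib")   # persist the model for serving
```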
Liqid Composable Disaggregated Infrastructure Lab
The Liqid Composable Disaggregated Infrastructure (CDI) Lab showcases how to create, entirely in software, hardware configurations that would be unfeasible to build physically. Compose bare-metal servers in any number of configurations via software. The lab consists of Dell PowerEdge compute, the Liqid PCIe Gen 4 fabric, Liqid Matrix software, Intel NVMe storage, and NVIDIA GPUs.
Foundations Lab
•25 launches
Components of Compute
A server is a crucial piece of hardware in every organization's data center. Its primary role is to support an organization's web-tier, mid-tier, database, and AI application stacks from a networking, processing, and storage standpoint, regardless of whether the application is deployed on a natively installed operating system, in a virtual machine, or as a cloud-native or edge-native application.
The components that make up this hardware system can vary depending on the specific needs and requirements of the organization. The main components commonly found in a server include the CPU (central processing unit), memory, network interface cards, storage devices, and, in some cases, accelerators such as DPUs (data processing units) and GPUs (graphics processing units).
In this Learning Path, engineers can learn about the latest solutions on the market today and how to integrate them into their own designs to drive their business forward effectively and efficiently.
Learning Path