
Deep learning is a branch of artificial intelligence (AI) and machine learning that imitates the way humans gain knowledge. Deep learning uses multiple layers of artificial neural networks to deliver greater accuracy in tasks such as language translation, object detection and speech recognition.

Traditional machine learning typically relies on linear algorithms, whereas deep learning stacks algorithms in increasingly abstract and complex hierarchies. It learns automatically from data like images, text and video without relying on hand-written rules or manually engineered features. As a result, it's commonly used to automate predictive analytics, recognize digital images and video, and understand natural language.

Deep learning has driven critical advances in AI. It powers Google DeepMind's AlphaGo program, which defeated human world champions at the board game Go, as well as intelligent voice assistants and self-driving cars.

NVIDIA's graphics processing unit (GPU)-accelerated deep learning frameworks enable teams of data scientists and researchers to speed up the deep learning process. For example, deep learning training that would take weeks can now be completed in hours or days. These NVIDIA GPU-accelerated deep learning platforms deliver high performance and low latency for even the most data-intensive deep neural networks.

How does deep learning work?

Deep learning uses layered hierarchies of algorithms to help a machine understand data to an acceptable level of accuracy.

Traditional machine learning processes are supervised by human programmers, who must explicitly tell the program which features to look for. This process, known as feature extraction, makes the machine dependent on the programmer's ability to define the feature set accurately.

Deep learning enables the program to build the feature set itself without a programmer's supervision, which is faster and often more accurate. The program learns from training data, such as a collection of cat images. In this example, the program creates a predictive model that assumes any image with pixel patterns resembling four legs and a tail is a cat. With each pass over the data, the machine refines its predictive model, becoming more accurate and more complex. Eventually, the program can accurately pick out images of cats from a pool of millions.
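As a rough, hypothetical sketch of this process, the PyTorch example below trains a tiny convolutional network to separate cat images from non-cat images. The random tensors stand in for a real labeled photo collection, and the layer sizes and training settings are illustrative assumptions rather than a recommended recipe.

```python
# Minimal sketch: a tiny convolutional network that learns its own features
# from labeled example images. The random tensors below stand in for real
# cat / not-cat photos; this is illustrative only.
import torch
from torch import nn

model = nn.Sequential(                              # layers learn features automatically
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                               # two classes: cat / not cat
)

images = torch.randn(64, 3, 64, 64)                 # stand-in for 64 RGB training images
labels = torch.randint(0, 2, (64,))                 # stand-in for their labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                              # each pass refines the predictive model
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                                 # adjust weights based on errors
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Each pass through the loop nudges the network's weights in the direction that reduces its classification errors, which is the iterative refinement described above.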

To achieve this, deep learning programs require huge amounts of processing power and training data. This is especially important as the growth of technologies like the Internet of Things (IoT) creates vast volumes of unlabeled and unstructured information.

Deep learning relies on various methods that help strengthen its models, including the following (a combined code sketch after this list illustrates several of them):

Neural networks: Deep learning models are underpinned by artificial neural networks, an advanced form of machine learning algorithm. For this reason, deep learning is often known as deep neural learning or deep neural networking. Neural networks feed data through the model and let it work out whether it has interpreted the data correctly or made the right decision about a data element. To do this, neural networks use trial and error and rely on massive amounts of data to train the deep learning model.

Learning rate decay: The learning rate controls how much the deep learning model's weights change in response to errors. If the learning rate is too large, training can become unstable and settle on suboptimal weights. If it's too small, training can take too long or get stuck. Learning rate decay, also known as learning rate annealing or adaptive learning rates, gradually reduces the learning rate during training to improve performance and shorten training time.

Transfer learning: This process refines a previously trained model through an interface to the preexisting network. Users feed the network new data and, once adjustments are made, the model can perform more specific tasks. Transfer learning requires less data than other approaches and can drastically reduce computation time.

Training from scratch: This involves collecting a large, labeled data set and configuring a network architecture that can learn the features and the model. Training from scratch is vital for new applications and those with a vast number of output categories, but it's less common because it requires enormous amounts of data and can take weeks to complete.

Dropout: Dropout addresses overfitting, which occurs when a model fits its training data so closely that it performs poorly on new data. By randomly dropping units from the network during training, dropout helps improve neural network performance on tasks like supervised learning in computational biology, document classification and speech recognition, to name a few.
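The sketch below ties three of these methods together, assuming PyTorch and torchvision are installed and the ImageNet weights can be downloaded: it reuses a pretrained network (transfer learning), adds a dropout layer to the new classifier head, and applies a step-wise learning rate decay schedule. The five-class task and the small random batch are stand-in assumptions for illustration.

```python
# Illustrative sketch combining transfer learning, dropout and learning rate
# decay in PyTorch. Assumes torchvision can download ImageNet weights.
import torch
from torch import nn
from torchvision import models

# Transfer learning: start from a network pretrained on ImageNet, freeze its
# feature extractor and replace only the final classifier layer.
backbone = models.resnet18(weights="IMAGENET1K_V1")
for param in backbone.parameters():
    param.requires_grad = False

in_features = backbone.fc.in_features
backbone.fc = nn.Sequential(
    nn.Dropout(p=0.5),               # dropout: randomly zero units to curb overfitting
    nn.Linear(in_features, 5),       # hypothetical task with 5 output classes
)

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01, momentum=0.9)

# Learning rate decay: shrink the learning rate by 10x every 5 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

loss_fn = nn.CrossEntropyLoss()
images = torch.randn(8, 3, 224, 224)     # stand-in for a small labeled batch
labels = torch.randint(0, 5, (8,))

for epoch in range(15):
    backbone.train()
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                     # apply the decay schedule once per epoch
```

Because only the small classifier head is trained, this kind of fine-tuning needs far less data and compute than training the whole network from scratch.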

NVIDIA deep learning examples

Deep learning models process data in a way loosely modeled on the human brain, which makes them well suited to tasks that people typically perform. As a result, common deep learning use cases include conversational AI, image recognition, natural language processing (NLP) and speech recognition tools. These tools increasingly appear in applications like language translation services and self-driving cars.

Deep learning use cases also include big data analytics applications, such as medical diagnosis, network security and stock market trading signals. 

What is the NVIDIA AI Platform for Deep Learning?

The NVIDIA AI Platform for Deep Learning enables you to develop AI applications through GPU-accelerated deep learning frameworks. The platform offers the flexibility required to design, build and train custom deep neural networks, with interfaces to common programming languages like C/C++ and Python. Deep learning frameworks like PyTorch and TensorFlow are GPU-accelerated, which means your data scientists and researchers can start working with them without writing any GPU code.
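As a rough illustration of what GPU acceleration without GPU programming looks like in practice, the PyTorch sketch below moves a model and a batch of data onto an NVIDIA GPU when one is available and falls back to the CPU otherwise; the network and batch sizes are arbitrary assumptions.

```python
# Minimal sketch of framework-level GPU acceleration in PyTorch: the same
# Python code runs on a CPU or an NVIDIA GPU, with no CUDA kernels written
# by the user. Model and batch sizes here are illustrative assumptions.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
batch = torch.randn(256, 1024, device=device)    # data placed on the same device

with torch.no_grad():
    logits = model(batch)                        # executes on the GPU when available
print(f"ran on: {logits.device}")
```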

The NVIDIA deep learning GPU platform is compatible with software development kit (SDK) libraries and application programming interfaces (APIs). NVIDIA's Deep Learning SDK provides high-performance libraries with building-block APIs that help developers implement training and inference in their apps. Developers can also begin development on a desktop, scale it up in the cloud and deploy it to edge devices with minimal code changes.

NVIDIA deep learning examples also include optimized software stacks that accelerate training and inference throughout the deep learning workflow.

What is the NVIDIA Deep Learning Institute?

The NVIDIA Deep Learning Institute (DLI) is a resource that provides training for data scientists, developers and researchers. It enables people to gain hands-on experience in accelerated computing and AI through self-paced online courses. 

DLI provides technical expertise to help students learn accelerated data science and deep learning applications for various industries, including healthcare, manufacturing and robotics. It relies on industry-standard tools and frameworks and is based on real-world examples. 

How WWT can help with NVIDIA deep learning

As a global elite partner of NVIDIA, WWT has vast expertise in deep learning deployments across a wide range of industries. We help you gain faster time-to-value from investments in deep learning and optimize your implementation of NVIDIA technology.

Discover how to fully realize the benefits of NVIDIA with our deep learning optimization workshop.
