Software development teams face increasing challenges in optimizing performance for diverse and demanding workloads, where a mix of acceleration architectures might offer an ideal solution. When creating applications that aim to exploit the acceleration capabilities of CPUs, GPUs, FPGAs and other architectures, teams often must spend time rewriting base code for each acceleration option, or else settle for less-performant accelerators that can share common code.

As new technologies emerge that could benefit certain workloads, taking advantage of them means rewriting applications against different APIs or libraries and then testing and validating the new code. Organizations that choose not to make that investment can lose out on the performance gains those technologies offer.

To maximize flexibility and speed time to value, developers can benefit from a programming model that supports a wider scope of acceleration solutions today and will continue to do so as future technologies emerge. Just such an approach is gaining increasingly wide acceptance: oneAPI.

Introducing a new era of accelerated computing: open, flexible oneAPI 

For those unfamiliar, oneAPI is a standards-based collection of tools and libraries that lets developers build and innovate across multiple hardware architectures, writing code once rather than writing to each specific hardware vendor's APIs.

oneAPI isn't a product but rather a way of simplifying application development across multiple architectures. In doing so, oneAPI provides the vendor-agnostic flexibility needed to take advantage of today's technologies as well as tomorrow's emerging solutions. It enables legacy code integration, speeds application performance, raises productivity, and accommodates new generations of innovation.

When employing oneAPI, developers enjoy the freedom to choose the best hardware or software architecture for their solution without major code refactoring – saving countless hours and expense. A growing number of participants in the global oneAPI developer community collaborate on the core elements of its specification, as well as compatible oneAPI implementations across the ecosystem. 

Many AI developers who operate at the framework level (TensorFlow, PyTorch, scikit-learn and others) are abstracted from the underlying devices and may never see or touch oneAPI. Intel developers, however, use oneAPI to plug Intel devices into these frameworks quickly and performantly. Except for a few short lines, the TensorFlow or PyTorch code for a CPU, a GPU or Habana Gaudi appears virtually identical, as the sketch below illustrates.
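The following PyTorch sketch illustrates that point. It runs as-is on a CPU; the non-CPU device strings are assumptions that hold only when the matching vendor extension is installed (Intel GPUs are commonly exposed as "xpu" via the intel_extension_for_pytorch package, and Habana Gaudi as "hpu" via habana_frameworks.torch).

```python
import torch

# One training step. Only the device string changes between targets; the
# "xpu" (Intel GPU) and "hpu" (Habana Gaudi) strings assume the corresponding
# vendor extension packages are installed and imported.
device = "cpu"  # e.g. "xpu" or "hpu" on other hardware

model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

loss = torch.nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"one training step on {device}, loss = {loss.item():.4f}")
```

Everything other than the device string (and, on other hardware, an extra import or two) stays identical, which is the practical payoff of the framework-level abstraction described above.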

Promoting open-standards flexibility with Intel® oneAPI toolkits, communities and more

We're among oneAPI's enthusiastic supporters, fully endorsing the industry-wide initiative to apply standards-based oneAPI solutions to artificial intelligence (AI), machine learning (ML), deep learning (DL) and other demanding workloads, maximizing performance while designing for the future.

Our technology ally Intel has long recognized the importance of flexibility and the value of open source solutions in letting customers take advantage of whatever technologies best support their most demanding workloads. Backing that commitment, Intel provides an array of free oneAPI developer resources: a base toolkit with compilers, programming tools and performance libraries, plus add-on toolkits for specialized workloads.

For example, its libraries offer APIs for data analytics, neural networks, video processing and more. These resources are interoperable with existing programming models and code bases, supporting virtually any kind of application development involving different processors and accelerators and freeing developers to write code once for any accelerator while capturing the performance advantages of heterogeneous parallelism. The short example below shows what that interoperability can look like in practice.
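As one illustration, the sketch below assumes the scikit-learn-intelex package (Intel's extension backed by the oneAPI Data Analytics Library, oneDAL) is installed and uses its patch_sklearn() entry point; the scikit-learn modeling code itself is left unchanged.

```python
# Reroute supported scikit-learn estimators to oneDAL-backed implementations.
# Assumes the scikit-learn-intelex package is installed; call patch_sklearn()
# before importing anything from sklearn.
from sklearnex import patch_sklearn
patch_sklearn()

# Existing application code stays exactly as it was written.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100_000, centers=8, n_features=16, random_state=0)
model = KMeans(n_clusters=8, random_state=0).fit(X)
print("inertia:", model.inertia_)
```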

Beyond the Intel® oneAPI toolkits, Intel sponsors the Intel® oneAPI Innovators program to nurture standout developers who apply oneAPI in novel ways that speed software development, integrate legacy code, and accelerate time to market.

A common misconception about oneAPI as it pertains to Intel is that it only works with Intel-branded accelerators. Not true; in fact, that would be the opposite of how this open-standards approach works. oneAPI libraries like oneDNN offer a single programming model across architectures and give the ISV ecosystem the flexibility and freedom to select the ones that work best for their workloads today and tomorrow, while Intel developers have done the validation and support for Intel devices.
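One small, concrete sign of that validation work: stock PyTorch builds already bundle oneDNN (formerly known as MKL-DNN) for CPU operators, exposed through the public torch.backends.mkldnn flags. The sketch below simply checks for it; no oneAPI-specific code appears in the application.

```python
import torch

# Stock PyTorch exposes its bundled oneDNN (formerly MKL-DNN) backend here.
print("oneDNN available:", torch.backends.mkldnn.is_available())
print("oneDNN enabled:  ", torch.backends.mkldnn.enabled)

# Ordinary CPU operators such as this convolution dispatch to oneDNN-backed
# kernels transparently when the backend is available.
x = torch.randn(8, 3, 224, 224)
conv = torch.nn.Conv2d(3, 16, kernel_size=3)
print(conv(x).shape)
```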

oneAPI can future-proof technology investments today and tomorrow

Change is inevitable and fresh innovations constantly emerge, but that doesn't necessarily mean applications must be rewritten to take advantage of them. Organizations can employ open, standards-based oneAPI as a way to extend the usefulness of legacy code and explore new solutions without the time and expense of rewriting. With oneAPI, developers can try new accelerator alternatives from a single code base, easing the transition to next-generation solutions and choosing the ones that best support their workloads.

WWT customers can experience the value of oneAPI for themselves in our Advanced Technology Center, a collaborative ecosystem for research and development where customers, WWT staffers and associates come to design, build, demonstrate and evaluate promising new solutions. This multi-campus incubator for IT innovation consists of four separate data centers for testing and validation. There, customers can engage with WWT data scientists and development consultants for a deep dive into oneAPI and its potential impact on the future of software creation.

Contact us for a personalized, hands-on sandbox within WWT's Composable Open Technology Environment to see how oneAPI can help developers work more efficiently: writing code once to utilize an array of accelerator solutions, extending the value of legacy code, and adopting tomorrow's acceleration solutions more easily with shorter time to value.
