In such an environment, institutions and researchers might be tempted to go it alone, clinging to the "not invented here" mantra and controlling every aspect of the program, right down to the infrastructure required to execute it. On the surface, this approach appears to minimize the odds of failure. But is that true?

Building world-class research environments

When higher education institutions work directly with infrastructure vendors to build world-class research environments, they extend evaluation timelines, narrow their view of the market, and may receive performance feedback on the integrated suite of products too late in the process. This raises the long-run risk of failure, and failure is not an option.

Researchers possess extensive subject matter expertise that's absolutely necessary, but not sufficient, for program execution. Achieving positive outcomes also requires an intimate understanding of facilitating technologies, like artificial intelligence, machine learning, and fusion computing. It hinges on designing and implementing the right infrastructure of instrumentation, servers and other hardware components, and software.

Even if researchers possess knowledge of critical technologies, do they grasp the infrastructure vendor landscape, which evolves constantly? Are they fully aware of the functional and performance capabilities and dependencies of individual products? Or the prerequisites and impacts of integrating these solutions?

Try before you buy

Infrastructure must not become an afterthought. Poor choices jeopardize success and increase the total cost of research. To make sustainable infrastructure decisions, researchers need a community: a network that extends beyond government and industry to include higher education institutions and their partnerships with neutral, unbiased investigators. This ecosystem should also extend to facilities and data centers that help assess and select the technologies and products that best align with the research program's functional, cost, and scheduling drivers.

Leveraging an independent, third-party testbed outfitted with solutions from infrastructure vendors across the spectrum, staffed by personnel with expertise in those products and the underlying technologies, makes far more sense. The testbed helps account for the wide range of unknown unknowns. Researchers can use it to rapidly assess individual components and develop proof-of-concept configurations of integrated, enterprise-grade products. For example, the testbed can bring together Hewlett Packard Enterprise's Ezmeral container orchestration and data fabric platform to feed information to machine learning and deep learning systems that convert it into advanced analytics and intelligence, bringing enterprise solutions to high-performance computing scale.

In the testbed world, there's no penalty for failure. Finding out something does not work as anticipated can prove extremely valuable.

"Fail forward, fail fast, and make corrections in flight," said Jeff Hill, U.S. Sales Director, Hewlett Packard Enterprise. "By doing so, researchers can identify and select the infrastructure configuration that best supports the research program's objectives — turning ideas into practice before making a significant investment of budget and time, thereby mitigating the ultimate risk."

With this approach and a rapid-innovation mindset, researchers truly can move forward with confidence and realize world-class breakthroughs that reward partnerships between higher education institutions and industry. That benefits all of us.