HPC use cases vary widely within an organization, as does the demand for resources. With shifting workflows, budget constraints, and peaks and valleys of utilization, IT and HPC leaders face the difficult job of determining their ideal infrastructure needs so they can enable R&D to work unimpeded.
A hybrid approach to HPC systems holds the potential to deliver high throughput without hurting the bottom line. It also lends itself to a software-first mindset. Having the flexibility and resources of the cloud coupled with the consistency and low cost of on-premises infrastructure lets IT leaders examine their primary objective and determine, for example, whether they need to accelerate time-to-answer or be especially budget conscious. Based on their workflows and objectives, they can make strategic decisions to optimize and build efficiencies. While hybrid can solve many issues, it also introduces complexity to the tech stack that must be addressed to maximize its benefits. These considerations include how scale and core type affect performance and how to achieve the best value.
In this presentation, we will look at years of benchmarking for popular workflows across various core types, consider hyperscale, and explore cost efficiencies for both hardware and software. Additionally, we will discuss a software-first approach to hybrid HPC, looking at on-premises and cloud as a consolidated infrastructure solution to better reach desired outcomes.