Exascale computing initiatives are expected to enable breakthroughs across multiple scientific disciplines. Increasingly, these systems utilize cloud technologies, enabling complex and distributed workflows that improve not only scientific productivity but also the accessibility of resources to a wide range of communities. Such an integrated and seamlessly orchestrated system of supercomputing and cloud technologies is indispensable for experimental facilities, which have been experiencing an unprecedented rate of data growth. While a subset of high-performance computing (HPC) services has been available from public cloud environments, petascale and beyond data and computing capabilities are still largely provisioned within HPC data centers using traditional, bare-metal provisioning services to ensure performance, scaling and cost effectiveness. This workshop aims to bring together experts and practitioners from academia, national laboratories and industry to discuss technologies, use cases and best practices in order to set a vision and direction for leveraging high-performance, extreme-scale computing and on-demand cloud ecosystems.