Oak Ridge National Laboratory, United States of America
Novel scalable scientific algorithms are needed to enable key science applications to exploit the computational power of large-scale systems. These extreme-scale algorithms must hide network and memory latency, achieve a very high degree of computation/communication overlap with minimal communication, and avoid synchronization points. With the advent of Big Data and AI, the need for scalable mathematical methods and algorithms that can handle data- and compute-intensive applications at scale becomes even more pressing. Scientific algorithms for multi-petaflop and exaflop systems must also be fault tolerant and fault resilient, since the probability of faults increases with scale. Finally, with the advent of heterogeneous compute nodes that pair standard processors with GPGPUs, scientific algorithms need to be matched to these architectures to extract the most performance. Key science applications therefore require novel mathematics, mathematical models, and system software that address the scalability and resilience challenges of current- and future-generation extreme-scale HPC systems.
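The computation/communication overlap mentioned above can be illustrated with a minimal sketch. In practice this is done with nonblocking primitives such as MPI_Isend/MPI_Irecv; the version below is a hypothetical stand-in that uses a Python thread to model an in-flight halo exchange while computation on interior points proceeds, waiting on the "communication" only when its result is actually needed. All names (`halo_exchange`, `interior_update`, `overlapped_step`) are illustrative, not from the original text.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def halo_exchange(halo):
    # Stand-in for a nonblocking halo exchange (e.g. MPI_Isend/MPI_Irecv);
    # the sleep models network latency that we want to hide.
    time.sleep(0.05)
    return halo

def interior_update(grid):
    # Compute on interior points that do not depend on halo data,
    # so this work can proceed while the exchange is in flight.
    return [x * 2 for x in grid]

def overlapped_step(grid, halo):
    with ThreadPoolExecutor(max_workers=1) as pool:
        request = pool.submit(halo_exchange, halo)  # start "communication"
        interior = interior_update(grid)            # overlap computation
        halo_new = request.result()                 # synchronize only here
    return interior, halo_new

interior, halo_new = overlapped_step([1, 2, 3], [9, 9])
```

The single late `request.result()` call mirrors the pattern of posting communication early and deferring the wait, which is what removes the latency from the critical path.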