RIKEN Center for Computational Science (R-CCS), Japan
Our HPL-AI benchmark run on the supercomputer Fugaku was awarded first place at the 55th TOP500. The effective performance was 1.42 EFlop/s, the world's first result to exceed the exascale barrier in a floating-point arithmetic benchmark. Because HPL-AI is brand new and has no reference code for large systems, several challenges emerge in a large-scale benchmark from a low-precision numerical viewpoint. It is not sufficient simply to replace FP64 operations with their FP32 or FP16 counterparts. At the very least, we need careful numerical analysis of lower-precision arithmetic and optimization techniques for large-scale computing on systems such as Fugaku. This study presents technical analyses of and insights into the accuracy issues, the implementation, and the performance improvements, and reports on the exascale benchmark run on Fugaku.
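To illustrate why replacing FP64 operations with lower-precision ones is not sufficient by itself, the sketch below shows the basic idea behind mixed-precision linear solvers of the kind HPL-AI targets: factor and solve in low precision, then recover FP64-level accuracy by iterative refinement with residuals computed in FP64. This is an illustrative sketch only (FP32 via NumPy, a plain dense solve, and a fixed iteration count are assumptions); the actual HPL-AI implementation uses FP16 factorization and a GMRES-based refinement at scale.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=5):
    """Illustrative mixed-precision iterative refinement.

    Factor/solve in low precision (FP32 here), then refine the
    solution using residuals computed in full FP64 precision.
    """
    # Low-precision copy of the matrix (HPL-AI would use FP16).
    A32 = A.astype(np.float32)
    # Initial low-precision solve, promoted back to FP64.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        # Residual computed in FP64: this is what recovers accuracy.
        r = b - A @ x
        # Correction solved again in low precision.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For a well-conditioned system, a few refinement steps are typically enough to reach FP64-level accuracy even though every solve is performed in low precision; a naive single FP32 solve would stall at roughly single-precision accuracy.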