The U.S. Department of Energy's Exascale Computing Project (ECP) represents a broad effort to enable mission-critical science and engineering on next-generation HPC systems. As part of this effort, ECP includes 24 application development teams spanning a wide range of science and engineering domains. Each team is tasked, in part, with efficiently porting a flagship code to the forthcoming Aurora and Frontier U.S. exascale systems. These multi-GPU systems represent a significant departure from the hardware trajectory of previous generations, so non-trivial code restructuring is expected. What is perhaps surprising, though, is the extent to which porting has evolved into far more general hardware-driven algorithmic adaptation. More fundamental changes, such as exposing new axes of parallelism, increasing computational intensity, prioritizing new physical models, and replacing time averaging with ensemble averaging, have accompanied standard porting tasks such as adjusting data layouts and loop ordering. In this talk I present a number of examples of how the particular choice of exascale hardware is having a surprisingly deep impact on our approach to simulation.