High-Performance Computing (HPC) platforms are evolving toward fewer but more powerful nodes, driven by the growing number of physical cores across multiple sockets and accelerators. The boundary between nodes and networks is starting to blur, with some nodes now containing tens of compute elements and memory subsystems connected via a memory fabric. The immediate consequence is increasing complexity, stemming from ever more intricate architectures (e.g., deep memory hierarchies), novel accelerator designs, and energy constraints. Spurred largely by this trend, hierarchical parallelism is gaining momentum. Rather than avoiding the intrinsic complexity of current and future HPC systems, this approach embraces it by exploiting parallelism at all levels: compute, memory, and network. This workshop focuses on hierarchical parallelism. It aims to bring together application, hardware, and software practitioners proposing new strategies to fully exploit computational hierarchies, and includes examples illustrating their benefits for achieving extreme-scale parallelism.