Scalable Parallelism in the Extreme (SPX)
|Anindya Banerjee|email@example.com|(703) 292-7885|
|Vipin Chaudhary|firstname.lastname@example.org|(703) 292-2254|
|Tracy Kimbrel|email@example.com|(703) 292-8910|
|Sandip Kundu|firstname.lastname@example.org|(703) 292-8950|
|Mimi McClure|email@example.com|(703) 292-5197|
|Yuanyuan Yang|firstname.lastname@example.org|(703) 292-8067|
Important Information for Proposers
A revised version of the NSF Proposal & Award Policies & Procedures Guide (PAPPG) (NSF 19-1) is effective for proposals submitted, or due, on or after February 25, 2019. Please be advised that, depending on the specified due date, the guidelines contained in NSF 19-1 may apply to proposals submitted in response to this funding opportunity.
Computing systems have undergone a fundamental transformation from the single-core processor devices of the turn of the century to today's ubiquitous and networked devices with multicore/many-core processors, along with warehouse-scale computing via the cloud. At the same time, semiconductor technology is facing fundamental physical limits, and single-processor performance has plateaued. As a result, performance improvements can no longer be achieved through improved processor technologies alone. In recognition of this obstacle, the recent National Strategic Computing Initiative (NSCI) encourages collaborative efforts to develop, "over the next 15 years, a viable path forward for future high-performance computing (HPC) systems even after the limits of current semiconductor technology are reached (the 'post-Moore's Law era')."
Exploiting parallelism is one of the most promising directions to meet these performance demands. While parallelism has already been studied extensively and is a reality in today's computing technology, the expected scale of future systems is unprecedented. At extreme scales, factors that have small impacts today can become highly significant. For example, even short serial program sections can prove destructive to performance. Heterogeneity of processing elements [Central Processing Units (CPUs), Graphics Processing Units (GPUs), and accelerators] and their memory hierarchies poses significant management challenges. High system complexity may lead to unacceptable latencies and short mean times between failures, even in systems built from highly reliable components. Furthermore, the interconnectedness of large-scale distributed architectures poses an enormous challenge in understanding, and providing guarantees on, performance behavior. These are just four of many issues arising in the new era of parallel computing that is upon us.
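The cost of even a short serial section can be made concrete with Amdahl's Law, a standard result not cited in the solicitation itself. The sketch below, using a hypothetical `amdahl_speedup` helper, shows why a serial fraction of just 1% caps achievable speedup near 100x no matter how many processors are added:

```python
def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Amdahl's Law: speedup = 1 / (s + (1 - s) / p),
    where s is the serial fraction and p the processor count."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# With a 1% serial section, adding processors quickly stops helping:
for p in (100, 10_000, 1_000_000):
    print(f"{p:>9} processors -> {amdahl_speedup(0.01, p):.1f}x speedup")
# The speedup approaches, but never reaches, 1/0.01 = 100x.
```

At extreme scale, the serial fraction, not the processor count, becomes the binding constraint, which is why the program highlights even short serial sections as a first-order concern.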
The Scalable Parallelism in the Extreme (SPX) program aims to support research addressing the challenges of increasing performance in this modern era of parallel computing. This will require a collaborative effort among researchers in multiple areas, from services and applications down to micro-architecture. SPX encompasses all five NSCI Strategic Objectives, including supporting foundational research toward architecture and software approaches that drive performance improvements in the post-Moore's Law era; development and deployment of programmable, scalable, and reusable platforms in the national HPC and scientific cyberinfrastructure ecosystem; increased coherence of data analytic computing and modeling and simulation; and capable extreme-scale computing. Coordination with industrial efforts that pursue related goals is encouraged.