This program has been archived.
Division of Mathematical Sciences
Stimulating Collaborative Advances Leveraging Expertise in the Mathematical and Scientific Foundations of Deep Learning (SCALE MoDL)
|Huixia Wang|email@example.com|(703) 292-2279|
|Aranya Chakrabortty|firstname.lastname@example.org|(703) 292-8113|
|Wei Ding|email@example.com|(703) 292-8017|
|Funda Ergun|firstname.lastname@example.org|(703) 292-8910|
|Eun Heui Kim|email@example.com|(703) 292-2091|
|Tracy Kimbrel|firstname.lastname@example.org|(703) 292-7924|
|Phillip A. Regalia|email@example.com|(703) 292-2981|
|Christopher W. Stark|firstname.lastname@example.org|(703) 292-4869|
|Zhengdao Wang|email@example.com|(703) 292-7823|
|Kenneth C. Whang|firstname.lastname@example.org|(703) 292-5149|
|Joseph M. Whitmeyer|email@example.com|(703) 292-7808|
General inquiries may be addressed to firstname.lastname@example.org
Important Information for Proposers
A revised version of the NSF Proposal & Award Policies & Procedures Guide (PAPPG) (NSF 22-1) is effective for proposals submitted, or due, on or after October 4, 2021. Please be advised that, depending on the specified due date, the guidelines contained in NSF 22-1 may apply to proposals submitted in response to this funding opportunity.
Deep learning has met with impressive empirical success that has fueled fundamental scientific discoveries and transformed numerous application domains of artificial intelligence. However, our incomplete theoretical understanding of the field limits access to deep learning technology for a wider range of participants. Confronting the gaps in our understanding of the mechanisms underlying the success of deep learning should help overcome its limitations and expand its applicability. The National Science Foundation Directorates for Mathematical and Physical Sciences (MPS), Computer and Information Science and Engineering (CISE), Engineering (ENG), and Social, Behavioral and Economic Sciences (SBE) will jointly sponsor new research collaborations consisting of mathematicians, statisticians, electrical engineers, and computer scientists. Research activities should focus on explicit topics involving some of the most challenging theoretical questions in the general area of the Mathematical and Scientific Foundations of Deep Learning. Each collaboration should conduct training through research involvement of recent doctoral degree recipients, graduate students, and/or undergraduate students from across this multi-disciplinary spectrum. This program complements NSF's National Artificial Intelligence Research Institutes and Harnessing the Data Revolution programs by supporting collaborative research focused on the mathematical and scientific foundations of deep learning through a different modality and at a different scale.
Although proposals responding to this solicitation must be submitted through the Directorate for Mathematical and Physical Sciences, Division of Mathematical Sciences (MPS/DMS), once received, the proposals will be managed by a cross-disciplinary team of NSF Program Directors. PI teams must collectively possess appropriate expertise in three disciplines: computer science, electrical engineering, and mathematics/statistics. Each project must clearly demonstrate substantial collaborative contributions from members of the respective communities; projects that increase diversity and broaden participation are encouraged.
A wide range of scientific themes on the theoretical foundations of deep learning may be addressed in these proposals. Likely topics include, but are not limited to: geometric, topological, Bayesian, or game-theoretic formulations; analysis approaches exploiting optimal transport theory, optimization theory, approximation theory, information theory, dynamical systems, partial differential equations, or mean field theory; application-inspired viewpoints exploring efficient training with small data sets, adversarial learning, and closing the decision-action loop; and foundational work on understanding success metrics, privacy safeguards, causal inference, and algorithmic fairness.