Trusted Computing (TC)
See program guidelines for contact information.
Important Information for Proposers
A revised version of the NSF Proposal & Award Policies & Procedures Guide (PAPPG) (NSF 18-1) is effective for proposals submitted, or due, on or after January 29, 2018. Please be advised that, depending on the specified due date, the guidelines contained in NSF 18-1 may apply to proposals submitted in response to this funding opportunity.
This program seeks to establish a sound scientific foundation and technological basis for managing privacy and security in a world linked through computing and communication technology. This research is necessary to build the secure and reliable systems required for a highly interconnected, information-technology-enabled society. The program supports innovative research in all aspects of secure, reliable information systems, including methods for assessing the trustworthiness of systems. Some specific areas in which research is needed include:
- Component technologies: specification, design, development, testing, and verification methods that provide quantifiable assurance that specified properties are met. Ideally, such technologies should be flexible, so that they can be applied in accordance with the degree of trustworthiness required and the resources available. Methods are needed to identify particular components that provide a good basis on which to construct trustworthy systems.
- Composition methods: assembling components into subsystems and systems with known and quantifiable trustworthiness. Identifying and minimizing the security assumptions made in a given security design. Effectively exploiting large numbers of untrustworthy computing platforms to create secure or trustworthy multiparty computations.
- Methods for maintaining trustworthiness as systems adapt and evolve.
- Methods for improving human understanding of critical system behavior and control.
- Methods for assessing tradeoffs in trustworthy system design, for example, between security and performance.
- Techniques for modeling, analyzing, and predicting trust properties of systems and components.
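The multiparty computation direction noted above can be made concrete with a minimal additive secret-sharing sketch: each party splits its private input into random shares, the parties locally add the shares they hold, and the sum of the partial results equals the sum of all inputs without any single party learning another's input. This is an illustrative toy, not a protocol endorsed by the program; the modulus, party count, and helper names are all assumptions of the sketch (a real system would use cryptographic randomness and an agreed field).

```python
import random

P = 2**61 - 1  # illustrative prime modulus for the shared field

def share(secret, n):
    """Split `secret` into n additive shares mod P; any n-1 shares reveal nothing."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares by modular addition."""
    return sum(shares) % P

# Three parties, each with a private input.
inputs = [42, 7, 13]
n = len(inputs)

# Party i distributes shares of its input, one share per party.
all_shares = [share(x, n) for x in inputs]

# Party j collects the j-th share from every party and adds them locally.
held = [[all_shares[i][j] for i in range(n)] for j in range(n)]
partial = [sum(h) % P for h in held]

# Recombining the local partial sums yields the total, with no party
# ever seeing another party's raw input.
assert reconstruct(partial) == sum(inputs) % P
```

The design point the bullet gestures at is visible even in this toy: trustworthiness of the joint result does not require trusting any individual platform, only that not all of them collude.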