| NSF Org: | CCF Division of Computing and Communication Foundations |
| Recipient: | |
| Initial Amendment Date: | May 26, 2017 |
| Latest Amendment Date: | July 25, 2017 |
| Award Number: | 1704834 |
| Award Instrument: | Continuing Grant |
| Program Manager: | Danella Zhao, dzhao@nsf.gov, (703) 292-4434, CCF Division of Computing and Communication Foundations, CSE Direct For Computer & Info Scie & Enginr |
| Start Date: | June 1, 2017 |
| End Date: | September 30, 2023 (Estimated) |
| Total Intended Award Amount: | $1,000,000.00 |
| Total Awarded Amount to Date: | $1,000,000.00 |
| Funds Obligated to Date: | |
| History of Investigator: | |
| Recipient Sponsored Research Office: | 1033 Massachusetts Ave, 5th Fl, Cambridge, MA, US 02138-5369, (617) 495-5501 |
| Sponsor Congressional District: | |
| Primary Place of Performance: | 33 Oxford Street, Cambridge, MA, US 02138-2901 |
| Primary Place of Performance Congressional District: | |
| Unique Entity Identifier (UEI): | |
| Parent UEI: | |
| NSF Program(s): | Software & Hardware Foundation |
| Primary Program Source: | |
| Program Reference Code(s): | |
| Program Element Code(s): | |
| Award Agency Code: | 4900 |
| Fund Agency Code: | 4900 |
| Assistance Listing Number(s): | 47.070 |
ABSTRACT
This project explores the research foundations necessary to build a universal language translator on a portable computing device, enabling secure, private use without reliance on cloud servers. Transformative developments in both machine learning and computer hardware design have made this exciting challenge feasible. The project will nurture a true bidirectional co-design process between researchers in both fields. The broader impacts of the project include: 1) the practical applications of widely available language translation technology, and 2) the training of graduate engineers with specialization in machine learning as well as hardware and circuit design, skills in broad demand in US industry.
The problem of developing hardware to fit deep learning models is not simply one of fitting current machine learning models onto current circuit technology, as the models are much too large, too slow, and too energy-hungry. This project will need to develop novel machine learning techniques that take these factors into account. Machine learning researchers mostly optimize for accuracy; however, the project goal will require considering trade-offs among model size, speed, and computation. Conversely, the hardware design will have to consider and exploit the unique properties of the neural models, such as high tolerance to certain types of noise, repeated computational structure, and non-linear interactions. The research approach includes three major areas for interaction: model compression, approximation in architecture, and training for unreliable hardware. Succeeding in these goals will be necessary to build a successful on-device system.
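To make the model-compression idea concrete, the sketch below shows one standard technique in this space: symmetric post-training int8 quantization of a layer's weights. This is an illustrative example of the kind of size/accuracy trade-off the abstract describes, not the project's actual method; the function names and the toy weight matrix are assumptions for the sketch.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 codes."""
    return q.astype(np.float32) * scale

# A toy weight matrix standing in for one layer of a translation model.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, at a bounded reconstruction error.
print(q.nbytes, w.nbytes)                         # 4096 16384
print(float(np.max(np.abs(w - w_hat))) <= scale)  # True
```

The rounding error per weight is at most half the scale factor, which is the kind of noise a co-designed model can be trained to tolerate; on the hardware side, int8 arithmetic is also substantially cheaper in energy and area than float32.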