By the survey closing date of August 30, 1996, completed questionnaires had been received from 451 of the 499 academic institutions in the survey sample. This represents a 90.4 percent academic response rate, which includes all Top 100 institutions. In addition, 16 of the 18 FFRDCs provided responses; data for the remaining two were hand-estimated. The overall response rate for all surveyed institutions (academic and FFRDCs) was 90.3 percent (467 of 517 institutions).
The questionnaire asks respondents to report the number of person-hours required to complete the survey form and provides a contact person at NSF to whom comments about the response burden should be directed. At the end of the FY 1995 cycle, the average number of hours was calculated for those institutions that indicated any response burden. Doctorate-granting institutions reported an average of 21.8 burden hours, compared with 21.6 hours in FY 1994 and 21.0 hours in FY 1993. Master's-granting institutions reported an average of 7.7 burden hours in both FY 1995 and FY 1994, compared with 8.1 hours in FY 1993. Institutions that grant a bachelor's degree or below reported 5.3 burden hours in FY 1995, 4.3 hours in FY 1994, and 5.2 hours in FY 1993.
In order to provide totals of all academic R&D expenditures, it was necessary first to develop estimates for the approximately 10 percent of the survey population that did not respond to the survey.
Data imputation is an automated procedure used to estimate data for totally and partially nonrespondent institutions. Imputation involves calculating inflator/deflator factors for certain institution classes (determined by highest degree granted and type of control) from fully responding institutions for three key variables: total R&D expenditures, federally financed R&D expenditures, and total research equipment expenditures. The imputation factors are applied to the previous year's key variable values for each nonrespondent institution to derive a current year estimate. These factors, when applied to institutions in each class, reflect the average annual growth or decline in expenditures for reporting institutions in that class. The key variables are then distributed among the various subtotal and detailed fields using the same relative percentages that were last reported by that institution. If no previous percentages are available for an institution, the summary percentages for the institution's class are used. Imputation was performed for institutions that did not submit responses to key items on the questionnaire.
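The core of this procedure, a class-level growth factor applied to each nonrespondent's prior-year key-variable value and then spread across detail fields by last-reported shares, can be sketched as follows. The dollar figures, class composition, and field names are invented for illustration; the actual production system is not described in this document.

```python
# Hypothetical sketch of the class-level imputation described above.
# All figures and field names are illustrative assumptions, not NSF data.

def imputation_factor(current_totals, prior_totals):
    """Inflator/deflator for one institution class: ratio of current-
    to prior-year totals across fully responding institutions."""
    return sum(current_totals) / sum(prior_totals)

def impute(prior_value, factor):
    """Current-year estimate of a key variable for a nonrespondent,
    derived from its previous year's reported (or imputed) value."""
    return prior_value * factor

def distribute(total, shares):
    """Spread an imputed key variable across subtotal and detail
    fields using the institution's most recently reported relative
    percentages (or the class summary percentages if none exist)."""
    return {field: total * share for field, share in shares.items()}

# Two fully responding institutions in a class ($ thousands, invented):
factor = imputation_factor([105.0, 110.0], [100.0, 100.0])  # 1.075

# A nonrespondent in the same class that reported 200.0 last year:
estimate = impute(200.0, factor)

# Distributed by its last-reported source-of-funds percentages:
detail = distribute(estimate, {"federal": 0.6, "all_other": 0.4})
```

Because the factor is a ratio of class totals, it reflects the average growth or decline among reporting institutions in that class, exactly as described above.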
Imputation rates for all surveyed data cells were calculated for all universities and colleges, and for various institution classes determined by highest degree granted and type of control.
Computer science received the lowest imputation rate, 0.9 percent, of all major academic S&E fields. The highest imputation rate was 5.1 percent for environmental sciences. For the sources of funding category at academic institutions, the lowest imputation rate was 1.2 percent (for Federal Government) and the highest was 5.6 percent (for all other sources).
A significant number of institutions in the survey population are intermittent respondents; they provide data one year, do not respond in one or more subsequent years, and then provide data again. Data for the years in which no response was received were imputed, as described in the previous section. Although the imputation algorithm accurately reflects national trends, it cannot account for reporting anomalies at individual institutions. For this reason, after current year imputation, a separate retroimputation for FY 1980-94 data was performed.
For each institution, key variables for items 1 through 3 that were formerly imputed were compared with subsequent submissions to determine whether the imputed data accurately represented the growth patterns shown by the reported data. Retroimputation was applied when the imputed data were not consistent with the reported data. If data were reported for FY 1992 and FY 1995 but not for the intervening years, for example, the difference between the reported figures for each item total was calculated and these amounts were then linearly interpolated across the intervening years. The new figures were spread across disciplines or sources of support on the basis of the most recent reporting pattern. These procedures result in more consistent reporting trends for individual institutions but have little effect upon aggregated figures reflecting national totals.
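The linear interpolation step in retroimputation can be sketched as below. The years and dollar figures are invented for illustration; only the interpolation logic follows the description above.

```python
# Hypothetical sketch of the retroimputation interpolation described
# above. Years and dollar figures are invented for illustration.

def retroimpute(start_year, start_value, end_year, end_value):
    """Linearly interpolate a key-variable total across a gap between
    two reported years, replacing the earlier trend-based imputations
    with values consistent with the institution's own reports."""
    span = end_year - start_year
    step = (end_value - start_value) / span
    return {start_year + i: start_value + i * step for i in range(span + 1)}

# Reported FY 1992 and FY 1995 totals with no response in between:
series = retroimpute(1992, 100.0, 1995, 160.0)
# FY 1993 and FY 1994 are filled in at 120.0 and 140.0.
```

Each interpolated total would then be spread across disciplines or sources of support using the institution's most recent reporting pattern, as the text describes.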