
Section C. Technical Notes, Survey Methodology



The Survey of Research and Development Funding and Performance by Nonprofit Organizations: 1996 and 1997 collected information on the science and engineering (S&E) research and development (R&D) activities of nonprofit organizations (NPOs). It collected data both from NPOs that fund S&E R&D and from those that perform R&D themselves. The National Science Foundation (NSF) carried out a similar study of nonprofit organizations in 1973. This section provides a brief overview of the methodology used for the 1996-97 survey. The complete methodology report is available on the World Wide Web at http://www.nsf.gov/statistics/srvyrdnonprofit/srdfmeth.htm

Overview

The Survey of Research and Development Funding and Performance by Nonprofit Organizations: 1996 and 1997 collected information on S&E R&D activities of nonprofit organizations. It collected data both from NPOs that fund S&E R&D and from those that perform R&D themselves. The overall target population for the survey was nonprofit organizations that funded or performed S&E R&D of $250,000 or more during fiscal year (FY) 1996.

Key Variables

The following key variables were included on the survey instruments:

Target population and sample frame

The overall target population for the survey was nonprofit organizations that fund or perform S&E R&D. In its 1973 study, NSF had used a $100,000 cutoff to determine which NPOs were eligible for the main data collection. Taking into account inflation over the intervening 25 years, NSF adopted a $250,000 cutoff for the current survey. An NPO was considered eligible for the main questionnaire if it reported spending $250,000 or more on R&D activities during 1996. Because there is no extant list of nonprofit organizations involved in S&E R&D activities, the first task for the new study was to compile a comprehensive list of potentially eligible NPOs. First, samples of potential performers and funders of S&E R&D were selected and sent a short screening questionnaire to determine their eligibility for the main study. NSF then sought to collect more detailed information from those NPOs that were, according to their screener responses, eligible for the main study. The screener was also used to determine whether each NPO was a "funder" or a "performer." NPOs that both performed R&D in-house and funded R&D at other organizations received the performer questionnaire, which also asked about extramural R&D funding.
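
The screener routing rule can be summarized in a minimal Python sketch. The function and field names are hypothetical, and the sketch assumes that in-house and extramural R&D dollars are reported separately on the screener; it is an illustration, not the survey's actual processing code.

    # Hypothetical sketch of the screener routing rule; names are illustrative,
    # not taken from the actual survey instruments.

    ELIGIBILITY_CUTOFF = 250_000  # $250,000 in reported 1996 R&D spending

    def route_screener(rd_performed_inhouse: float, rd_funded_extramural: float) -> str:
        """Return which main questionnaire, if any, an NPO should receive."""
        if rd_performed_inhouse + rd_funded_extramural < ELIGIBILITY_CUTOFF:
            return "ineligible for main study"
        if rd_performed_inhouse > 0:
            # NPOs that both perform and fund R&D receive the performer
            # questionnaire, which also asks about extramural funding.
            return "performer questionnaire"
        return "funder questionnaire"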

Two frames were needed for the study, one consisting of possible research funders (independent nonprofit organizations funding S&E R&D) and the other consisting of possible research performers (independent nonprofit organizations conducting science and engineering research and/or development). Both frames were to exclude organizations already eligible for related NSF surveys, e.g., colleges and universities, industry, and government agencies. As in previous nonprofit R&D surveys conducted by NSF, the most serious problems were those generated by the lack of a comprehensive mailing list, the dissimilarity among the types of organizations included within the sector, and shifts of organizations into and out of the sector. An additional problem arose from the complex relationships that exist between organizations within and outside the sector. Various types and degrees of affiliation and cooperation, especially in cases where research institutes maintained close working relationships with universities or hospitals, made it difficult to determine whether a particular organization should be considered independent or not.

For nonprofit organizations that fund S&E R&D, NSF used a single source: the Foundation Directory, published by the Foundation Center in New York and compiled from a public-use Internal Revenue Service database. The sample of research funders was restricted to NPOs listed in the Foundation Directory with assets of at least $2 million or annual giving of at least $50,000.

The sample of research performers was based on three lists of highly likely R&D performers plus two different samples from an Internal Revenue Service (IRS) public-use file of approximately 600,000 NPOs. These two IRS file samples were (1) a probability-proportional-to-size sample of likely R&D performers, i.e., those with a National Taxonomy of Exempt Entities code indicating an interest in science, engineering, or technology, and (2) a probability-proportional-to-size sample of all other NPOs in the IRS file. The National Center for Charitable Statistics (NCCS) trimmed the IRS file to fewer than 185,000 NPOs after removing religious organizations and NPOs with gross receipts of less than $25,000. Because the sample was designed with probability proportional to size and relied on the largest NPO database in existence, coverage at the institutional level is believed to be at or close to 100 percent. There may be minor coverage problems for nonprofit organizations that were established just prior to the 1996 survey year. The three specialized lists were:
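
The probability-proportional-to-size selection used for the two IRS file samples can be illustrated with a minimal systematic PPS sketch. The function name and the generic size measure (for example, gross receipts from the IRS file) are assumptions for illustration; the actual sample design is documented in the full methodology report.

    import random

    def pps_systematic_sample(units, sizes, n, rng=random.Random(0)):
        """Select n units with probability roughly proportional to size,
        using systematic sampling from a single random start."""
        total = float(sum(sizes))
        interval = total / n                          # sampling interval
        start = rng.uniform(0, interval)              # random start
        points = [start + k * interval for k in range(n)]

        selected, cumulative, idx = [], 0.0, 0
        for unit, size in zip(units, sizes):
            cumulative += size
            while idx < n and points[idx] <= cumulative:
                selected.append(unit)                 # this unit covers the next point
                idx += 1
        # A unit larger than the interval can be hit more than once; in practice
        # such units would be taken with certainty.
        return selected

Under this scheme a unit's chance of selection is roughly n times its share of the total size measure; the inverse of that probability is the base weight (W1) described under the nonresponse adjustments below.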

Data collection techniques and nonresponse adjustments top

Screening questionnaires were sent to 9,112 NPOs (of an estimated survey universe of approximately 184,000 possible R&D performers and 2,000 possible R&D funders) to determine their eligibility for either of the two R&D questionnaires. Survey questionnaires were mailed in 1998 and 1999 to 1,131 organizations that had indicated on the screening form that they performed or funded R&D worth at least $250,000 in 1996. Through August 1999, NSF mailed follow-up questionnaires to nonrespondent organizations; from August through October 1999, NSF attempted to contact all nonrespondent organizations by telephone. During the course of the data-collection phase of the survey, 126 organizations that neither funded nor performed R&D were deleted from the survey universe. Thus, as of the closeout date of December 15, 1999, the sample comprised 1,005 organizations, including 722 performers and 283 funders.

Of these 1,005 organizations, 352, or 35 percent, returned usable replies. (During post-collection data cleaning, the 352 respondents were reduced to 343.) The final response rate for all NPOs (funders and performers) was 41 percent, after adjusting for nonrespondents that were assumed to be ineligible because they were out of business or could not be traced anywhere in the United States. The separately adjusted response rate was almost 46 percent for funders (110 usable responses) and almost 40 percent for performers (233 usable responses).
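
The relationship between the unadjusted and adjusted response rates can be made explicit. The number of nonrespondents assumed to be ineligible is not reported in this section, so it is left as an input in the illustrative sketch below.

    def response_rates(usable, fielded, assumed_ineligible_nonrespondents):
        """Unadjusted and eligibility-adjusted response rates.

        usable  -- usable replies received
        fielded -- organizations remaining in the sample at closeout
        assumed_ineligible_nonrespondents -- nonrespondents judged to be out of
            business or untraceable (count not reported here; illustrative input)
        """
        unadjusted = usable / fielded
        adjusted = usable / (fielded - assumed_ineligible_nonrespondents)
        return unadjusted, adjusted

    # With the published figures, 352 usable replies from 1,005 organizations give
    # the unadjusted rate of about 35 percent; removing nonrespondents assumed
    # ineligible from the denominator yields the reported 41 percent.
    print(round(response_rates(352, 1005, 0)[0], 2))   # 0.35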

The sample was designed to be representative of U.S. R&D-performing and R&D-funding nonprofit organizations. The 41-percent response rate was lower than anticipated, and sampling errors in individual data cells, especially the smaller ones, are quite high. The national estimates for total nonprofit R&D and for major items, such as basic research, applied research, and development, are sufficiently reliable for use. Smaller data items are reported in the tables so that readers may judge the large components in context, but the standard errors for items below $1 billion are quite high, making comparisons of small cells inadvisable.

Of the NPOs surveyed for 1996 and 1997, 59 percent did not respond, including 54 percent of the funders and 60 percent of the performers. To produce national estimates and correct for unit nonresponse, funders and performers were weighted separately in three stages. The first stage compensated for the different selection probabilities of the sample NPOs: the initial weight (W1) for a given NPO was simply the inverse of its probability of selection into the sample. In the second stage, this base weight was adjusted to compensate for different rates of nonresponse to the screening effort. This nonresponse adjustment was calculated as the inverse of the screener response rate within eight weighting cells; the adjusted weight (W2) was the product of the initial weight and the inverse of the response rate for the NPO's cell. The final weight incorporated an adjustment for nonresponse to the main survey.
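
A minimal sketch of the three-stage weight for a responding NPO follows. It assumes that the final-stage adjustment, like the screener adjustment, is the inverse of a response rate computed within a weighting cell; the exact form of the main-survey adjustment is described in the full methodology report.

    def final_weight(p_selection, screener_rate_in_cell, main_rate_in_cell):
        """Three-stage weight for a responding NPO (illustrative).

        p_selection           -- probability of selection into the sample
        screener_rate_in_cell -- screener response rate in the NPO's weighting cell
        main_rate_in_cell     -- main-survey response rate in the NPO's cell (assumed form)
        """
        w1 = 1.0 / p_selection            # base weight: inverse selection probability
        w2 = w1 / screener_rate_in_cell   # adjust for screener nonresponse
        w3 = w2 / main_rate_in_cell       # adjust for main-survey nonresponse
        return w3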

The survey data were weighted to national estimates. The weights were calculated to reflect: (1) different selection probabilities for the different sample NPOs; (2) adjustments to compensate for different rates of nonresponse to the screening effort; and (3) adjustments for nonresponse to the main survey. Item nonresponse was addressed through logical and hot deck imputation.

In an attempt to complete all items, staff telephoned NPOs that had left some data cells blank on their questionnaires. Remaining item nonresponse was then handled through logical and hot deck imputation: staff first resolved all missing values that could be determined logically from other reported information and then used hot deck imputation for the remaining cells.
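
A minimal sketch of the hot deck step is shown below. The imputation-class variables (the generic class_vars argument) and the random choice of donor are assumptions for illustration; the actual imputation cells and donor rules are described in the full methodology report.

    import random
    from collections import defaultdict

    def hot_deck_impute(records, item, class_vars, rng=random.Random(0)):
        """Fill missing values of `item` by copying the reported value of a
        randomly chosen respondent ("donor") in the same imputation class."""
        donors = defaultdict(list)
        for r in records:
            if r.get(item) is not None:
                donors[tuple(r[v] for v in class_vars)].append(r[item])

        for r in records:
            if r.get(item) is None:
                key = tuple(r[v] for v in class_vars)
                if donors[key]:                       # impute only if a donor exists
                    r[item] = rng.choice(donors[key])
        return records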

Trend Data

The previous survey of nonprofit organizations' R&D covered performers only and collected FY 1973 data. Selected data from FY 1973 are reproduced in this report with the 1996 and 1997 data. Data users should read all technical notes and footnotes in the tables to understand how the 1973 coverage differs from that of the 1996 and 1997 survey.

