National Science Foundation | National Center for Science and Engineering Statistics
Incentive Experiments: NSF Experiences

Introduction

Government surveys have suffered declining response rates in recent years. Low response rates not only reduce effective sample size but also may cause nonresponse bias. Various measures have been taken to maintain high response rates; among them, incentives have been effective in increasing response rates and data quality (see, for example, Singer et al. 1999; Church 1993; Shettle and Mooney 1999). This working paper summarizes recent incentive studies conducted in connection with the surveys of the Scientists and Engineers Statistical Data System (SESTAT) of the National Science Foundation (NSF).

SESTAT is a database of the employment, education, and demographic characteristics of a sample of scientists and engineers in the United States. The SESTAT integrated database is constructed from data collected in three separate surveys: the National Survey of College Graduates (NSCG), representing all individuals in the United States at the time of the decennial census with a bachelor's degree or higher; the National Survey of Recent College Graduates (NSRCG), representing persons who earned a bachelor's or master's degree in a science, engineering, or health (SEH) field from a U.S. institution within the previous 2–3 years; and the Survey of Doctorate Recipients (SDR), representing persons in the general U.S. population who have earned a doctorate in an SEH field from a U.S. institution. As a whole, SESTAT provides information on the entire U.S. population of scientists and engineers who hold at least a bachelor's degree.

Like many other federal government surveys, NSF's SESTAT surveys have suffered from declining response rates in recent years. Historically, response rates for the SESTAT surveys have been around 80%, but achieving that level has become much more challenging in recent survey cycles. For example, over the 10-year period from 1993 to 2003, the overall unweighted response rates for the NSCG, NSRCG, and SDR declined from 80% to 63%, from 85% to 67%, and from 87% to 79%, respectively.[1]

Low response rates reduce effective sample sizes and may bias estimates. To find effective measures to counter this ongoing decline, NSF has conducted a series of incentive studies for the SESTAT surveys. Incentives have been used in surveys for many years, and research has consistently shown that they are effective at increasing response rates. NSF's incentive experiments tested a wide range of treatments: nonmonetary versus monetary incentives, differing amounts of monetary incentives, different payment methods, different timing of incentives, and so on. This working paper summarizes evaluations of the impact of these experiments on response rates and data quality. On the whole, the results indicate that monetary incentives have positive impacts on both.

The following sections are organized by topic rather than by the order in which the experiments were conducted. Section 1 examines the effectiveness of nonmonetary and monetary incentives in promoting survey response rates and data quality; the positive impacts of monetary incentives on both measures justified further studies of such incentives. Section 2 examines the impact of alternative ways of offering monetary incentives: a prepaid check, which imposes an obligation on the sampled person to respond, versus a postpaid check, which prevents people from cashing the check without responding to the survey. To limit costs, monetary incentives are often offered toward the end of data collection to a small number of sampled persons who have not yet responded; such individuals are either potential or known refusals and are the most difficult to convert to respondents. Section 3 examines the effectiveness of monetary incentives for refusal conversion. Section 4 evaluates how the timing of the incentive offer, early versus late in data collection, affects its impact. Section 5 examines the response rates across all monetary incentive experiments to determine whether larger incentives result in higher response rates. Section 6 looks at a possible "side effect" of offering monetary incentives to members of a panel survey: are panel members who received an incentive in a previous wave less likely to respond if no incentive is offered in the next round?

The measures listed below were used to evaluate the impact of one or more of the experiments discussed; where other experiments used different definitions, those definitions are provided as appropriate. A worked example of these calculations follows the list:

  • Response rate: the number of eligible completes divided by the total number of eligible sample members (which includes the total number of eligible respondents and the estimated total number of eligible nonrespondents)
  • Completion rate: the number of completed surveys divided by the sample size
  • Cooperation rate: the number of respondents divided by the sample size minus the number of final locating problems (sample members who could not be located during the survey despite various efforts by investigators)
  • Imputation rate: the number of imputed variables divided by the total number of variables for all respondents
  • Data editing rate: the number of variables requiring editing divided by the total number of variables for all respondents
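
To make these definitions concrete, here is a minimal Python sketch that computes each rate from a set of hypothetical counts. All of the figures below are invented for illustration and do not come from any SESTAT survey.

    # Hypothetical counts for one survey round (illustrative only).
    eligible_completes      = 630     # eligible sample members who completed the survey
    eligible_nonrespondents = 370     # estimated eligible sample members who did not respond
    sample_size             = 1100    # all sampled cases
    completed_surveys       = 640     # completed surveys overall
    respondents             = 640     # responding sample members
    final_locating_problems = 60      # sample members never located despite repeated efforts
    imputed_variables       = 2500    # imputed values across all respondents' records
    edited_variables        = 4000    # values requiring editing across all respondents' records
    total_variables         = 64000   # total values across all respondents' records

    response_rate    = eligible_completes / (eligible_completes + eligible_nonrespondents)
    completion_rate  = completed_surveys / sample_size
    cooperation_rate = respondents / (sample_size - final_locating_problems)
    imputation_rate  = imputed_variables / total_variables
    editing_rate     = edited_variables / total_variables

    print(f"Response rate:     {response_rate:.1%}")     # 63.0%
    print(f"Completion rate:   {completion_rate:.1%}")   # 58.2%
    print(f"Cooperation rate:  {cooperation_rate:.1%}")  # 61.5%
    print(f"Imputation rate:   {imputation_rate:.1%}")   # 3.9%
    print(f"Data editing rate: {editing_rate:.1%}")      # 6.2%

Note that the cooperation rate's denominator excludes sample members who were never located, so for the same counts it is at least as high as the completion rate.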

Working Paper | SRS 11-200 | November 2010