
Section A.
Technical Notes


These technical notes cover sampling and weighting, survey methodology, sampling and nonsampling errors, and comparisons of the data with previous cycles of the National Survey of Recent College Graduates (NSRCG) and with Integrated Postsecondary Education Data System (IPEDS) data. For a more detailed discussion of survey methodology, readers are referred to the NSRCG:97 Methodology Report.

Overview

The National Survey of Recent College Graduates (NSRCG) is sponsored by the National Science Foundation (NSF), Division of Science Resources Statistics (SRS). The NSRCG is one of three data collections covering personnel and graduates in science and engineering. The other two surveys are the National Survey of College Graduates (NSCG) and the Survey of Doctorate Recipients (SDR). Together, they constitute NSF's Scientists and Engineers Statistical Data System (SESTAT). These surveys serve as the basis for developing estimates and characteristics of the total population of scientists and engineers in the United States.

The first NSF-sponsored NSRCG (then known as New Entrants) was conducted in 1974. Subsequent surveys were conducted in 1976, 1978, 1979, 1980, 1982, 1984, 1986, 1988, 1990, 1993, 1995, and 1997. The initial survey collected data on only bachelor's degree recipients, but all subsequent surveys included both bachelor's and master's degree recipients.

For the NSRCG:97, a sample of 275 colleges and universities was asked to provide lists of eligible bachelor's and master's degree recipients. All sampled institutions provided the lists. From these lists, a sample of 14,057 graduates (9,978 bachelor's and 4,079 master's recipients) was selected. These graduates were interviewed between November 1997 and October 1998. Computer-assisted telephone interviewing (CATI) served as the primary means of data collection. Mail data collection was used only for those who could not be reached by telephone. The unweighted graduate response rate was 82 percent; the weighted response rate was 81 percent.

The NSRCG questionnaire underwent relatively few revisions for the 1997 survey. The limited revisions incorporated new topics such as alternative arrangements with employers. All revisions were done in coordination with similar revisions to the other SESTAT surveys. Topics covered in the survey include:

Sample Design

The NSRCG used a two-stage sample design. In the first stage, a stratified nationally representative sample of 275 institutions was selected with probability proportional to size. There were 102 self-representing institutions, also known as certainty units. For each institution, the measure of size was a composite related to both the number of graduates and the proportion of these who were black or Hispanic. The 173 noncertainty institutions were implicitly stratified by sorting the list by type of control (public, private), region, and the percentage of degrees awarded in science or engineering. Institutions were then selected by systematic sampling from the ordered list.
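
To make the first-stage selection mechanism concrete, the following sketch (in Python, with hypothetical field names; the actual NSRCG composite measure of size and stratification details are more elaborate) shows systematic probability-proportional-to-size selection from an implicitly stratified, ordered institution list:

    import random

    def pps_systematic_sample(institutions, n_sample):
        """Systematic PPS selection from an implicitly stratified list.
        Illustrative sketch only; 'size' stands in for the NSRCG composite
        measure of size, and very large (certainty) institutions would in
        practice be set aside and included automatically."""
        # Implicit stratification: sort by control, region, and percent of
        # degrees in S&E, as described in the text (field names hypothetical).
        ordered = sorted(institutions,
                         key=lambda i: (i["control"], i["region"], i["pct_se"]))
        interval = sum(i["size"] for i in ordered) / n_sample
        start = random.uniform(0, interval)
        hits = [start + k * interval for k in range(n_sample)]
        selections, cumulative, h = [], 0.0, 0
        for inst in ordered:
            cumulative += inst["size"]
            while h < len(hits) and hits[h] <= cumulative:
                selections.append(inst)
                h += 1
        return selections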

The second stage of the sampling process involved selecting graduates within the sampled institutions by cohort. Each sampled institution was asked to provide lists of graduates for sampling. Within graduation year (cohort), each eligible graduate was then classified into one of 42 strata based on the graduate's major field of study and degree level. Although race/ethnicity was not an explicit stratification variable, black, Hispanic, and American Indian/Alaskan Native graduates were assigned a measure of size equal to three, while all other graduates were assigned a measure of size equal to one. This had the same effect as oversampling black, Hispanic, and American Indian/Alaskan Native graduates by a factor of three. Table 1 lists the major fields and the corresponding sampling rates by cohort and degree. These rates are overall sampling rates for the major field; they reflect both the institution's probability of selection and the within-institution sampling rate. The within-institution sampling rate was obtained by dividing the overall rate by the institution's probability of selection. The sampling rates by stratum were applied within each eligible, responding institution and resulted in a sample of 14,057 graduates. This was slightly larger than the target sample size of 13,500 because persons with unknown majors were also included for complete population coverage.
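
For illustration, the within-institution rate and the measure-of-size oversampling described above can be expressed as follows (Python, with hypothetical values):

    def within_institution_rate(overall_rate, institution_selection_prob):
        """Within-institution sampling rate = overall stratum rate divided by
        the institution's first-stage selection probability (capped at 1)."""
        return min(overall_rate / institution_selection_prob, 1.0)

    def graduate_measure_of_size(race_ethnicity):
        """Measure of size of 3 for black, Hispanic, and American Indian/
        Alaskan Native graduates and 1 for all others, which has the effect
        of oversampling the first group by a factor of three."""
        oversampled = {"black", "hispanic", "american indian/alaskan native"}
        return 3 if race_ethnicity.lower() in oversampled else 1

    # Hypothetical example: an overall stratum rate of 0.06 in an institution
    # selected with probability 0.30 implies a within-institution rate of 0.20.
    print(round(within_institution_rate(0.06, 0.30), 2))   # 0.2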

Table 1. Major fields and corresponding sampling rates, by cohort and degree: April 1997

Graduate Eligibility

To be included in the sample, the graduates had to meet all of the following criteria:

Data Collection and Response

Prior to graduate data collection, it was first necessary to obtain the cooperation of the sampled institutions that provided lists of graduates. All eligible sampled institutions provided graduate lists for the NSRCG:97; one sampled institution was ineligible because no S&E degrees were awarded during the two cohort years for the 1997 survey.

Graduate data collection took place between November 1997 and October 1998, with computer-assisted telephone interviewing as the primary means of data collection. Flyers were sent to all graduates announcing the study and asking for the phone numbers at which they could be reached during the survey period. Extensive tracing of graduates was required to obtain the desired response rate. Tracing activities included computerized telephone number searches, national change of address searches (NCOA), school alumni office contacts, school major field department contacts, directory assistance, military locators, post office records, personal referrals from parents or others who knew the graduate, and the use of professional tracing organizations.

Table 2 gives the response rates by cohort, degree, major, type of address, gender, and race/ethnicity. The overall unweighted graduate response rate was 82 percent; the weighted response rate was 81 percent. As can be seen from Table 2, response rates varied somewhat by major field of study and by race/ethnicity. Rates were lowest for those with foreign addresses. It is possible that many unlocated persons with foreign addresses were actually ineligible for the survey due to living outside the United States during the survey reference week. However, a graduate was only classified as ineligible if his/her status was confirmed.

Table 2. Number of graduates, unweighted graduate response rates, and weighted graduate response rates, by graduate characteristic: April 1997

Weight Calculations

To produce national estimates, the data were weighted. The weighting procedures adjusted for unequal selection probabilities, for nonresponse at the graduate level, and for duplication of graduates on the sampling file (graduates in both cohorts). In addition, a ratio adjustment was made at the institution level, using the number of degrees awarded as reported in IPEDS for specified categories of major and degree. The final adjustment to the graduate weights adjusted for responding graduates who could have been sampled twice. For example, a person who obtained an eligible bachelor's degree in 1995 could have obtained an eligible master's degree in 1996 and could have been sampled for either degree. To make the estimates from the survey essentially unbiased, the weights of all responding graduates who could have been sampled twice were divided by 2. The weights of the graduates who were not eligible to be sampled twice were not adjusted.
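
A minimal sketch of the sequence of adjustments described above (Python; the cell structure, ratio adjustment, and argument names are simplified assumptions, not the exact NSRCG algorithm):

    def final_graduate_weight(selection_prob,
                              cell_sampled_weight,
                              cell_responding_weight,
                              dual_degree_eligible,
                              ipeds_ratio_adjustment=1.0):
        """Illustrative weight: inverse of the selection probability, adjusted
        for graduate nonresponse within a weighting cell, ratio-adjusted to
        IPEDS degree counts, and halved for graduates who could have been
        sampled for either of two eligible degrees."""
        base_weight = 1.0 / selection_prob
        nonresponse_factor = cell_sampled_weight / cell_responding_weight
        weight = base_weight * nonresponse_factor * ipeds_ratio_adjustment
        if dual_degree_eligible:        # could have been sampled twice
            weight /= 2.0
        return weight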

The weights developed for the NSRCG:97 comprise both full sample weights for use in computing survey estimates and replicate weights for variance estimation using a jackknife replication variance estimation procedure.

Data Editing

Most editing checks were included within the CATI system, including range checks, skip pattern rules, and logical consistency checks. Skip patterns were controlled by the CATI system so that inappropriate items were avoided and appropriate items were not missed. For logical consistency check violations, CATI screens appeared that explained the discrepancy and asked the respondent for corrections. Some additional logical consistency checks were added during data preparation. All of the edit checks discussed above were rerun after item nonresponse imputation.
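
The checks described above are of the following general kind (a hypothetical sketch; the items and rules are illustrative, not the actual CATI edit specifications):

    def run_edit_checks(record):
        """Return a list of edit flags for one hypothetical interview record."""
        flags = []
        # Range check: hours worked per week must be plausible.
        if not 1 <= record.get("hours_per_week", 1) <= 168:
            flags.append("hours_per_week out of range")
        # Skip-pattern check: employer items apply only to those who worked.
        if record.get("working") == "no" and record.get("employer_sector"):
            flags.append("employer item answered by a nonworker")
        # Logical consistency check: degree year cannot precede year of birth.
        if record.get("degree_year", 9999) < record.get("birth_year", 0):
            flags.append("degree_year earlier than birth_year")
        return flags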

Imputation of Missing Data

Missing data occurred if the respondent cooperated with the survey but did not answer one or more individual questions. The level of item nonresponse in this study was very low (typically 1 percent or less) due to the use of CATI for data collection and of data retrieval techniques for missing key items. However, imputation for item nonresponse was performed for each survey item to make the study results simpler to present and to allow consistent totals to be obtained when analyzing different questionnaire items. "Not applicable" responses were not imputed since these represented respondents who were not eligible to answer the given item.

Imputation was performed using a hot-deck method. Hot-deck methods estimate the missing value of an item by using values of the same item from other record(s) in the same file. Using the hot-deck procedure, each missing questionnaire item was imputed separately. First, respondent records were sorted by items thought to be related to the missing item. Next, a value was imputed for each item nonresponse recipient from a respondent donor within the same subgroup. The results of the imputation procedure were reviewed to ensure that the plan had been followed correctly. In addition, all edit checks were run on the imputed file to be sure that no data inconsistencies were created in the imputation process.
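
A simplified sequential hot-deck sketch (Python; the sort variables and item names are hypothetical, and the actual procedure imputed each item separately within subgroups of related respondents):

    def hot_deck_impute(records, item, sort_keys):
        """Fill missing values of `item` from the most recent donor after
        sorting records by variables related to the item."""
        ordered = sorted(records, key=lambda r: [r[k] for k in sort_keys])
        donor_value = None
        for rec in ordered:
            if rec[item] is None:          # item nonresponse: take donor's value
                if donor_value is not None:
                    rec[item] = donor_value
            else:                          # respondent becomes the current donor
                donor_value = rec[item]
        return ordered

    # Hypothetical use: impute missing salary within groups defined by
    # degree level and major field.
    # imputed = hot_deck_impute(records, "salary", ["degree_level", "major_field"])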

Accuracy of Estimates

The survey estimates provided in these tables are subject to two sources of error: sampling and nonsampling errors. Sampling errors occur because the estimates are based on a sample of individuals in the population rather than on the entire population and hence are subject to sampling variability. If the interviews had been conducted with a different sample, the responses would not have been identical; some figures might have been higher, while others might have been lower.

The standard error is the measure of the variability of the estimates due to sampling. It indicates the variability of a sample estimate that would be obtained from all possible samples of a given design and size. Standard errors can be used as a measure of the precision expected from a particular sample. Tables 3 and 4 contain standard errors for key statistics included in the detailed tables.

Table 3. Unweighted number, weighted estimate, and standard errors for 1995 and 1996 science and engineering bachelor's degree recipients, by graduate characteristics: April 1997



Table 4. Unweighted number, weighted estimate, and standard errors for 1995 and 1996 science and engineering master's degree recipients, by graduate characteristics: April 1997

If all possible samples were surveyed under similar conditions, intervals within plus or minus 1.96 standard errors of a particular statistic would include the true population parameter being estimated in about 95 percent of the samples. This is the 95 percent confidence interval. For example, suppose the total number of 1995 and 1996 bachelor's degree recipients majoring in engineering is 115,135 and the estimated standard error is 3,178. The 95 percent confidence interval for the statistic extends from:

115,135 - (1.96 × 3,178) to 115,135 + (1.96 × 3,178), or 108,906 to 121,364

This means that one can be confident that intervals constructed in this way contain the true population parameter for 95 percent of all possible samples.
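
The same interval expressed as a short Python calculation:

    def confidence_interval_95(estimate, standard_error):
        """95 percent confidence interval: estimate plus or minus 1.96 standard errors."""
        margin = 1.96 * standard_error
        return estimate - margin, estimate + margin

    print(confidence_interval_95(115135, 3178))   # approximately (108,906, 121,364)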

Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistics of interest for each replicate. The mean square error of the replicate estimates around their corresponding full sample estimate provides an estimate of the sampling variance of the statistic of interest. To construct the replicates, 86 stratified subsamples of the full sample were created. Eighty-six jackknife replicates were then formed by deleting one subsample at a time from the full sample. WesVar, a computer program developed at Westat, was used to calculate direct estimates of standard errors for a number of statistics from the survey.
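
A minimal sketch of the delete-one-group jackknife variance calculation (assuming the replicate estimates have already been computed; the NSRCG itself used replicate weights and WesVar):

    def jackknife_standard_error(full_sample_estimate, replicate_estimates):
        """Delete-one-group jackknife: the variance is estimated from the
        squared deviations of the replicate estimates around the full-sample
        estimate, scaled by (G - 1)/G for G replicate groups (G = 86 in the
        NSRCG:97)."""
        g = len(replicate_estimates)
        variance = (g - 1) / g * sum(
            (rep - full_sample_estimate) ** 2 for rep in replicate_estimates)
        return variance ** 0.5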

Generalized Variance Functions

Computing and printing standard errors for each estimate from the survey is a time-consuming and costly effort. For this survey, a different approach was taken for estimating the standard errors of the estimates included in this report. First, the standard errors for a large number of different estimates were directly computed using the jackknife replication procedures described above. Next, models were fitted to the estimates and standard errors and the parameters of these models were estimated from the direct estimates. These models and their estimated parameters were used to approximate the standard error of an estimate from the survey. This process is called the development of generalized variance functions. Models were fitted for the two types of estimates of primary interest: estimated totals and estimated percentages. It should be noted that the models used to estimate the generalized variance functions may not be completely appropriate for all estimates.

Sampling Errors for Totals

For estimated totals, the generalized variance function applied assumes that the relative variance of the estimate (the square of the standard error divided by the square of the estimate) is a linear function of the inverse of the estimate. Using this model, the standard error of an estimate can be computed as:

se(y) = sqrt(a × y^2 + b × y)     (1)

where se(y) is the standard error of the estimate y, and a and b are estimated parameters of the model. The parameters of the models were computed separately for 1995 bachelor's and master's recipients and for 1996 bachelor's and master's recipients, as well as for other important domains of interest. The estimates of the parameters are given in Table 5.

The following steps should be followed to approximate the standard error of an estimated total:

  1. obtain the estimated total from the survey,

  2. determine the most appropriate domain for the estimate from Table 5,

  3. refer to Table 5 to get the estimates of a and b for this domain, and

  4. compute the standard error using equation (1) above.

For example, suppose that the number of 1995 bachelor's degree recipients in engineering who were currently working in an engineering-related job was 39,400 (y = 39,400). The most appropriate domain from Table 5 is engineering majors with bachelor's degrees from 1995 and the parameters are a = 0.001377 and b = 71.464. Approximate the standard error using equation (1) as:

se(39,400) = sqrt(0.001377 × 39,400^2 + 71.464 × 39,400) = 2,226
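
These steps can also be written as a short function; the values below simply repeat the worked example, with a and b taken from Table 5:

    def gvf_standard_error_total(y, a, b):
        """Generalized variance function for an estimated total, equation (1):
        se(y) = sqrt(a * y^2 + b * y)."""
        return (a * y ** 2 + b * y) ** 0.5

    # Worked example: 1995 engineering bachelor's recipients, y = 39,400,
    # a = 0.001377, b = 71.464.
    print(round(gvf_standard_error_total(39400, 0.001377, 71.464)))   # 2226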


Table 5. Estimated parameters for computing generalized variances for estimates from the 1997 NSRCG

Sampling Errors for Percentages

The model used to approximate the standard errors for estimates of percentages was somewhat less complex. The generalized variance for estimated percentages assumed that the ratio of the variance of an estimate to the variance of the same estimate from a simple random sample of the same size was a constant. This ratio is called the design effect and is often labeled the DEFF. Since the variance for an estimated percentage, p, from a simple random sample is p(100-p) divided by the sample size, the standard error of an estimated percentage can be written as:

se(p) = sqrt(DEFF × p × (100 - p) / n)     (2)

where n is the sample size or denominator of the estimated percentage. DEFFs were computed separately for 1995 bachelor's and master's recipients and for 1996 bachelor's and master's recipients, as well as for other important domains of interest. The median or average values of the DEFFs from these computations are given in Table 5.

The following steps should be followed to approximate the standard error of an estimated percentage:

  1. obtain the estimated percentage and sample size from the survey,

  2. determine the most appropriate domain for the estimate from Table 5,

  3. refer to Table 5 to get the estimates of the DEFF for this domain, and

  4. compute the standard error using equation (2) above.

For example, suppose that the percentage of 1995 bachelor's degree recipients in engineering who were currently working in an S&E job was 67 percent (p = 67) and the number of engineering majors from the survey (sample size, n) was 1,100. The most appropriate domain from Table 5 is engineering majors with bachelor's degrees from 1995 and the DEFF for this domain is 1.7. Approximate the standard error using equation (2) as:

se(67%) = sqrt(1.7 × 67 × (100 - 67) / 1,100) ≈ 1.8
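
And the corresponding calculation for an estimated percentage, again repeating the worked example with the DEFF from Table 5:

    def gvf_standard_error_percent(p, n, deff):
        """Generalized variance function for an estimated percentage, equation (2):
        se(p) = sqrt(DEFF * p * (100 - p) / n)."""
        return (deff * p * (100 - p) / n) ** 0.5

    # Worked example: p = 67 percent, n = 1,100 engineering majors, DEFF = 1.7.
    print(round(gvf_standard_error_percent(67, 1100, 1.7), 1))   # 1.8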

Nonsampling Errors

In addition to sampling errors, the survey estimates are subject to nonsampling errors, which can arise from nonobservation (nonresponse or noncoverage), reporting errors, and errors made in collecting and processing the data. These errors can sometimes bias the data. The NSRCG:97 included procedures specifically designed to minimize nonsampling error. In addition, special studies conducted during previous cycles of the NSRCG provide measures of nonsampling error that are useful in understanding the data from the current survey as well.

Procedures to minimize nonsampling errors were followed throughout the survey. Extensive questionnaire design work was done by Mathematica Policy Research (MPR), NSF, and Westat. This work included focus groups, expert panel reviews, and mail and CATI pretests. This design work was done in conjunction with the other two SESTAT surveys.

Comprehensive training and monitoring of interviewers and data processing staff helped to ensure the consistency and accuracy of the data file. Data collection was done almost entirely by telephone to help reduce the amount of item nonresponse and item inconsistency. Mail questionnaires were used for cases difficult to complete by telephone. Nonresponse was handled in ways designed to minimize the impact on data quality (through weighting adjustments and imputation). In data preparation, a special effort was made in the area of occupational coding. Respondent-chosen codes were verified by data preparation staff using a variety of information collected on the survey and applying coding rules developed by NSF for the SESTAT system.

While general sampling theory can be used to estimate the sampling variability of a statistic, the measurement of nonsampling error is not easy and usually requires that an experiment be conducted as part of the data collection, or that data external to the study be used. In the NSRCG:95, two quality analysis studies were conducted: (1) an analysis of occupational coding; and (2) a CATI reinterview. As noted above, these special studies can also inform analysts about the 1997 survey data.

The occupational coding report included an analysis of the CATI autocoding of occupation and the best coding operation. During CATI interviewing, each respondent's verbatim occupation description was autocoded by computer into a standard SESTAT code whenever possible. Autocoding included both coding directly to a final category and coding to an intermediate code-selection screen. If the description could not be autocoded, the respondent was asked to select the appropriate occupation category during the interview. For the primary occupation, 22 percent of the responses were autocoded to a final category and 19 percent were autocoded to an intermediate screen. The results of the occupation autocoding were examined, and the process was found to be successful and efficient.

For the best coding operation, an occupational worksheet for each respondent was generated and reviewed by an experienced occupational coder. This review was based on the work-related information provided by the graduate. If the respondent's self-selected occupation code was inappropriate, a new or "best" code was assigned. A total of 17,894 responses were received to the three occupation questions in the 1995 survey cycle. Of these, 25 percent received updated codes during the best coding process, with 16 percent being recoded from the "other" category and 9 percent recoded from the "non-other" categories. This analysis indicated that the best coding activity was necessary to ensure that the most appropriate occupation codes were included on the final data file. As a result of this NSRCG:95 quality study, the best coding procedure was implemented in the 1997 survey as well.

The second quality analysis study conducted in the NSRCG:95 involved a reinterview of a sample of 800 respondents. For this study, sampled respondents were interviewed a second time, and responses to the two interviews were compared. This analysis found that the questionnaire items in which respondents were asked to provide reasons for certain events or behaviors had relatively large index of inconsistency values. Examples include reasons for not working during the reference week and reasons for working part-time. High response variability is typical for items that ask about reasons and beliefs rather than behaviors, and the results were not unusual for these types of items. Some of the other differences between the two interviews were attributed to the time lag between the original interview and reinterview. Overall, the results of the reinterview study did not point to any significant problems with the questionnaire.

For the NSRCG:93, two data quality studies were completed: (1) an analysis of interviewer variance, and (2) a behavioral coding analysis of 100 recorded interviews. The interviewer variance study was designed to measure the impact of interviewer effects on the precision of the estimates. The results showed that interviewer effects for most items were minimal and thus had a very limited effect on the standard error of the estimates. Interviewer variance was highest for open-ended questions.

The behavioral coding study was done to observe the extent to which interviewers were following the structured interview and the extent to which it became necessary for them to give unstructured additional explanation or comments to respondents. As part of the study, 100 interviews were taped and then coded on a variety of behavioral dimensions. This analysis revealed that, on the whole, the interview proceeded in a very structured manner, with 85 percent of all question and answer "dyads" being "asked and answered only." Additional unstructured interaction/discussion took place most frequently for those questions in which there was some ambiguity in the topic. In most cases this interaction was judged to have facilitated obtaining the correct response.

For both survey cycles, results from the quality studies were used to identify those questionnaire items that might need additional revision for the next study cycle. Debriefing sessions concerning the survey were held with interviewers, and this information was also used in revising the survey for the next cycle.

Comparisons of Data With Previous Years' Results

A word of caution needs to be given concerning comparisons with previous NSRCG results. During the 1993 cycle, the SESTAT system underwent considerable revision in several areas, including survey eligibility, data collection procedures, questionnaire content and wording, and data coding and editing procedures. The changes made for the 1995 cycle were less significant, but may affect data trend analysis. For a detailed discussion of these changes, please see the 1993 and 1995 NSRCG methodology reports.

For the 1997 NSRCG, there were no significant procedural changes that would affect the comparison of results between the 1995 and 1997 survey cycles.

Comparisons With IPEDS Data

The National Center for Education Statistics (NCES) conducts a survey of the nation's postsecondary institutions, called the Integrated Postsecondary Education Data System (IPEDS). The IPEDS Completions Survey reports on the number of degrees awarded by all major fields of study, along with estimates by gender and race/ethnicity.

Although both the NSRCG and IPEDS are surveys of postsecondary education and both report on completions from those institutions, there are important differences in the target populations for the two surveys that directly affect the estimates of the number of graduates. The reason for the different target populations is that the goals of the surveys are not the same. The IPEDS estimates of degrees awarded are intended to measure the output of the educational system. The NSRCG estimates are intended to measure the supply and utilization of a portion of graduates in the years following their completion of a degree. These goals result in definitions of the target population that are not completely consistent for the two surveys. Other differences between the estimates can be explained to a very large extent by a few important aspects of the design or reporting procedures in the two surveys. The main differences between the two studies that affect comparisons of estimates overall and by race/ethnicity are listed below.

Despite these factors, the NSRCG and IPEDS estimates are consistent when appropriate adjustments for these differences are made. For example, the proportional distributions of graduates by field of study are nearly identical, and the numerical estimates are similar. Further information on the comparison of NSRCG and IPEDS estimates is available in the report A Comparison of Estimates in the NSRCG and IPEDS, on the SRS website at http://www.nsf.gov/statistics/.

Other Explanatory Information

The following definitions are provided to facilitate the reader's use of the data in this report.

Coverage of tables: The tables in this report present information for two groups of recent graduates. The first of these groups consists of persons who earned bachelor's degrees in S&E fields from U.S. institutions during academic years 1995 and 1996. The second group includes those who earned S&E master's degrees during the same two years.

Major field of study: Derived from the survey major field category most closely related to the respondent's degree field. Exhibit 1 gives a listing of the detailed major field codes used in the survey. Exhibit 2 gives a listing of the summary major field codes developed by NSF and used in the tables. The appendix lists the eligible and ineligible major fields within each summary category.

Occupation: Derived from the survey job list category most closely related to the respondent's primary job. Exhibit 3 gives a listing of the detailed job codes used in the survey, and Exhibit 4 gives the summary occupation codes developed by NSF and used in the tables.

Labor force: The labor force includes individuals working full or part time as well as those not working but seeking work or on layoff. It is a sum of the employed and the unemployed.

Unemployed: The unemployed are those who were not working on April 15 and were seeking work or on layoff from a job.

Type of employer: This is the sector of employment in which the respondent was working on his or her primary job held on April 15, 1997. In this categorization, those working in 4-year colleges and universities or university-affiliated medical schools or research organizations were classified as employed in the "4-year college and university" sector. Those working in elementary, middle, secondary, or 2-year colleges or other educational institutions were categorized in the group "other educational." The other sectors are private, for-profit; self-employed; nonprofit organizations; federal government; and state or local government. Those reporting that they were self-employed but in an incorporated business were classified in the private, for-profit sector.

Primary work activity: This refers to the activity that occupied the most time on the respondent's job. In reporting the data, those who reported applied research, basic research, development, or design work were grouped together in "research and development (R&D)." Those who reported accounting, finance or contracts, employee relations, quality or productivity management, sales and marketing, or managing and supervising were grouped into "management, sales, administration." Those who reported production, operations, maintenance, professional services or other activities were given the code "other."
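
As an illustration, the grouping of reported activities into the three summary categories could be represented as a simple mapping (the category strings here are paraphrases of the survey response options, not the exact SESTAT codes):

    # Hypothetical mapping from reported primary work activity to the summary
    # categories used in the tables.
    WORK_ACTIVITY_GROUPS = {
        "applied research": "research and development (R&D)",
        "basic research": "research and development (R&D)",
        "development": "research and development (R&D)",
        "design": "research and development (R&D)",
        "accounting, finance or contracts": "management, sales, administration",
        "employee relations": "management, sales, administration",
        "quality or productivity management": "management, sales, administration",
        "sales and marketing": "management, sales, administration",
        "managing and supervising": "management, sales, administration",
    }

    def summary_work_activity(reported_activity):
        """Map a reported primary work activity to its summary category;
        production, operations, maintenance, professional services, and any
        other activities fall into "other"."""
        return WORK_ACTIVITY_GROUPS.get(reported_activity, "other")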

Full-time salary: This is the annual income for the full-time employed who were not self-employed (either incorporated or not incorporated), whose principal job was not less than 35 hours per week, and who were not full-time students on the reference date (April 15, 1997). To annualize salary, reported hourly salaries were multiplied by the reported number of hours paid per week, then multiplied by 52; reported weekly salaries were multiplied by 52; reported monthly salaries were multiplied by 12. Yearly and academic yearly salaries were left as reported.
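
The annualization rules can be summarized in a short sketch (Python; argument names are illustrative):

    def annualize_salary(amount, basis, hours_paid_per_week=None):
        """Annualize a reported salary following the rules described above."""
        if basis == "hourly":
            return amount * hours_paid_per_week * 52
        if basis == "weekly":
            return amount * 52
        if basis == "monthly":
            return amount * 12
        # Yearly and academic-year salaries are left as reported.
        return amount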

