Characteristics of Recent Science and Engineering Graduates: 2001

Section A.
Technical Notes




These technical notes on the 2001 National Survey of Recent College Graduates (NSRCG) include information on sampling and weighting, survey methodology, sampling and nonsampling errors, and discussions of data comparisons to previous cycles of the NSRCG and the Integrated Postsecondary Education Data System (IPEDS) data. For a more detailed discussion of survey methodology, readers are referred to the 2001 NSRCG Methodology Report.

Overview

The National Survey of Recent College Graduates is sponsored by the National Science Foundation (NSF), Division of Science Resources Statistics (SRS). The NSRCG is one of three data collections covering personnel and graduates in science and engineering (S&E). The other two surveys are the National Survey of College Graduates (NSCG) and the Survey of Doctorate Recipients (SDR). Together, they constitute NSF's Scientists and Engineers Statistical Data System (SESTAT). These surveys serve as the basis for developing estimates and characteristics of the total population of scientists and engineers in the United States.

The first NSF-sponsored NSRCG (then known as the New Entrants Survey) was conducted in 1974. Subsequent surveys were conducted about every two years. The initial survey collected data on only bachelor's degree recipients, but all subsequent surveys included both bachelor's and master's degree recipients.

For the 2001 NSRCG, a sample of 280 colleges and universities was asked to provide lists of eligible bachelor's and master's degree recipients. From these lists, a sample of 13,516 graduates (9,472 bachelor's and 4,044 master's recipients) was selected. These graduates were interviewed between June 2001 and May 2002. Computer-assisted telephone interviewing (CATI) served as the primary means of data collection; mail questionnaires were used only for graduates who could not be reached by telephone. The weighted response rates were 99.2 percent for institutions and 79.6 percent for graduates.

The NSRCG questionnaire underwent relatively few revisions for the 2001 survey. All revisions were done in coordination with revisions to the other SESTAT surveys.

Sample Design

The NSRCG used a two-stage sample design. In the first stage, a stratified, nationally representative sample of 280 institutions was selected in two steps. Each institution was assigned a measure of size, a composite reflecting both the number of eligible graduates and the proportion of those graduates who were black or Hispanic. In the first step, 107 self-representing (certainty) institutions were identified from the list of all institutions and included in the sample. In the second step, the remaining noncertainty institutions were implicitly stratified by sorting the list by type of control (public, private), region, and the percentage of degrees awarded in science or engineering fields of study; 173 noncertainty institutions were then selected by sampling systematically from the ordered list with probability proportional to size.
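The probability-proportional-to-size selection of the noncertainty institutions can be illustrated with a short sketch. The code below is not the actual NSRCG selection program; the institution names, measures of size, and sample size are hypothetical, and it simply shows systematic PPS sampling from an ordered list.

```python
# A minimal sketch of systematic sampling with probability proportional to
# size (PPS) from an ordered list, as described above for the noncertainty
# institutions. All names and numbers are illustrative.
import random

def pps_systematic_sample(units, sizes, n):
    """Select n units with probability proportional to size from an ordered list."""
    total = sum(sizes)
    interval = total / n                          # sampling interval
    start = random.uniform(0, interval)           # random start within the first interval
    targets = [start + k * interval for k in range(n)]
    sample, cumulative, t = [], 0.0, 0
    for unit, size in zip(units, sizes):
        cumulative += size
        while t < n and targets[t] <= cumulative:
            sample.append(unit)                   # a unit larger than the interval can be hit twice
            t += 1
    return sample

# Hypothetical frame of noncertainty institutions, already sorted by control,
# region, and percentage of S&E degrees (the implicit stratification).
institutions = [f"institution_{i:03d}" for i in range(1, 21)]
sizes = [random.randint(50, 500) for _ in institutions]      # composite measures of size
print(pps_systematic_sample(institutions, sizes, n=5))
```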

The second stage of the sampling process selected S&E graduates within the sampled institutions. Each sampled institution was asked to provide lists of graduates for sampling. Within graduation year (cohort), each eligible graduate was then classified into one of 40 strata based on the graduate's major field of study and degree level. While race was not an explicit stratification variable, black, Hispanic, and American Indian/Alaskan Native graduates were assigned a measure of size equal to three, while all other graduates were assigned a measure of size equal to one. This method had the same effect as oversampling black, Hispanic, and American Indian/Alaskan Native graduates by a factor of three. Table 1 lists the major fields and the corresponding sampling rates by cohort and degree. These rates are overall sampling rates for the major field, and include the institution's probability of selection and the within-institution sampling rate. The within-institution rate was obtained by dividing the overall rate by the institution's probability of selection. The sampling rates by stratum were applied within each eligible, responding institution, and resulted in sampling 13,516 graduates. The sample size for graduates with known majors was 13,271; the sample also included 245 graduates whose major field of study was unknown at the time of sampling.
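The relationship between the overall stratum sampling rate, the institution's first-stage selection probability, and the within-institution rate, together with the measure-of-size oversampling, can be sketched as follows. The rates and category labels below are hypothetical, not the actual NSRCG values.

```python
# Hypothetical illustration of the within-institution sampling rate and the
# measure-of-size oversampling described above; not the actual NSRCG rates.
overall_rate = 0.02        # overall sampling rate for one major-field stratum (illustrative)
p_institution = 0.25       # institution's first-stage selection probability (illustrative)
within_rate = overall_rate / p_institution
print(f"within-institution rate = {within_rate:.3f}")        # 0.080

def measure_of_size(race_ethnicity):
    """Oversampled groups carry a measure of size of 3; all others carry 1."""
    oversampled = {"black", "hispanic", "american indian/alaskan native"}
    return 3 if race_ethnicity in oversampled else 1

print(measure_of_size("hispanic"))   # 3
print(measure_of_size("asian"))      # 1
```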

Graduate Eligibility

To be included in the sample, the graduates had to meet all of the following criteria:

Data Collection and Response

Before graduate data collection could begin, it was necessary to obtain the cooperation of the sampled institutions, which provided the lists of graduates for sampling. Of the 280 sampled institutions, 276 provided lists of graduates for the 2001 NSRCG, two were not eligible, and two did not provide graduate lists. The response rates for the institutional list collection were 99.3 percent unweighted and 99.2 percent weighted.

Graduate data collection took place between June 2001 and May 2002, with computer-assisted telephone interviewing as the primary means of data collection. Flyers were sent to all graduates announcing the study and asking for the phone numbers at which they could be reached during the survey period. Extensive tracing of graduates was required to obtain the desired response rate. Tracing activities included computerized telephone number searches, national change of address searches (NCOA), school alumni office contacts, school major field department contacts, internet searches, directory assistance, military locators, post office records, personal referrals from parents or others who knew the graduate, and the use of professional tracing organizations.

Table 2 gives the response rates by cohort, degree, major, type of address, sex, and race/ethnicity. The overall unweighted graduate response rate was 80.1 percent; the weighted response rate was 79.6 percent. As can be seen from Table 2, response rates varied somewhat by graduate characteristics. Rates were lowest for graduates identified on the school sampling lists as non-resident aliens. It is possible that many unlocated persons listed as non-resident aliens were actually ineligible for the survey due to living outside the United States during the survey reference week. However, a graduate was only classified as ineligible if his/her ineligibility status could be confirmed.

Weight Calculations

To produce national estimates, the data were weighted. The weighting procedures adjusted for unequal selection probabilities, for nonresponse at the institution and graduate level, and for duplication of graduates on the sampling file (graduates in both cohorts).[1] In addition, a ratio adjustment was made at the institution level, using the number of degrees awarded as reported in IPEDS for specified categories of major and degree level. Because this adjustment was designed to reduce the variability associated with sampling institutions, it was not affected by the differences in target populations between NSRCG and IPEDS at the person level. These differences between NSRCG and IPEDS are discussed in a later section of these notes. The final adjustment to the graduate weights accounted for responding graduates who could have been sampled twice. For example, a person who obtained an eligible bachelor's degree in 1999 could have obtained an eligible master's degree in 2000 and could have been sampled for either degree. To make the estimates from the survey essentially unbiased, the weights of all responding graduates who could have been sampled twice were divided by 2. The weights of the graduates who were not eligible to be sampled twice were not adjusted.
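The main weighting steps can be summarized in a brief sketch. The function names and numeric values below are hypothetical; they only illustrate the order of the adjustments described above.

```python
# Hypothetical sketch of the weighting steps: base weight, nonresponse
# adjustment, and the halving of weights for graduates who could have been
# sampled from either cohort. Values are illustrative, not NSRCG figures.
def base_weight(p_institution, p_within):
    """Inverse of the overall selection probability."""
    return 1.0 / (p_institution * p_within)

def nonresponse_adjust(weight, cell_response_rate):
    """Inflate respondent weights to account for nonrespondents in the same cell."""
    return weight / cell_response_rate

def dual_frame_adjust(weight, could_be_sampled_twice):
    """Divide by 2 for graduates eligible for sampling in both cohorts."""
    return weight / 2.0 if could_be_sampled_twice else weight

w = base_weight(p_institution=0.25, p_within=0.08)            # 50.0
w = nonresponse_adjust(w, cell_response_rate=0.80)            # 62.5
w = dual_frame_adjust(w, could_be_sampled_twice=True)         # 31.25
print(w)
```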

The weights developed for the 2001 NSRCG comprise both full NSRCG sample weights for use in computing survey estimates, and replicate weights for variance estimation using a jackknife replication variance estimation procedure.

Data Editing

The CATI system incorporated editing checks, including range checks, skip-pattern rules, and logical consistency checks. Skip patterns were controlled by the CATI system so that inappropriate items were avoided and appropriate items were not missed. When a logical consistency check was violated, a CATI screen appeared that explained the discrepancy and asked the respondent for a correction. All of these edit checks were rerun after data collection and again when item nonresponse imputation was completed.
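For illustration, checks of this kind might look like the following sketch; the variables and rules are hypothetical, not the actual NSRCG edit specifications.

```python
# Hypothetical examples of a range check and a logical consistency check of the
# kind run in CATI and rerun after data collection and imputation.
def range_check_hours(hours_per_week):
    """Weekly hours must fall in a plausible range."""
    return 0 < hours_per_week <= 168

def consistency_check_salary(is_employed, salary):
    """A salary should not be reported by a respondent recorded as not working."""
    return not (not is_employed and salary is not None and salary > 0)

print(range_check_hours(45))                     # True
print(consistency_check_salary(False, 52000))    # False -> discrepancy screen would appear
```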

Imputation of Missing Data

Missing data occurred if the respondent cooperated with the survey but did not answer one or more individual questions. The level of item nonresponse in this study was generally low (less than 4 percent) due to the use of CATI for data collection and of data retrieval techniques for missing key items. However, imputation for item nonresponse was performed for each survey item to make the study results simpler to present and to allow consistent totals to be obtained when analyzing different questionnaire items. "Not applicable" responses were not imputed since these represented respondents who were not eligible to answer the given item.

Imputation was performed using a hot-deck method. Hot-deck methods estimate the missing value of an item by using values of the same item from other records in the same file. Using the hot-deck procedure, each missing questionnaire item was imputed separately. First, respondent records were sorted by items thought to be related to the missing item. Next, a value was imputed for each record with item nonresponse from a donor respondent within the same subgroup. The results of the imputation procedure were reviewed to ensure that the plan had been followed correctly. In addition, all edit checks were run on the imputed file to be sure that no data inconsistencies were created in the imputation process.
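A minimal sequential hot-deck sketch is shown below, assuming hypothetical field names: records are sorted by variables related to the missing item, and each missing value is filled from the most recent donor in the sorted order.

```python
# Minimal sequential hot-deck sketch; field names and values are hypothetical.
def hot_deck_impute(records, item, sort_keys):
    """Fill missing values of `item` from the preceding donor after sorting."""
    ordered = sorted(records, key=lambda r: tuple(r[k] for k in sort_keys))
    donor_value = None
    for rec in ordered:
        if rec[item] is None:
            if donor_value is not None:
                rec[item] = donor_value
                rec["imputed"] = True
        else:
            donor_value = rec[item]
    return ordered

records = [
    {"degree": "bachelor's", "major": "engineering", "salary": 48000},
    {"degree": "bachelor's", "major": "engineering", "salary": None},
    {"degree": "master's", "major": "biology", "salary": 41000},
]
print(hot_deck_impute(records, "salary", sort_keys=["degree", "major"]))
```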

Accuracy of Estimates

The survey estimates provided in these tables are subject to two sources of error: sampling and nonsampling errors. Sampling errors occur because the estimates are based on a sample of individuals in the population rather than on the entire population and hence are subject to sampling variability. If the interviews had been conducted with a different sample, the responses would not have been identical; some figures might have been higher, while others might have been lower.

The standard error is a measure of the precision expected from a particular sample. Tables 3 and 4 contain standard errors for key statistics included in the detailed tables.

If all possible samples were surveyed under similar conditions, intervals within plus or minus 1.96 standard errors of a particular statistic would include the statistic computed from all members of the population in about 95 percent of the samples. This is the 95 percent confidence interval. For example, suppose the estimate of the total number of 1999 and 2000 bachelor's degree recipients majoring in engineering is 109,247 and the estimated standard error is 2,536. In this case, the 95 percent confidence interval for the statistic would extend from:

109,247 - (2,536 x 1.96) to 109,247 + (2,536 x 1.96) = 104,276 to 114,218

This means that one can be confident that intervals constructed in this way contain the true population parameter for 95 percent of all possible samples.

Estimates of standard errors were computed using a technique known as jackknife replication. As with any replication method, jackknife replication involves constructing a number of subsamples (replicates) from the full sample and computing the statistics of interest for each replicate. The mean square error of the replicate estimates around their corresponding full sample estimate provides an estimate of the sampling variance of the statistic of interest. To construct the replicates, 86 stratified subsamples of the full sample were created. Eighty-six jackknife replicates were then formed by deleting one subsample at a time from the full sample. WesVar, a computer program developed at Westat, was used to calculate the jackknife estimates of standard errors for the statistics presented in this report.
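A delete-one-group jackknife of this kind can be sketched as follows. This is not the WesVar implementation; the replicate estimates below are hypothetical, and the (G - 1)/G factor shown is the usual one for a delete-one-group (JK1) design.

```python
# Minimal sketch of delete-one-group (JK1) jackknife variance estimation.
# The full-sample and replicate estimates below are hypothetical.
import math

def jackknife_se(full_estimate, replicate_estimates):
    """Standard error from squared deviations of replicate estimates."""
    g = len(replicate_estimates)
    variance = (g - 1) / g * sum((r - full_estimate) ** 2 for r in replicate_estimates)
    return math.sqrt(variance)

full = 109_247
replicates = [full + d for d in (-3000, 1500, 2200, -800, 1100)]   # 5 replicates for illustration (NSRCG used 86)
print(round(jackknife_se(full, replicates)))
```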

Generalized Variance Functions

For this survey, generalized variance functions (GVFs) were developed for estimating the standard errors of the estimates. First, the standard errors for a large number of different estimates were directly computed using the jackknife replication procedures described above. Next, models were fitted to the estimates and standard errors and the parameters of these models were estimated from the direct estimates. These models and their estimated parameters were used to approximate the standard error of an estimate from the survey. This process is called the development of generalized variance functions.

Models were fitted for the two types of estimates of primary interest: estimated totals and estimated percentages. It should be noted that the models used to estimate the generalized variance functions may not be completely appropriate for all estimates (especially median salary estimates).

Sampling Errors for Totals

For estimated totals, the generalized variance function applied assumes that the relative variance of the estimate (the square of the standard error divided by the square of the estimate) is a linear function of the inverse of the estimate. Using this model, the standard error of an estimate can be computed as:

Equation (1): se(y) = sqrt(a × y^2 + b × y)

where se(y) is the standard error of the estimate y, and a and b are estimated parameters of the model. The parameters of the models were computed separately for bachelor's and master's degree recipients for important domains of interest. The estimates of the parameters are given in Table 5.

The following steps should be followed to approximate the standard error of an estimated total:

  1. obtain the estimated total from the survey,

  2. determine the most appropriate domain for the estimate from Table 5,

  3. refer to Table 5 to get the estimates of a and b for this domain, and

  4. compute the approximate standard error using equation (1) above.

For example, suppose that the estimate of bachelor's degree recipients in engineering who were employed in an occupation other than S&E is 16,154 (y = 16,154). The most appropriate domain from Table 5 is engineering majors with bachelor's degrees and the parameters are a = -0.000263 and b = 83.473. Approximate the standard error using equation (1) as:

se(16,154) = sqrt((-0.000263 × 16,154^2) + (83.473 × 16,154)) = 1,131
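The same computation can be written as a short function, using the parameter values quoted in the worked example above; the function name is illustrative.

```python
# Sketch of the generalized variance function for estimated totals, equation (1).
import math

def gvf_se_total(y, a, b):
    """Approximate standard error of an estimated total y."""
    return math.sqrt(a * y**2 + b * y)

# Worked example from the text: engineering majors with bachelor's degrees.
print(round(gvf_se_total(16_154, a=-0.000263, b=83.473)))   # about 1,131
```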

Sampling Errors for Percentages

The model used to approximate the standard errors for estimates of percentages was somewhat less complex. The generalized variance for estimated percentages assumed that the ratio of the variance of an estimate to the variance of the same estimate from a simple random sample of the same size was a constant. This ratio is called the design effect and is often labeled the DEFF. Because the variance for an estimated percentage, p, from a simple random sample is p(100-p) divided by the sample size, the standard error of an estimated percentage can be written as:

Equation (2): se(p) = sqrt(DEFF × p × (100 - p) / n)

where n is the sample size or denominator of the estimated percentage. DEFFs were computed separately for bachelor's and master's degree recipients, as well as for other important domains of interest. The average values of the DEFFs from these computations are given in Table 5.

The following steps should be followed to approximate the standard error of an estimated percentage:

  1. obtain the estimated percentage and sample size from the survey,

  2. determine the most appropriate domain for the estimate from Table 5,

  3. refer to Table 5 to get the estimates of the DEFF for this domain, and

  4. compute the approximate standard error using equation (2) above.

For example, suppose that the estimated percentage of bachelor's degree recipients in engineering who were currently working in an S&E job was 16 percent (p = 16) and the number of engineering majors from the survey (sample size, n) was 1,956. The most appropriate domain from Table 5 is engineering majors with bachelor's degrees and the DEFF for this domain is 1.5. Approximate the standard error using equation (2) as:

se(p = 16) = sqrt(1.5 × 16 × (100 - 16) / 1,956) = 1.02 percentage points
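As with totals, the percentage computation can be expressed as a short function using the DEFF and sample size from the worked example above; the function name is illustrative.

```python
# Sketch of the generalized variance function for estimated percentages, equation (2).
import math

def gvf_se_percent(p, n, deff):
    """Approximate standard error (in percentage points) of an estimated percentage p."""
    return math.sqrt(deff * p * (100 - p) / n)

# Worked example from the text: engineering majors with bachelor's degrees.
print(round(gvf_se_percent(16, 1_956, deff=1.5), 2))   # about 1.02
```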

Nonsampling Errors

In addition to sampling errors, the survey estimates are subject to nonsampling errors that can arise because of nonobservation (nonresponse or noncoverage), reporting errors, and errors made in the collection and processing of the data. These errors can sometimes bias the data. The 2001 NSRCG included procedures specifically designed to minimize nonsampling error. In addition, some special studies conducted during the previous cycles of the NSRCG provided some measures of nonsampling errors that are useful in understanding the data from the current survey as well.

Procedures to minimize nonsampling errors were followed throughout the survey. Extensive questionnaire design work was done by Mathematica Policy Research (MPR), NSF, and Westat. This work included focus groups, expert panel reviews, and mail and CATI pretests. This design work was done in conjunction with the other two SESTAT surveys.

Comprehensive training and monitoring of interviewers and data processing staff helped to ensure the consistency and accuracy of the data file. Data collection was done almost entirely by telephone to help reduce the amount of item nonresponse and item inconsistency. Mail questionnaires were used for cases difficult to complete by telephone. Nonresponse was handled in ways designed to minimize the impact on data quality (through weighting adjustments and imputation). In data preparation, a special effort was made in the area of occupational coding. Respondent-chosen codes were verified by data preparation staff using a variety of information collected on the survey and applying coding rules developed by NSF for the SESTAT system.

While general sampling theory can be used to estimate the sampling variability of a statistic, the measurement of nonsampling error is not easy and usually requires that an experiment be conducted as part of the data collection, or that data external to the study be used. In the 1995 NSRCG, two quality analysis studies were conducted: (1) an analysis of occupational coding; and (2) a CATI reinterview. As noted above, these special studies can also inform analysts about the 2001 survey data.

The occupational coding report included an analysis of the 1995 CATI autocoding of occupation and the best coding operation. During CATI interviewing, each respondent's verbatim occupation description was autocoded by computer into a standard SESTAT code whenever possible. Autocoding included both coding directly to a final category and coding to an intermediate code-selection screen. If the description could not be autocoded, the respondent was asked to select the appropriate occupation category during the interview. For the primary occupation, 22 percent of the responses were autocoded to a final category and 19 percent were autocoded to an intermediate screen. The results of the occupation autocoding were examined, and the process was found to be successful and efficient.

For the best coding operation, an occupational worksheet for each respondent was generated and reviewed by an experienced occupational coder. This review was based on the work-related information provided by the graduate. If the respondent's self-selected occupation code was inappropriate, a new or "best" code was assigned. A total of 17,894 responses were received to the three occupation questions in the 1995 survey cycle. Of these, 25 percent received updated codes during the best coding process, with 16 percent being recoded from the "other" category and 9 percent recoded from the "non-other" categories. This analysis indicated that the best coding activity was necessary to ensure that the most appropriate occupation codes were included on the final data file. As a result of this 1995 NSRCG quality study, the best coding procedure was implemented in the 1997, 1999, and 2001 surveys as well.

The second quality analysis study conducted in the 1995 NSRCG involved a reinterview of a sample of 800 respondents. For this study, sampled respondents were interviewed a second time, and responses to the two interviews were compared. This analysis found that the questionnaire items in which respondents were asked to provide reasons for certain events or behaviors had relatively large index of inconsistency values. Examples include reasons for not working during the reference week and reasons for working part-time. High response variability is typical for items that ask about reasons and beliefs rather than behaviors, and the results were not unusual for these types of items. Some of the other differences between the two interviews were attributed to the time lag between the original interview and reinterview.

For the 1993 NSRCG, two data quality studies were completed: (1) an analysis of interviewer variance, and (2) a behavioral coding analysis of 100 recorded interviews. The interviewer variance study was designed to measure the impact of interviewer effects on the precision of the estimates. The results showed that interviewer effects for most items were minimal and thus had a very limited effect on the standard error of the estimates. Interviewer variance was highest for open-ended questions.

The behavioral coding study was done to observe the extent to which interviewers were following the structured interview and the extent to which it became necessary for them to give unstructured additional explanation or comments to respondents. As part of the study, 100 interviews were taped and then coded on a variety of behavioral dimensions. This analysis revealed that, on the whole, the interview proceeded in a very structured manner, with 85 percent of all question and answer "dyads" being "asked and answered only." Additional unstructured interaction/discussion took place most frequently for those questions in which there was some ambiguity in the topic. In most cases this interaction was judged to have facilitated obtaining the correct response.

For both survey cycles, results from the quality studies were used to identify those questionnaire items that might need additional revision for the next study cycle. Debriefing sessions concerning the survey were held with interviewers, and this information was also used in revising the survey for the next cycle.

Comparisons of Data With Previous Years' Results

A word of caution needs to be given concerning comparisons with previous NSRCG results. During the 1993 cycle, the SESTAT system underwent considerable revision in several areas, including survey eligibility, data collection procedures, questionnaire content and wording, and data coding and editing procedures. The changes made for the 1995 through 2001 cycles were less significant, but might affect some data trend analysis. While the 1993 through 2001 survey data are fairly comparable, care must be taken when comparing results from the 1990s surveys to surveys from the 1980s, due to the significant changes made in 1993. For a detailed discussion of these changes, please see the 1993 through 2001 NSRCG methodology reports.

For the 2001 NSRCG, there were no significant procedural changes that would affect the comparison of results between the 1999 and 2001 survey cycles.

Comparisons With IPEDS Data

The National Center for Education Statistics (NCES) conducts a survey of the nation's postsecondary institutions, called the Integrated Postsecondary Education Data System (IPEDS). The IPEDS Completions Survey reports on the number of degrees awarded by all major fields of study, along with estimates by sex and race/ethnicity.

Although both the NSRCG and IPEDS are surveys of postsecondary education and both report on completions from those institutions, there are important differences in the target populations for the two surveys that directly affect the estimates of the number of graduates. The reason for the different target populations is that the goals of the surveys are not the same. The IPEDS estimates of degrees awarded are intended to measure the output of the educational system. The NSRCG estimates are intended to measure the supply and utilization of a portion of graduates in the years following their completion of a degree. These goals result in definitions of the target population that are not completely consistent for the two surveys. Other differences between the estimates can be explained to a very large extent by a few important aspects of the design or reporting procedures in the two surveys. The main differences between the two studies that affect comparisons of estimates overall and by race/ethnicity are listed below.

Despite these factors, the NSRCG and IPEDS estimates are consistent when appropriate adjustments for these differences are made. For example, the proportional distributions of graduates by field of study are nearly identical, and the numerical estimates are similar. Further information on the comparison of NSRCG and IPEDS estimates is available in the report, A Comparison of Estimates in the NSRCG and IPEDS, available on the SESTAT website, at http://sestat.nsf.gov in the Research Compendium section.

Other Explanatory Information

Definitions

The following definitions are provided to facilitate the reader's use of the data in this report.

Major field of study: This is derived from the survey major field category most closely related to the respondent's degree field. Exhibit 1 gives a listing of the detailed major field codes used in the survey. Exhibit 2 gives a listing of the summary major field codes developed by NSF and used in the tables. The appendix lists the eligible and ineligible major fields within each summary category.

Occupation: Occupation is derived from the survey job list category most closely related to the respondent's primary job. Exhibit 3 gives a listing of the detailed job codes used in the survey, and Exhibit 4 gives the summary occupation codes developed by NSF and used in the tables.

Labor force: The labor force includes individuals working full or part time as well as those not working but seeking work or on layoff. It is a sum of the employed and the unemployed.

Unemployed: The unemployed are those who were not working on April 15 and were seeking work or on layoff from a job.

Type of employer: This is the sector of employment in which the respondent was working on his or her primary job held during the week of April 15, 2001. The following are the definitions for each of these categories. Private industry and business includes all private for-profit and private not-for-profit companies, businesses, and organizations, except those reported as educational institutions. It also includes persons reporting they were self-employed. Educational institutions include elementary and secondary schools, 2-year and 4-year colleges and universities, medical schools, university-affiliated research organizations, and all other educational institutions. Government includes local, state, and Federal Government, military, and commissioned corps.

Primary work activity: This refers to the activity that occupied the most time on the respondent's job. In reporting the data, those who reported applied research, basic research, development, or design work were grouped together in "research and development (R&D)." Those who reported accounting, finance or contracts, employee relations, quality or productivity management, sales and marketing, or managing and supervising were grouped into "management, sales, administration." Those who reported production, operations, maintenance, professional services or other activities were given the code "other."

Full-time salary: This is the annual salary for the full-time employed, defined as those who were not self-employed (either incorporated or not incorporated), who worked at least 35 hours per week on their principal job, and who were not full-time students on the reference date (April 15, 2001). Graduates who did not receive salaries were asked to report earned income, excluding business expenses. To annualize salary, reported hourly salaries were multiplied by the reported number of hours paid per week and then by 52; reported weekly salaries were multiplied by 52; and reported monthly salaries were multiplied by 12. Yearly and academic-year salaries were left as reported.
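The annualization rules can be summarized in a small sketch; the function and category names are hypothetical.

```python
# Hypothetical sketch of the salary annualization rules described above.
def annualize_salary(amount, basis, hours_paid_per_week=None):
    """Convert a reported salary to an annual figure."""
    if basis == "hourly":
        return amount * hours_paid_per_week * 52
    if basis == "weekly":
        return amount * 52
    if basis == "monthly":
        return amount * 12
    return amount    # yearly and academic-year salaries are left as reported

print(annualize_salary(25.00, "hourly", hours_paid_per_week=40))   # 52000.0
print(annualize_salary(4_000, "monthly"))                          # 48000
```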

Race/ethnicity: All graduates, both U.S. citizens and non-U.S. citizens, are included in the race/ethnicity data presented in this report. In tables with sufficient sample size, race/ethnicity data are presented by the specific categories of white, non-Hispanic; black, non-Hispanic; Hispanic; Asian or Pacific Islander; and American Indian or Alaskan Native. In tables where the sample size is not sufficient to present data by specific category, the groups of black, Hispanic, and American Indian or Alaskan Native are combined into the underrepresented minority category.

Coverage of Tables

The tables in this report present information for two groups of recent graduates. The first of these groups consists of persons who earned bachelor's degrees in S&E fields from U.S. institutions during academic years 1999 and 2000. The second group includes those who earned S&E master's degrees during the same two years.

List of Tables


These tables are available in Excel (.xls) format and Portable Document Format (.pdf).



Table 1. Major fields and corresponding sample rates, by cohort and degree: April 2001
Table 2. Response status of sampled graduates and unweighted and weighted graduate response rates, by graduate characteristics
Table 3. Unweighted number, weighted estimates, and standard errors for 1999 and 2000 science and engineering bachelor's degree recipients, by graduate characteristics: April 2001
Table 4. Unweighted number, weighted estimates, and standard errors for 1999 and 2000 science and engineering master's degree recipients, by graduate characteristics: April 2001
Table 5. Estimated parameters for computing generalized variances for estimates from the 2001 NSRCG



Footnote

[1] Prior to graduate sampling, we unduplicated the sampling frames (sampling lists received from the institutions) for degrees received within the same academic year, same degree level, and from the same institution. These cases were generally due to double majors. For example, if a graduate received two eligible bachelor's degrees during the 1999 academic year, we kept only one record on the frame, recording one major as the first major and the other as the second major (according to a set protocol).

