
Section B.
Technical Notes and Technical Tables



Survey Methodology[13]

Reporting Unit

The reporting unit for the Survey of Industrial Research and Development is the company[14], defined as a business organization of one or more establishments under common ownership or control. The survey includes two groups of enterprises: (1) companies known to conduct R&D, and (2) a sample representation of companies for which information on the extent of R&D activity is uncertain.

Frame Creation

The Standard Statistical Establishment List (SSEL), a Bureau of the Census compilation that contains information on more than 3 million establishments with paid employees, was the target population from which the frame used to select the 1999 survey sample was created (see table B-1 for population and sample sizes). For companies with more than one establishment, data were summed to the company level and the resulting company record was used to select the sample and process and tabulate the survey data.

After data were summed to the company level, each company then was assigned a single North American Industry Classification System (NAICS)[15] code based on payroll. The method used followed the hierarchical structure of the NAICS. The company was first assigned to the economic sector, defined by a 2-digit NAICS code representing manufacturing, mining, trade, etc., that accounted for the highest percentage of its aggregated payroll. Then the company was assigned to a subsector, defined by a 3-digit NAICS code, that accounted for the highest percentage of its payroll within the economic sector. Finally, the company was assigned a 4-digit NAICS code within the subsector, again based on the highest percentage of its aggregated payroll within the subsector. Assignment below the 4-digit level was not done because of the concentration of R&D in relatively few industries and disclosure concerns (see below for detailed discussions of both issues).
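
The hierarchy just described lends itself to a short illustration. The sketch below is not the Census Bureau's production routine; it assumes only that each establishment record carries a 4-digit NAICS code and an annual payroll amount (the record layout and figures are hypothetical).

```python
from collections import defaultdict

def assign_company_naics(establishments):
    """Assign a single 4-digit NAICS code to a company by successively
    choosing the 2-, 3-, and 4-digit code that accounts for the largest
    share of the company's aggregated payroll.

    establishments: list of (naics4, payroll) pairs, one per establishment.
    """
    def top_prefix(records, length):
        totals = defaultdict(float)
        for code, payroll in records:
            totals[code[:length]] += payroll
        return max(totals, key=totals.get)

    # 2-digit economic sector with the largest payroll share
    sector = top_prefix(establishments, 2)
    in_sector = [(c, p) for c, p in establishments if c.startswith(sector)]

    # 3-digit subsector within that sector
    subsector = top_prefix(in_sector, 3)
    in_subsector = [(c, p) for c, p in in_sector if c.startswith(subsector)]

    # 4-digit industry within that subsector
    return top_prefix(in_subsector, 4)

# Example: payroll split across three establishments in two sectors.
print(assign_company_naics([("3341", 40.0), ("3344", 35.0), ("5415", 50.0)]))
# "3341": sector 33 (payroll 75) beats 54 (50); then subsector 334; then 3341
```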

The frame from which the survey sample was drawn included all for-profit companies classified in nonfarm industries. For surveys prior to 1992, the frame was limited to companies above certain size criteria based on number of employees.[16] These criteria varied by industry. Some industries were excluded from the frame because it was believed that they contributed little or no R&D activity to the final survey estimates. For the 1992 sample, new industries were added to the frame,[17] and the size criteria were lowered considerably and applied uniformly to firms in all industries. As a result, nearly 2 million enterprises with 5 or more employees[18] were given a chance of selection for subsequent samples, including the 1999 sample. For comparison, the frame for the 1987 sample included 154,000 companies of specified sizes and industries.

Defining Sampling Strata

A fundamental change initiated in 1995 and repeated for subsequent samples was the redefinition of the sampling strata. For the survey years 1992–94, 165 sampling strata were established, each stratum corresponding to one or more 3-digit-level SIC codes. The objective was to select sufficient representation of industries to determine whether alternative or expanded publication levels were warranted. For the 1995-98 surveys, the sampling strata corresponded to publication level industry aggregations. For each year, 40 publication levels were defined. These correspond to the original 25 groupings of manufacturing industries used as sampling strata before 1992 and an additional 15 groupings of nonmanufacturing industries. For the 1999 survey, with the conversion to NAICS, 29 manufacturing and 20 nonmanufacturing strata were defined corresponding to the 4-digit industries and groups of industries for which statistics were developed and published.

Identifying Certainty Companies

The criteria for identifying companies selected for the survey with certainty, which were most recently modified in 1996, have remained the same for subsequent surveys. To limit the growth occurring each year in the number of certainty cases within the total sample, the certainty criterion was raised for the 1996 survey from $1 million to $5 million in total R&D expenditures based on data gathered from the 1995 survey. With a fixed total sample size, there was concern that the representation of the very large noncertainty universe by a smaller sample each year would be inadequate. Before 1994, companies with 1,000 or more employees had been selected with certainty, but it was observed that the level of spending varied considerably and that many of these companies reported no R&D expenditures each year. For these reasons, it was determined that these companies should be given chances of selection based upon the size of their R&D spending if they were in the previous survey or upon an estimated R&D value if they were not. Consequently, the size criterion based on the number of employees was dropped for surveys after 1994.
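
A minimal sketch of the certainty test as described above; the $5 million threshold is taken from the text, while the record layout is hypothetical.

```python
def is_certainty(company, threshold_millions=5.0):
    """A company enters the sample with certainty when its total R&D
    expenditures (reported in the prior survey or otherwise estimated)
    meet or exceed the $5 million criterion; the pre-1994 test based on
    1,000 or more employees is no longer applied."""
    return company["total_rd_millions"] >= threshold_millions

print(is_certainty({"total_rd_millions": 7.2}))   # True
print(is_certainty({"total_rd_millions": 0.8}))   # False: sampled from the noncertainty frame
```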

Frame Partitioning

Partitioning of the frame for noncertainty companies into large and small companies was first introduced in 1994 because of concern arising from a study of 1992 survey results, which showed that a disproportionate number of small companies were being selected for the sample and often assigned very large weights. These small companies seldom reported R&D activity. This disproportion was a result of the minimum probability rule (see "Sample Size" below) used as part of the independent probability proportionate to size (PPS) sampling procedure employed exclusively prior to 1994 (PPS sampling is discussed in detail later under "Sample Selection"). This rule increased the probabilities of selection for several hundred thousand smaller companies. For the 1994 and subsequent surveys, simple random sampling (SRS) was applied to the small company partition, causing the smaller companies to be sampled more efficiently than with independent PPS sampling because there was little variability in their size (SRS is discussed in detail later under "Sample Selection"). The large company partition continued to be sampled using independent PPS sampling.

In 1994 and 1995, total company payroll was the basis for partitioning the noncertainty frame. For each industry grouping, the largest companies, representing the top 90 percent of the total payroll for the industry grouping, were included in the PPS frame. The balance, the smaller companies comprising the remaining 10 percent of payroll for the industry grouping, was included in the SRS frame.

Beginning in 1996, total company employment became the basis for partitioning the frame. The total company employment levels defining the partitions were based on the relative contribution to total R&D expenditures of companies in different employment size groups in both the manufacturing and nonmanufacturing sectors. In the manufacturing sector, all companies with total employment of 50 or more were included in the large company partition. In the nonmanufacturing sector, all companies with total employment of 15 or more were included in the large company partition. Companies in the respective sectors with employment below these values were included in the small company partition. In the 1999 survey, the large company partition contained almost 610,000 companies and the small company partition contained approximately 1.25 million companies. These counts were comparable to those in the 1998 survey (550,000 and 1.3 million, respectively).
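
The partitioning rule can be sketched as follows, assuming each noncertainty frame record carries a manufacturing flag and a total employment count (the field names and example companies are illustrative; the thresholds of 50 and 15 are from the text).

```python
def partition(company):
    """Assign a noncertainty frame record to the large or small company
    partition using the employment thresholds in effect since 1996."""
    threshold = 50 if company["is_manufacturing"] else 15
    return "large" if company["employment"] >= threshold else "small"

frame = [
    {"name": "A", "is_manufacturing": True,  "employment": 120},
    {"name": "B", "is_manufacturing": True,  "employment": 30},
    {"name": "C", "is_manufacturing": False, "employment": 15},
    {"name": "D", "is_manufacturing": False, "employment": 8},
]
for company in frame:
    print(company["name"], partition(company))   # A large, B small, C large, D small
```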

Identifying "Zero" Industries

One final modification in the frame development for 1996, which was repeated for the 1997 and 1998 surveys, was the designation of "zero" industries in the large company partition. Zero industries were those three-digit SIC industries having no R&D expenditures reported in survey years 1992-94—the years when estimates by three-digit SIC industry were formed. These industries remained within the scope of the survey, but only a limited sample was drawn from them because it was unlikely that these industries conducted R&D. Simple random sampling was used to control the number of companies selected from these industries. For the 1999 survey, no zero industries were defined because this was the first year NAICS was used. For the next several cycles of the survey, NAICS industries will be evaluated to ascertain if any of them should be designated "zero" industries.

Sample Selection

Beginning with the 1996 cycle of the survey, a significant revision in the procedure for selecting samples from the partitions led to a change in the development and presentation of estimates. The revised procedure was repeated for subsequent surveys. For the 1995 survey, the sample of companies from the large company partition was selected using probability proportionate to size sampling (see below) in each of the 40 strata (discussed previously under "Defining Sampling Strata"). Likewise, the simple random sampling of the small company partition was done for each of the 40 strata. However, beginning in 1996, the number of strata established for the small company partition was reduced to two. One stratum consisted of small companies classified in manufacturing industries and the second stratum consisted of small companies classified in nonmanufacturing industries. Simple random sampling continued as the selection method for these two strata.

The purpose of selecting the small company panel from these two strata was to reduce the variability in industry estimates largely attributed to the random year-to-year selection of small companies by industry and the high sampling weights that sometimes occurred. As a consequence of this change, estimates for industry groups within manufacturing and nonmanufacturing were not possible from these two strata, as noted on the affected tables. The statistics for the detailed industry groups were based only on the sample from the large company partition. Estimates from the small company partition were included in statistics for total manufacturing, total nonmanufacturing, and all industries. For completeness, in the affected tables for 1996-98 these estimates also were added to the categories "other manufacturing" and "other nonmanufacturing." For 1999 the estimates are published separately in the "small manufacturing companies" and "small nonmanufacturing companies" categories.

Probability Proportionate to Size

Imputing R&D. Ideally, a company's size measure would be its R&D expenditures. Unfortunately, except for companies that were in a previous survey or for which there is information from external sources, it is impossible to know the R&D expenditures for every firm in the universe (i.e., R&D information is not available from the Standard Statistical Establishment List (SSEL)). Consequently, the probability of selection for most companies is based on estimated R&D expenditures. Because total payroll is known for each company in the universe (i.e., payroll information is available from the SSEL), it is possible to estimate R&D from payroll using relationships derived from previous survey data. Imputation factors relating these two variables are derived for each industry grouping. To impute R&D for a given company, the imputation factors are applied to the company's payroll in each industry grouping, and a final measure is obtained by adding the industry grouping components. The effect, in general, is to give firms with large payrolls higher probabilities of selection, in agreement with the assumption that larger companies are more likely to perform R&D. Estimated R&D values are computed for companies in the small company partition as well. The aggregate of reported and estimated R&D from each company in both the large and small company partitions represents a total universe measure of the previous year's R&D expenditures. However, assigning R&D to every company results in an overstatement of this measure. To adjust for the overstatement, the universe measure is scaled down using factors developed from the relationship between the frame measure of the prior year's R&D and the final prior-year survey estimates. These factors, computed at levels corresponding to published industry levels, are used to adjust the originally imputed R&D values so that the new frame total for R&D at these levels approximates the prior year's published values. This adjustment provides for better allocation of the sample among these levels.
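
A simplified sketch of the imputation and scaling steps described above follows. The imputation factors, publication level, and dollar figures are hypothetical; in practice the factors are derived from prior survey data for each industry grouping.

```python
def impute_rd(payroll_by_group, factors):
    """Estimate a company's R&D by applying industry-grouping imputation
    factors (estimated R&D per dollar of payroll) to its payroll in each
    grouping and summing the components."""
    return sum(factors[group] * payroll for group, payroll in payroll_by_group.items())

def scale_frame(frame_rd, published_prior):
    """Scale imputed company values so that, at each publication level,
    the frame total approximates the published prior-year estimate."""
    scaled = {}
    for level, companies in frame_rd.items():
        frame_total = sum(companies.values())
        factor = published_prior[level] / frame_total if frame_total else 0.0
        scaled[level] = {name: rd * factor for name, rd in companies.items()}
    return scaled

# Hypothetical publication level with two companies and one imputation factor.
factors = {"3251": 0.05}                      # estimated R&D per dollar of payroll
frame_rd = {"chemicals": {"X": impute_rd({"3251": 10.0}, factors),
                          "Y": impute_rd({"3251": 30.0}, factors)}}
print(scale_frame(frame_rd, {"chemicals": 1.0}))
# Imputed values 0.5 and 1.5 sum to 2.0; scaling to the published 1.0 halves both.
```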

For 1999, the distribution of companies by payroll and estimated R&D in the large company partition was skewed as in earlier frames (i.e., the correlation of payroll and R&D was high because R&D had been estimated based on payroll). Because of this skewness, PPS sampling remained the appropriate selection technique for this group; that is, large companies had higher probabilities of selection than did small companies. (Had there been a zero-industry stratum in the 1999 sample, it would have been sampled as discussed previously under "Identifying 'Zero' Industries.") However, a different approach to PPS sampling was introduced beginning with the 1998 survey. Historically, PPS sampling had been accomplished using an independent sampling methodology, i.e., the selection (or nonselection) of a given company was independent of the sampling result (select or nonselect) for any other company. This implied that over repeated samplings in a given stratum, different sample sizes would result, which added variability to the sample estimates. For 1998, a fixed sample size PPS method was introduced. This method ensured that the sample size desired for a given stratum was achieved, thus eliminating the error due to sample size variation from the sample estimates. For a given sample size, the fixed sample size method will produce more precise estimates on average than the independent method. The fixed sample size methodology was repeated for the 1999 survey.
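
The text does not specify which fixed-sample-size PPS algorithm the Census Bureau adopted. The sketch below uses systematic PPS sampling on cumulated size measures, one common way to draw exactly n units with probability proportional to size; the companies and size measures are hypothetical.

```python
import random

def systematic_pps(sizes, n, seed=None):
    """Select exactly n units with probability proportional to size by
    walking equally spaced targets through the cumulated size measures.

    sizes: dict mapping company id -> size measure (e.g., imputed R&D).
    Assumes no single size exceeds total/n, so no unit is hit twice.
    """
    rng = random.Random(seed)
    total = sum(sizes.values())
    step = total / n                       # sampling interval
    start = rng.uniform(0, step)           # random start within the first interval
    targets = iter(start + k * step for k in range(n))

    selected, cumulative = [], 0.0
    target = next(targets)
    for unit, size in sizes.items():
        cumulative += size
        while target is not None and target <= cumulative:
            selected.append(unit)
            target = next(targets, None)
    return selected

sizes = {"co1": 40, "co2": 5, "co3": 20, "co4": 1, "co5": 50, "co6": 10, "co7": 30, "co8": 9}
print(systematic_pps(sizes, n=3, seed=1))   # always exactly 3 companies; larger sizes favored
```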

Simple Random Sampling

As described earlier, only two major strata were defined for samples in the small company partition, manufacturing and nonmanufacturing. The use of SRS implied that each company within a stratum had an equal probability of selection. The total sample allocated to the small company partition was dependent upon the total sample specified for the survey and upon the total sample necessary to satisfy criteria established for the large partition. Once determined, the allocation of this total by stratum was made proportionate to the stratum's payroll contribution to the entire partition.
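
A sketch of this proportional allocation, assuming the total sample available to the small company partition has already been fixed (the sample size and payroll shares shown are illustrative only).

```python
def allocate_srs_sample(total_sample, payroll_by_stratum):
    """Allocate the small company sample between the manufacturing and
    nonmanufacturing strata in proportion to each stratum's share of the
    partition's total payroll."""
    total_payroll = sum(payroll_by_stratum.values())
    return {stratum: round(total_sample * payroll / total_payroll)
            for stratum, payroll in payroll_by_stratum.items()}

print(allocate_srs_sample(6000, {"manufacturing": 120.0, "nonmanufacturing": 280.0}))
# {'manufacturing': 1800, 'nonmanufacturing': 4200}
```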

Sample Stratification and Relative Standard Error Constraints

The particular sample selected was one of a large number of samples of the same type and size that by chance might have been selected. Statistics resulting from the different samples would differ somewhat from each other. These differences are represented by estimates of sampling error or variance. The smaller the sampling error, the more precise the statistic.

Controlling Sampling Error. Historically, it has been difficult to achieve control over the sampling error of survey estimates. Efforts were confined to controlling the amount of error due to sample size variation, but this was only one component of the overall sampling error. The other component depended on the correlation between the data from the sampling frame used to assign probabilities (namely R&D values either imputed or reported in the previous survey) and the actual current year reported data. The nature of R&D is such that these correlations could not be predicted with any reliability. Consequently, precise controls on overall sampling error were difficult to achieve.

For recent surveys, primary concern was placed on controlling error for the large company partition since nearly all of the R&D activity was identified from that portion of the sample. For the 1998 and 1999 surveys, with the introduction of the fixed sample size sampling procedure, the component of sampling error due to sample size variation was eliminated. However, the error attributable to the remaining component persisted. Because there was still no way to predict how well the data from the sampling frame would correlate with actual survey data, the approach taken to allocate the sample across the various strata was to assign probabilities in the same manner as in the past, when independent sampling was used. The probabilities resulting from this allocation technique determined the sample sizes to be selected from each stratum, subject to the overall sample size constraint dictated by the survey budget. Although the actual survey sampling errors could not be predicted, the parameters used to assign probabilities and the use of the minimum probability rule resulted in a desirable number of companies being sampled from the large company partition (see "Sample Size" below).

Sampling Strata and Standard Error Estimates. A limitation of the sample allocation process for the large company partition should be noted. The constraints used to control the sample size in each stratum were based on a universe total that, in large part, was improvised. That is, as previously noted, an R&D value was assigned to every company in the frame, even though most of these companies actually may not have had R&D expenditures. The value assigned was imputed for the majority of companies in the frame and, as a consequence, the estimated universe total and the distribution of individual company values, even after scaling, did not necessarily reflect the true distribution. Assignment of sampling probability was nevertheless based on this distribution. The presumption was that actual variation in the sample design would be less than that estimated, because many of the sampled companies have true R&D values of zero, not the widely varying values that were imputed using total payroll as a predictor of R&D. Previous sample selections indicate that in general this presumption held, but exceptions have occurred when companies with large sampling weights have reported large amounts of R&D spending. See table B-2 for a list by industry of the standard error estimates for selected items and table B-3 for a list of the standard error estimates of total R&D by state.

Nonsampling Error. In addition to sampling error, estimates are subject to nonsampling error. Errors are grouped in five categories: specification, coverage, response, nonresponse, and processing. For detailed discussions on the sources, control, and measurement of each of these types of error, see U.S. Bureau of the Census (1994b and 1994f).

Sample Size

The parameters set to control sampling error discussed above resulted in a sample size of 18,529 companies from the large company partition. For the small company partition, two strata (manufacturing and nonmanufacturing) were identified. Also included was a separate stratum of small companies that could not be classified into a NAICS industry because of incomplete industry identification in the SSEL. In 1999, as in the 1994 through 1998 surveys, a small number of companies was selected from this group in the hope that an accurate industry identification could be obtained at a later point. Ultimately, a final sample of 5,902 companies was selected from the small company partition. The sample initially allocated to the two strata was proportionate to each stratum's share of total payroll for the small company partition. The total sample size finally determined for the 1999 survey was 24,431. This total included an adjustment to the sample size based on a minimum probability rule and changes in the operational status of some companies. With the use of fixed sample size PPS sampling for the large company partition and simple random sampling for the small company partition (and with no zero-industry stratum for 1999), the target sample size was met.

Minimum Probability Rule. A minimum probability rule was imposed for both partitions. As noted earlier, for the large partition, probabilities of selection proportionate to size were assigned to each company, where size was the reported or imputed R&D value assigned to the company. Selected companies received a sample weight that was the inverse of their probability of selection. Selected companies that ultimately reported R&D expenditures vastly larger than their assigned values could have adverse effects on the statistics, which were based on the weighted values of survey responses. To lessen these effects on the final statistics, the maximum weight of a company was controlled by specifying a minimum probability that could be assigned to the company. If the probability based on company size was less than this minimum, it was reset to the minimum value. The consequence of raising these original probabilities to the minimum probability was to raise the sample size. Similarly, a maximum weight for each stratum was established for the simple random sampling of the small company partition. If the sample size initially allocated to a stratum resulted in a stratum weight above this maximum value, then the sample size was increased until the stratum weight no longer exceeded the maximum.
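
The minimum probability rule for the PPS partition can be sketched as follows. The floor of 0.02 is illustrative only, chosen here so that the implied maximum weight matches the cap of 50 noted below under "Weighting and Maximum Weights"; the actual floor used by the survey is not stated in the text.

```python
def selection_probability(size, total_size, sample_size, p_min):
    """Probability proportionate to size (reported or imputed R&D), raised
    to the minimum probability whenever the size-based value falls below it.
    Raising the probability caps the sampling weight at 1 / p_min."""
    p = min(1.0, sample_size * size / total_size)
    return max(p, p_min)

# Hypothetical company: a tiny imputed R&D value would otherwise get
# probability 0.0005 and weight 2,000; the floor caps the weight at 50.
p = selection_probability(size=0.5, total_size=10_000.0, sample_size=10, p_min=0.02)
print(p, 1 / p)   # 0.02 50.0
```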

Changes in Operational Status. Between the time that the frame was created and the survey was prepared for mailing, the operational status of some companies changed. That is, they were merged with or acquired by another company, or they were no longer in business. Before preparing the survey for mailing, the operational status was updated to identify these changes. As a result, the number of companies mailed a survey form was somewhat smaller than the number of companies initially selected for the survey.

Weighting and Maximum Weights

Weights were applied to each company record to produce national estimates. Within the PPS partitions of the sample, company records were given weights up to a maximum of 50; for companies within the SRS partitions, company records were given weights up to a maximum of 250.

Survey Forms

Two forms are used each year to collect data for the survey. Known large R&D performers are sent a detailed survey form, Form RD-1.[19] The Form RD-1 requests data on sales or receipts; total employment; employment of scientists and engineers; expenditures for R&D performed within the company with Federal funds and with company and other funds; character of work (basic research, applied research, and development); company-sponsored R&D expenditures in foreign countries; R&D performed under contract by others; federally funded R&D by contracting agency; R&D costs by type of expense; domestic R&D expenditures by state; energy-related R&D; and foreign R&D by country. Because companies receiving the Form RD-1 have participated in previous surveys, computer-imprinted data reported by the company for the previous year are supplied for reference. Companies are encouraged to revise or update these imprinted data if they have more current information; however, previously published prior-year statistics were revised only if large disparities were reported.

Small R&D performers and firms included in the sample for the first time are sent Form RD-1A. This form collects the same information as Form RD-1 except for five items: Federal R&D support to the firm by contracting agency, R&D costs by type of expense, domestic R&D expenditures by state, energy-related R&D, and foreign R&D by country. It also includes a screening item that allows respondents to indicate that they do not perform R&D. No prior-year information is made available because the majority of the companies that receive the Form RD-1A were not surveyed in the previous year.

Recent Survey Form Content Changes

For the 1997 and 1998 surveys, data on federally-funded and total R&D performed under contract to others (or "contracted-out") were collected to better measure the amount of R&D performed both within and between companies. For earlier years, data were collected only on non-federally funded contracted-out R&D.[20]

Based on information obtained from telephone interviews with a sample of respondents, a new item, R&D depreciation costs, was added to the 1998 Form RD-1. In prior years R&D depreciation was included in the "other costs" category of R&D expenditures. Also beginning with the 1998 survey, items used to collect detailed information on the allocation of R&D expenditures by field of science and engineering and by product class, and R&D expenditures for pollution abatement were eliminated. Further, the amount of detail requested for energy-related R&D was reduced. Item nonresponse on each of these items was unacceptably high relative to their response burden.

For 1999, the survey forms remained as they were for 1998.

Number of Survey Forms Sent

Form RD-1 was mailed to companies that reported R&D expenditures of $5 million or more in the 1998 survey. Approximately 1,600 companies received Form RD-1 and approximately 22,600 received Form RD-1A. Both survey forms and the instructions provided to respondents are reproduced in section C, Survey Documents.

Follow-up for Survey Nonresponse

The 1999 survey forms were mailed in March 2000. Recipients of Form RD-1A were asked to respond within 30 days, while Form RD-1 recipients were given 60 days. A follow-up form and letter were mailed to RD-1A recipients every 30 days if their completed survey form had not been received; a total of five follow-up mailings were conducted for delinquent RD-1A recipients.

A letter was mailed to Form RD-1 recipients 30 days after the initial mailing, reminding them that their completed survey forms were due within the next 30 days. A second form and reminder letter were mailed to Form RD-1 recipients after 60 days. Two additional follow-up mailings were conducted for delinquent Form RD-1 recipients.

In addition to the mailings, telephone follow-up was used to encourage response from those firms ranked among the 300 largest R&D performers, based on total R&D expenditures reported in the previous survey. Table B-4 shows the number of companies in each industry or industry group that received a survey form and the percentage that responded to the survey.

Imputation for Item Nonresponse

For various reasons, many firms chose to return the survey form with one or more blank items.[21] For some firms, internal accounting systems and procedures may not have allowed quantification of specific expenditures. Others may have refused to answer any voluntary questions as a matter of company policy.[22]

When respondents did not provide the requested information, estimates for the missing data were made using various methods. Specific rules govern imputation for missing data depending on the item being imputed. For some items (domestic sales, total employment, total R&D, and number of research scientists and engineers), missing current year data are always imputed: rates of change are applied to prior year data regardless of whether the prior year data were reported or imputed. For other items (e.g., basic research, subcontracted R&D, and foreign R&D), missing current year data are imputed only if the company reported the item in either of the prior two years. A third type of imputation occurs when detail does not sum to the total (e.g., Federal R&D by agency). In this case, if the prior year detail was reported rather than imputed, current year data are distributed based on the previous distribution pattern of the reporting unit; otherwise, an industry average distribution is applied to the total to derive a value for each detailed item. Rates of change are calculated by item within each NAICS category or industry. The calculations are based on weighted data for all companies that reported both variables. In the case of inter-item ratios (e.g., R&D to sales), calculations are based on data for all companies that reported both items in the current reporting period. For current-to-prior year ratios (e.g., employment), calculations are based on data for all companies that reported that item in both years.
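
One plausible form of the rate-of-change imputation described above is sketched below. The weighted ratio-of-sums formula for the industry rate is an assumption about the calculation, and the weights and values are hypothetical.

```python
def industry_rate_of_change(reports):
    """Weighted current-to-prior ratio computed from companies that
    reported the item in both years: sum(w * current) / sum(w * prior)."""
    numerator = sum(weight * current for weight, current, _ in reports)
    denominator = sum(weight * prior for weight, _, prior in reports)
    return numerator / denominator

def impute_current(prior_value, rate_of_change):
    """Carry a nonrespondent's prior-year value (reported or previously
    imputed) forward at the industry rate of change."""
    return prior_value * rate_of_change

# Hypothetical industry: three reporters, each growing 8 percent, yield a
# rate of 1.08 that is applied to a nonrespondent's prior-year R&D of 12.0.
rate = industry_rate_of_change([(10, 5.40, 5.0), (25, 2.16, 2.0), (50, 1.08, 1.0)])
print(round(rate, 2), round(impute_current(12.0, rate), 2))   # 1.08 12.96
```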

Outside sources of information are also used for imputing missing data. During the edit review process, analysts compare data reported to the Survey of Industrial Research and Development by publicly owned companies with the companies' reports to the Securities and Exchange Commission (SEC). Data items matched include domestic sales, domestic employment, total or company-funded R&D, and in some cases federally funded R&D. This comparison provides analysts a means to (1) potentially resolve inconsistencies between current and prior year data on the R&D survey, (2) impute missing data for specific items, and (3) ensure that companies are reporting comparable values in both reports. A second source for verifying or obtaining domestic employment and domestic sales data is the US Census Bureau's Business Register. Data for these items are collected on economic census and annual survey forms.[23] Table B-5 contains imputation rates for the principal survey items.

Response Rates and Mandatory Versus Voluntary Reporting

Current survey reporting requirements divide survey items into two groups: mandatory and voluntary. Response to four data items on the survey forms (total R&D expenditures, Federal R&D funds, net sales, and total employment) was mandatory, whereas response to the remaining items was voluntary. During the 1990 survey cycle, NSF conducted a test of the effect of reporting on a completely voluntary basis to determine if combining both mandatory and voluntary items on one survey form influences response rates. For this test, the 1990 sample was divided into two panels of approximately equal size. One panel, the mandatory panel, was asked to report as usual, with four mandatory items and the remainder voluntary; the other panel was asked to report all items on a completely voluntary basis. The result of the test was a decrease in the overall survey response rate to 80 percent from levels of 88 percent in 1989 and 89 percent in 1988. The response rates for the mandatory and voluntary panels were 89 and 69 percent, respectively. Detailed results of the test were published in Research and Development in Industry: 1990. For firms that reported R&D expenditures in 1999, table B-6 shows the percentage that also reported data for other selected items.

Character of Work Estimates

Response to questions about character of work (basic research, applied research, and development) declined in the mid-1980s, and, as a result, imputation rates increased. The general imputation procedure described above became increasingly dependent upon information imputed in prior years, thereby distancing current year estimates from any reported information. Because of the increasing dependence on imputed data, NSF chose not to publish character of work estimates in 1986. The imputation procedure used to develop these estimates was revised in 1987 for use with later data and differs from the general imputation approach. The new method calculated the character of work distribution for a nonresponding firm only if that firm reported a distribution within a 5-year period, extending from 2 years before to 2 years after the year requiring imputation. Imputation for a given year was initially performed in the year the data were collected and was based on a character of work distribution reported in either of the 2 previous years, if any. It was performed again using new data collected in the next 2 years. If reported data followed no previously imputed or reported data, estimates for the previous periods were inserted based on the currently reported information. Similarly, if no reported data followed 2 years of imputed data, the 2 years of previously imputed data were removed. Thus, character of work estimates were revised as newly reported information became available and were not final for 2 years following their initial publication.

Beginning with 1995, previously estimated values were not removed for firms that did not report in the third year, nor were estimates made for the 2 previous years for firms reporting after 2 years of nonresponse. This process was changed because, in the prior period, revisions were minimal. Estimates continued to be made for 2 consecutive years of nonresponse and discontinued if the firm did not report character of work in the third year. If no reported data were available for a firm, character of work estimates were not imputed. As a consequence, only a portion of the total estimated R&D expenditures were distributed at the firm level. Those expenditures not meeting the requirements of the new imputation methodology were placed in a "not distributed" category. Table B-7 shows the character of work estimates along with the "not distributed" component for 1999.

NSF's objective in conducting the survey has always been to provide estimates for the entire population of firms performing R&D in the United States. However, the revised imputation procedure would no longer produce such estimates because of the "not distributed" component. A baseline estimation method thus was developed to allocate the "not distributed" amounts among the character of work components. In the baseline estimation method, the "not distributed" expenditures were allocated by industry group to the basic research, applied research, and development categories using the percentage splits in the distributed amounts for that industry. The allocation was done at the lowest level of published industry detail only; higher levels were derived by aggregation, just as national totals were derived by aggregation of individual industry estimates. The resulting estimates show higher performance shares for basic and applied research and a lower share for development than would have been calculated using the previous method. The estimates of basic research, applied research, and development provided in the tables in section A were calculated using the baseline estimation method.
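
A sketch of the baseline estimation method for a single industry group follows; the dollar amounts are hypothetical.

```python
def baseline_allocation(distributed, not_distributed):
    """Allocate an industry group's 'not distributed' R&D to basic research,
    applied research, and development using the percentage splits observed
    in that group's distributed amounts."""
    total = sum(distributed.values())
    return {component: amount + not_distributed * amount / total
            for component, amount in distributed.items()}

# Hypothetical industry group: 100 of distributed R&D split 10/30/60, plus
# 20 not distributed, allocated with the same 10/30/60 percentage splits.
print(baseline_allocation({"basic": 10.0, "applied": 30.0, "development": 60.0}, 20.0))
# {'basic': 12.0, 'applied': 36.0, 'development': 72.0}
```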

State Estimates

Form RD-1 requested that the total cost of R&D be distributed among the state(s) where the R&D was performed. An independent source, the Directory of American Research and Development (Data Base Publishing Group of the R. R. Bowker Company, last published for 1997), was used in conjunction with previous survey results to estimate R&D expenditures by state for companies that did not provide this information. The information on scientists and engineers published in the directory was used as a proxy indicator of the proportion of R&D expenditures within each state. R&D expenditures by state were estimated by applying the distribution of scientists and engineers by state from the directory to total R&D expenditures for these companies. These estimates were included with reported survey data to arrive at published estimates of R&D expenditures for each state.
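
The proxy allocation can be sketched as follows; the company, its R&D total, and its directory counts of scientists and engineers are hypothetical.

```python
def allocate_rd_by_state(total_rd, scientists_by_state):
    """Distribute a nonreporting company's total R&D across states in
    proportion to its directory counts of scientists and engineers."""
    total_se = sum(scientists_by_state.values())
    return {state: total_rd * count / total_se
            for state, count in scientists_by_state.items()}

# Hypothetical company: $40 million of R&D; the directory lists 150 scientists
# and engineers in California and 50 in Texas.
print(allocate_rd_by_state(40.0, {"CA": 150, "TX": 50}))   # {'CA': 30.0, 'TX': 10.0}
```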

Comparability of Statistics

This section summarizes survey improvements, enhancements, and changes in procedures and practices that may have affected the comparability of statistics produced from the Survey of Industrial Research and Development over time and with other statistical series.[24]

Industry Classification System

Beginning with the 1999 cycle of the survey, industry statistics are published using the North American Industry Classification System (NAICS). The ongoing development of NAICS has been a joint effort of statistical agencies in Canada, Mexico, and the United States. The system replaced the Standard Industrial Classification (1980) of Canada, the Mexican Classification of Activities and Products (1994), and the Standard Industrial Classification (SIC, 1987) of the United States.[25] NAICS was designed to provide a production-oriented system under which economic units with similar production processes are classified in the same industry. NAICS was developed with special attention to classifications for new and emerging industries, service industries, and industries that produce advanced technologies. NAICS not only eases comparability of information about the economies of the three North American countries, but it also increases comparability with the two-digit level of the United Nations' International Standard Industrial Classification (ISIC) system. Important for the Survey of Industrial Research and Development is the creation of several new classifications that cover major performers of R&D in the US. Among manufacturers, the computer and electronic products classification (NAICS 334) includes makers of computers and peripherals, semiconductors, and navigational and electromedical instruments. Among nonmanufacturing industries are information (NAICS 51) and professional, scientific, and technical services (NAICS 54). Information includes publishing, both paper and electronic; broadcasting; and telecommunications. Professional, scientific, and technical services includes a variety of industries. Of specific importance for the survey are the engineering and scientific R&D service industries.

Effects of NAICS on Survey Statistics. The change of industry classification system affects most of the detailed statistical tables produced from the survey. In this report, some tables which contain industry statistics from the 1997 and 1998 cycles of the survey, previously classified using the SIC system, have been reclassified using the new NAICS codes. This has been done to provide a bridge for users who want to make year-to-year comparisons below the aggregate level.

Company Size Classifications

Beginning with the 1999 cycle of the survey, the number of company size categories used to classify survey statistics was increased. The original 6 categories were expanded to 10 to emphasize the role of small companies in R&D performance. During 1998, companies with fewer than 500 employees spent $30.2 billion on industrial R&D performed in the United States. During 1999, they spent $34.1 billion (NSF 2001a). Of this amount, 21 percent ($7.0 billion) was spent by the smallest companies (those with at least 5 but fewer than 25 employees). The 1999 statistics further show that there was more growth in the amount of R&D performed by smaller companies than in the amount performed by larger companies. The more detailed business size information also facilitates better international comparisons. Generally, statistics produced by foreign countries that measure their industrial R&D enterprise are reported with more detailed company size classifications at the lower end of the scale than US industrial R&D statistics traditionally have been.[26] The new classifications of the US statistics will enable more direct comparisons with other countries' statistics.

Revisions to Historical and Immediate Prior Year Statistics

Revisions to historical statistics usually have been made because of changes in the industry classification of companies caused by changes in payroll composition detected when a new sample was drawn. Various methodologies have been adopted over the years to revise, or backcast, the data when revisions to historical statistics have become necessary. Documented revisions to the historical statistics from post-1967 surveys through 1992 are summarized in NSF (1994) and in annual reports for subsequent surveys. Detailed descriptions of the specific revisions made to the statistics from pre-1967 surveys are scarce, but US Bureau of the Census (1995) summarizes some of the major revisions.

Changes to reported data can come from three sources: respondents, analysts involved in survey and statistical processing, and the industry reclassification process. Prior to 1995, routine revisions were made to prior year statistics based on information from all three sources. Consequently, results from the current year survey were used not only to develop current year statistics, but also to revise immediate prior year statistics. Beginning with the 1995 survey, this practice was discontinued. The reasons for discontinuation of this practice were annual sampling, continual strengthening of sampling methodology, and improvements in data verification, processing, and nonresponse follow-up. Moreover, it was not clear that respondents or those who processed the survey results had any better information a year after the data were first reported. Thus, it was determined that routinely revising published survey statistics increased the potential for error and often confused users of the statistics. Revisions are now made to historical and immediate prior year statistics only if substantive errors are discovered.

Year-to-Year Changes

Comparability from year to year may be affected by new sample design, annual sample selection, and industry shifts.

Sample Design

By far the most profound influence on statistics from recent surveys occurred when the new sample design for the 1992 survey was introduced. Revisions to the 1991 statistics were dramatic (see Research and Development in Industry: 1992 for a detailed discussion). While the allocation of the sample was changed somewhat, the sample designs used for subsequent surveys were comparable to the 1992 sample design in terms of size and coverage.

Annual Sample Selection

With the introduction of annual sampling in 1992, more year-to-year change has resulted than when survey panels were used. There are two reasons for this. First, changes in the classification of companies not surveyed are now reflected in the year-to-year movement. Prior to annual sampling, a wedging operation, which was performed when a new sample was selected, was a means of adjusting the data series to account for the changes in classification that occurred in the frame (see the discussion on wedging later under "Time-Series Analyses"). Second, year-to-year correlation of R&D data is lost when independent samples are drawn each year.

Industry Shifts

The industry classification of companies is redefined each year with the creation of the sampling frame. By redefining the frame, the sample reflects current distributions of companies by size and industry. A company may move from one industry to another because of changes in its payroll composition, which is used to determine the industry classification code (see the previous discussion under "Frame Creation"); changes in the industry classification system itself; or changes in the way the industry classification code was assigned or revised during survey processing.

A company's payroll composition can change because of the growth or decline of product or service lines, the merger of two or more companies, the acquisition of one company by another, divestitures, or the formation of conglomerates. Although unlikely, a company's industry designation could change every year with the introduction of annual sampling. When a company moves, a downward movement in R&D expenditures in one industry is balanced by an upward movement in another industry from one year to the next.

From time to time, the industry coding system used by Federal agencies that publish industry statistics is changed or revised to reflect the changing composition of US and North American industry. For statistics developed for 1988–91 from the 1988–91 surveys, companies retained the Standard Industrial Classification (SIC) codes assigned for the 1987 sample. These classifications were based on the 1977 SIC system. Because the last major revision of the SIC system was in 1987, that revision was used to classify companies in the 1992-98 surveys. As discussed above, the industrial classification system has been completely changed and, beginning with the 1999 cycle of the survey, the North American Industry Classification System (NAICS) is now used.

The method used to classify firms during survey processing was revised slightly in 1992. Research has shown that the impact on individual industry estimates was minor.[27] The current method used to classify firms was discussed previously under "Frame Creation." Methods used for past surveys are discussed in US Bureau of the Census (1995).

Capturing Small and Nonmanufacturing R&D Performers[28]

Before the 1992 survey, the sample of firms surveyed was selected at irregular intervals.[29] In intervening years, a panel of the largest firms known to perform R&D was surveyed. For example, a sample of about 14,000 firms was selected for the 1987 survey. For the 1988–91 studies, about 1,700 of these firms were resurveyed annually; the other firms did not receive survey forms, and their R&D data were estimated. This sample design was adequate during the survey's early years because R&D performance was concentrated in relatively few manufacturing industries. However, as more and more firms began entering the R&D arena, the old sample design proved increasingly deficient because it did not capture births of new R&D-performing firms. The entry of fledgling R&D performers into the marketplace was completely missed during panel years. Additionally, beginning in the early 1970s, the need for more detailed R&D information for nonmanufacturing industries was recognized. At that time, the broad industry classifications "miscellaneous business services" and "miscellaneous services" were added to the list of industry groups for which statistics were published. By 1975, about 3 percent of total R&D was performed by firms in nonmanufacturing industries.

During the mid-1980s, there was evidence that a significant amount of R&D was being conducted by an increasing number of nonmanufacturing firms; again, the number of industries used to develop the statistics for nonmanufacturers was increased. Consequently, since 1987 the annual reports in this series have included separate R&D estimates for firms in the communication, utility, engineering, architectural, research, development, testing, computer programming, and data processing service industries; hospitals; and medical labs. Approximately 9 percent of the estimated industrial R&D performance during 1987 was undertaken by nonmanufacturing firms.

After the list of industries for which statistics were published was expanded, it became clear that the sample design itself should be changed to reflect the widening population of R&D performers among firms in the nonmanufacturing industries[30] and small firms in all industries so as to account better for births of R&D-performing firms and to produce more reliable statistics. Beginning with the 1992 survey, NSF decided to (1) draw new samples with broader coverage annually, and (2) increase the sample size to approximately 25,000 firms.[31] As a result of the sample redesign, for 1992 the reported nonmanufacturing share was (and has continued to be) 25-30 percent of total R&D.[32]

Time-Series Analyses

The statistics resulting from this survey on R&D spending and personnel are often used as if they were prepared using the same collection, processing, and tabulation methods over time. Such uniformity has not been the case. Since the survey was first fielded, improvements have been made to increase the reliability of the statistics and to make the survey results more useful. To that end, past practices have been changed and new procedures instituted. Preservation of the comparability of the statistics has, however, been an important consideration in making these improvements. Nonetheless, changes to survey definitions, the industry classification system, and the procedure used to assign industry codes to multi-establishment companies have had some, though not substantial, effects on the comparability of statistics.[33]

The aspect of the survey that had the greatest effect on comparability was the selection of samples at irregular intervals (i.e., 1967, 1971, 1976, 1981, 1987, and 1992) and the use of a subset or panel of the last sample drawn to develop statistics for intervening years. As discussed earlier, this practice introduced cyclical deterioration of the statistics. As compensation for this deterioration, periodic revisions were made to the statistics produced from the panels surveyed between sample years. Early in the survey's history, various methods were used to make these revisions.[34] After 1976 and until the 1992 advent of annual sampling, a linking procedure called wedging was used.[35] In wedging, the 2 sample years on each end of a series of estimates served as benchmarks in the algorithms used to adjust the estimates for the intervening years.[36]
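
The report does not reproduce the wedging algorithms themselves (see US Bureau of the Census 1994g for the considerations involved). Purely for illustration, the sketch below shows one simple linear form of such a benchmark adjustment; the series and benchmark values are hypothetical.

```python
def wedge(series, end_benchmark):
    """Linearly wedge a run of estimates between two sample years.

    series: estimates for years 0..k, where year 0 is the earlier sample
    year (already a benchmark) and year k is the later sample year as
    carried forward by the panel. end_benchmark is the revised year-k
    value from the new sample; the correction is phased in linearly."""
    k = len(series) - 1
    ratio = end_benchmark / series[-1]
    return [value * (1 + (i / k) * (ratio - 1)) for i, value in enumerate(series)]

# Hypothetical panel-based series 100, 104, 108, 112 with a new-sample
# benchmark of 118 for the final year; intervening years are pulled up
# proportionally while the earlier benchmark year is left unchanged.
print([round(v, 1) for v in wedge([100, 104, 108, 112], 118)])
# [100.0, 105.9, 111.9, 118.0]
```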

Comparisons to Other Statistical Series

NSF collects data on federally financed R&D from both Federal funding agencies—using the Survey of Federal Funds for Research and Development—and from performers of the work—industry, Federal labs, universities, and other nonprofit organizations—using the Survey of Industrial Research and Development and other surveys. As reported by Federal agencies, NSF publishes data on Federal R&D budget authority and outlays, in addition to Federal obligations. These terms are defined below:[37]

National R&D expenditure totals in NSF's National Patterns of R&D Resources report series are primarily constructed with data reported by performers and include estimates of Federal R&D funding to these sectors. Until performer-reported survey data on Federal R&D expenditures are available from industry and academia, data collected from the Federal agency funders of R&D are used to project R&D performance. When survey data from the performers subsequently are tabulated, as they were for this report, these statistics replace the projections based on funder expectations. Historically, the two survey systems have tracked fairly closely. For example, in 1980, performers reported using $29.5 billion in Federal R&D funding, and Federal agencies reported total R&D funding between $29.2 billion in outlays and $29.8 billion in obligations (NSF 1996b). In recent years, however, the two series have diverged considerably. The difference in the Federal R&D totals appears to be concentrated in funding of industry, primarily aircraft and missile firms, by the Department of Defense. Overall, industrial firms have reported significant declines in Federal R&D support since 1990 (see table A-1), while Federal agencies have reported level or slightly increased funding of industrial R&D (NSF 1999a). NSF is identifying and examining the factors behind these divergent trends.


Technical Tables






B-1 Survey of Industrial Research and Development—number of companies in the target population and selected for the sample, by industry and by size of company: 1999
B-2 Survey of Industrial Research and Development—relative standard error for survey estimates, by industry and by size of company: 1999
B-3 Survey of Industrial Research and Development—relative standard error for estimates of total R&D and percentage of estimates attributed to certainty companies, by state: 1999
B-4 Survey of Industrial Research and Development—unit response rates: number and percentage of companies that responded to the survey and percentage of companies that performed R&D, by industry and by type of survey form: 1999
B-5 Survey of Industrial Research and Development—imputation rates for survey items, by industry and by size of company: 1999
B-6 Survey of Industrial Research and Development—percentage of R&D-performing companies that reported non-zero data for major survey items: 1999
B-7 Survey of Industrial Research and Development—funds for and number of companies that performed industrial basic research, applied research, and development in the US, and funds and percent of funds not distributed, by industry, by size of company, and by source of funds: 1999


Survey Definitions

Employment, FTE R&D Scientists and Engineers. Number of people domestically employed by R&D-performing companies who were engaged in scientific or engineering work at a level that required knowledge, gained either formally or by experience, of engineering or of the physical, biological, mathematical, statistical, or computer sciences equivalent to at least that acquired through completion of a 4-year college program with a major in one of those fields. The statistics show full-time-equivalent (FTE) employment of persons employed by the company during the January following the survey year who were assigned full time to R&D, plus a prorated number of employees who worked part time on R&D.

Employment, Total. Number of people domestically employed by R&D-performing companies in all activities during the pay period that includes the 12th of March, the date most employers use when paying first quarter employment taxes to the Internal Revenue Service.

Federally Funded R&D Centers (FFRDCs). R&D-performing organizations administered by industrial, academic, or other institutions on a nonprofit basis and exclusively or substantially financed by the Federal Government. For the statistics in this report, R&D expenditures of industry-administered FFRDCs were included with the Federal R&D data of the industry classification of each of the administering firms. The industry-administered FFRDCs included in the 1999 survey, their corporate administrators, and locations are indicated below.

FFRDCs Supported by the Department of Energy
FFRDC Supported by the Department of Health and Human Services, National Institutes of Health

Funds for R&D, Company and Other Non-Federal. The cost of R&D performed within the company and funded by the company itself or by other nonfederal sources; does not include the cost of R&D supported by the company but contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or—to avoid double-counting—other companies.

Funds for R&D, Federal. The cost of R&D performed within the company under Federal R&D contracts or subcontracts and R&D portions of Federal procurement contracts and subcontracts; does not include the cost of R&D supported by the Federal Government but contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or other companies.

Funds for R&D, Total. The cost of R&D performed within the company in its own laboratories or in other company-owned or company-operated facilities, including expenses for wages and salaries, materials and supplies, property and other taxes, maintenance and repairs, depreciation, and an appropriate share of overhead; does not include capital expenditures or the cost of R&D contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or—to avoid double-counting—other companies.

Funds per R&D Scientist or Engineer. All costs associated with the performance of industrial R&D (salaries, wages, and fringe benefits paid to R&D scientists and engineers; materials and supplies used for R&D; depreciation on capital equipment and facilities used for R&D; and any other R&D costs) divided by the number of R&D scientists and engineers employed. To obtain a per person cost of R&D for a given year, the total R&D expenditures of that year were divided by an approximation of the number of full-time-equivalent (FTE) scientists and engineers engaged in the performance of R&D for that year. For accuracy, this approximation was the mean of the numbers of such FTE R&D-performing scientists and engineers as reported in January for the year in question and the subsequent year. For example, the mean of the numbers of FTE R&D scientists and engineers in January 1999 and January 2000 was divided into total 1999 R&D expenditures for a total cost per R&D scientist or engineer in 1999.
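
The calculation reduces to simple arithmetic, as in this sketch with hypothetical figures.

```python
def cost_per_rd_scientist(total_rd, fte_jan_current, fte_jan_next):
    """Total R&D for the year divided by the mean of the FTE counts of R&D
    scientists and engineers for January of that year and January of the
    following year."""
    return total_rd / ((fte_jan_current + fte_jan_next) / 2)

# Hypothetical company: $19.8 million of 1999 R&D, 95 FTE R&D scientists and
# engineers in January 1999 and 103 in January 2000 -> $200,000 per FTE.
print(cost_per_rd_scientist(19.8e6, 95, 103))   # 200000.0
```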

Net Sales and Receipts. Dollar values for goods sold or services rendered by R&D-performing companies to customers outside the company, including the Federal Government, less such items as returns, allowances, freight charges, and excise taxes. Domestic intracompany transfers and sales by foreign subsidiaries were excluded, but transfers to foreign subsidiaries and export sales to foreign companies were included.

R&D and Industrial R&D. R&D is the planned, systematic pursuit of new knowledge or understanding toward general application (basic research); the acquisition of knowledge or understanding to meet a specific, recognized need (applied research); or the application of knowledge or understanding toward the production or improvement of a product, service, process, or method (development). Basic research analyzes properties, structures, and relationships toward formulating and testing hypotheses, theories, or laws; applied research is undertaken either to determine possible uses for the findings of basic research or to determine new ways of achieving some specific, predetermined objectives; and development draws on research findings or other scientific knowledge for the purpose of producing new or significantly improved products, services, processes, or methods. As used in this survey, industrial basic research is the pursuit of new scientific knowledge or understanding that does not have specific immediate commercial objectives, although it may be in fields of present or potential commercial interest; industrial applied research is investigation that may use findings of basic research toward discovering new scientific knowledge that has specific commercial objectives with respect to new products, services, processes, or methods; and industrial development is the systematic use of the knowledge or understanding gained from research or practical experience directed toward the production or significant improvement of useful products, services, processes, or methods, including the design and development of prototypes, materials, devices, and systems. The survey covers industrial R&D performed by people trained—either formally or by experience—in engineering or in the physical, biological, mathematical, statistical, or computer sciences and employed by a publicly or privately owned firm engaged in for-profit activity in the United States. Specifically excluded from the survey are quality control, routine product testing, market research, sales promotion, sales service, and other nontechnological activities; routine technical services; and research in the social sciences or psychology.

References top

National Science Foundation (NSF). 1956. Science and Engineering in American Industry: Final Report on a 1953-54 Survey. NSF 56-16. Washington, DC: US Government Printing Office.

---. 1960. Science and Engineering in American Industry: 1956. NSF 59-50. Washington, DC: US Government Printing Office.

---. 1994. "1992 R&D Spending by US Firms Rises, NSF Survey Improved." SRS Data Brief. NSF 94-325. Arlington, VA.

---. 1995. "1993 Spending Falls for US Industrial R&D, Nonmanufacturing Share Increases." SRS Data Brief. NSF 95-325. Arlington, VA.

---. 1996a. "1994 Company Funding of US Industrial R&D Rises as Federal Support Continues to Decline." SRS Data Brief. NSF 96-310. Arlington, VA.

---. 1996b. National Patterns of R&D Resources: 1996. NSF 96-333. Arlington, VA.

---. 1997a. "1995 US Industrial R&D Rises, NSF Survey Statistics Expanded to Emphasize Role of Nonmanufacturing Industries." SRS Data Brief. NSF 97-332. Arlington, VA.

---. 1998a. "1996 US Industrial R&D: Firms Continue to Increase Their Investment." SRS Data Brief. NSF 98-317. Arlington, VA.

---. 1999a. National Patterns of R&D Resources: 1998. NSF 99-335. Arlington, VA.

---. 1999b. "1997 US Industrial R&D Performers." SRS Topical Report. NSF 99-355. Arlington, VA.

---. 2000a. Federal Funds for Research and Development: Fiscal Years 1998–2000, Volume 48. NSF 00-317. Arlington, VA.

---. 2000b. "1998 US Industrial R&D Performers Report Increased R&D." SRS Data Brief. NSF 00-320. Arlington, VA.

---. 2001a. "US Industrial R&D Performers Report Increased R&D in 1999; New Industry Coding and Size Classifications for NSF Survey." SRS Data Brief. NSF 01-326. Arlington, VA.

---. 2001b. Federal Research and Development Funding by Budget Function: Fiscal Years 1999–2001. NSF 01-316. Arlington, VA.

US Bureau of the Census. 1993. "Effects of the 1987 SIC Revision on Company Classification in the Survey of Industrial Research and Development (R&D)." Technical Memorandum. December 6.

---. 1994a. "Comparison of Company Coding Between 1992 and 1993 for the Survey of Industrial Research and Development." Technical Memorandum. November 3. Washington, DC.

---. 1994b. Documentation of Nonsampling Issues in the Survey of Industrial Research and Development. RR94/03. Washington, DC.

---. 1994c. An Evaluation of Imputation Methods for the Survey of Industrial Research and Development. ESMD-9404. Washington, DC.

---. 1994d. "Evaluation of Total Employment Cut-Offs in the Survey of Industrial Research and Development." Technical Memorandum. November 3. Washington, DC.

---. 1994e. "Reclassification of Companies in the 1992 Survey of Industrial Research and Development (R&D) for the Generation of the 'Analytical' Series." Technical Memorandum. October 25. Washington, DC.

---. 1994f. A Study of Processing Errors in the Survey of Industrial Research and Development. ESMD-9403. Washington, DC.

---. 1994g. "Wedging Considerations for the 1992 Research and Development (R&D) Survey." Technical Memorandum. June 10. Washington, DC.

---. 1995. Documentation of the Survey Design for the Survey of Industrial Research and Development: A Historical Perspective. Washington, DC.

Footnotes

[13] Information for this section was provided by the Manufacturing and Construction Division of the US Bureau of the Census, which collected and compiled the survey data for NSF. Copies of the technical papers cited can be obtained from NSF's Research and Development Statistics Program in the Division of Science Resources Statistics.

[14] In the Survey of Industrial Research and Development and in the publications presenting statistics resulting from the survey, the terms "company," "firm," and "enterprise" are used interchangeably. "Industry" refers to the 2-, 3-, or 4-digit North American Industrial Classification System (NAICS) codes or group of NAICS codes used to publish statistics resulting from the survey.

[15] The 1999 survey was the first year that companies were classified using NAICS. Prior to 1999, the Standard Industrial Classification (SIC) system was used. The two systems are discussed later under "Comparability of Statistics."

[16] See US Bureau of the Census (1994d).

[17] These industries are listed and discussed below under "Comparability of Statistics."

[18] The survey excludes companies with fewer than 5 employees to limit burden on small business enterprises in compliance with the Office of Management and Budget's (OMB) charge to Federal government agencies to limit "significant economic impact on...small entities."

[19] Form RD-1 is a revised version of Form RD-1L, formerly used to collect data from large R&D performers in odd-numbered years. For even-numbered years, an abbreviated questionnaire, Form RD-1S, was used. Beginning in 1998, Form RD-1L was streamlined, renamed Form RD-1, and the odd/even-numbered-year cycle was abandoned.

[20] The tables produced from the data collected in both the 1997 and 1998 surveys were "spotty." That is, because federally funded R&D contracted out to others was reported by so few companies, most of the resulting statistics arrayed by industry had to be suppressed for confidentiality reasons, and, consequently, the tables were not published. In the 1997 table, even the "all industries" total had to be suppressed, so no meaningful estimate can be made for that year. For 1998, however, the "all industries" total was $4.3 billion. We will continue to tabulate this item and report the aggregated figure when possible.

[21] For detailed discussions of the sources, control, and measurement of error resulting from item nonresponse, see US Bureau of the Census (1994b).

[22] All but four items—total R&D, Federal R&D, net sales, and total employment, which are included in the Census Bureau's annual mandatory statistical program—are voluntary. See further discussion under "Response Rates and Mandatory Versus Voluntary Reporting" later in this section.

[23] For detailed descriptions and analyses of the imputation methods and algorithms used, see US Bureau of the Census (1994c).

[24] See also US Bureau of the Census (1995).

[25] For a detailed comparison of NAICS to the Standard Industrial Classification (1987) of the United States, visit http://www.census.gov/epcd/www/naics.html.

[26] For more information, visit the Organisation for Economic Co-operation and Development (OECD) website at http://www.oecd.org.

[27] The effects of changes in the way companies were classified during survey processing are discussed in detail in US Bureau of the Census (1994e and 1994a).

[28] See also NSF (1994, 1995, and 1996a).

[29] Until 1967, samples were selected every 5 years. Subsequent samples were selected for 1971, 1976, 1981, and 1987.

[30] For the 1992 survey, 25 new nonmanufacturing industries and industry groups were added to the sample frame: agricultural services (SIC 07); fishing, hunting, and trapping (SIC 09); wholesale trade–nondurables (SIC 51); stationery and office supply stores (SIC 5112); industrial and personal service paper (SIC 5113); groceries and related products (SIC 514); chemicals and allied products (SIC 516); miscellaneous nondurable goods (SIC 519); home furniture, furnishings, and equipment stores (SIC 57); radio, TV, consumer electronics, and music stores (SIC 573); eating and drinking places (SIC 581); miscellaneous retail (SIC 59); nonstore retailers (SIC 596); real estate (SIC 65); holding and other investment offices (SIC 67); hotels, rooming houses, camps, and other lodging places (SIC 70); automotive repair, services, and parking (SIC 75); miscellaneous repair services (SIC 76); amusement and recreation services (SIC 79); health services (SIC 80); offices and clinics of medical doctors (SIC 801); offices and clinics of other health practitioners (SIC 804); miscellaneous health and allied services not elsewhere classified (SIC 809); engineering, accounting, research, management, and related services (SIC 87); and management and public relations services (SIC 874).

[31] Annual sampling also remedies the cyclical deterioration of the statistics that results from changes in a company's payroll composition because of product line and corporate structural changes.

[32] See also NSF (1997a, 1998a, 1999b, and 2000b).

[33] For discussions of each of these changes, see US Bureau of the Census (1994g); for considerations of comparability, see US Bureau of the Census (1994e and 1993).

[34] See US Bureau of the Census (1995).

[35] The process was dubbed wedging because of the wedgelike area produced on a graph that compares originally reported statistics with the revised statistics that resulted after linking.

[36] For a full discussion of the mathematical algorithm used for the wedging process that linked statistics from the 1992 survey with those from the 1987 survey, see US Bureau of the Census (1994g). In general, wedging

takes full advantage of the fact that in the first year of a new panel [when a new sample is selected], both current year and prior-year estimates are derived. Thus, two independent estimates exist for the prior year. The estimates from the new panel are treated as superior primarily because the new panel is based on updated classifications [the industry classifications in the prior panel are frozen] and is more fully representative of the current universe (the prior panel suffers from panel deterioration, especially a lack of birth updating). The limitations in the prior panel caused by these factors are naturally assumed to increase with time, so that in the revised series, we desire a gradual increase in the level of revision over time which culminates in the real difference observed between the two independent sample estimates of the prior year. At the same time, we desire that the annual movement of the original series be preserved to the degree possible in the revised series (US Bureau of the Census, 1994).

To that end, the wedging algorithm does not change estimates from sample years and adjusts estimates from panel years, recognizing that deterioration of the panel is progressive over time. One of the primary reasons for deciding to select a new sample annually rather than at irregular intervals was to avoid applying global revision processes such as wedging. Consequently, the 1992 survey was intended to be the last one affected by the wedging procedure.
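
The exact algorithm is specified in US Bureau of the Census (1994g). As a rough illustration only, the sketch below assumes a simple linear ramp of the revision across the intervening panel years; this is consistent with the general description quoted above but is not necessarily the formula the Census Bureau applied, and all names and figures are invented.

    # Linear-ramp wedging sketch (illustrative assumption, invented figures).
    def wedge(original, last_sample_year, overlap_year, new_overlap_estimate):
        """Leave the last sample-year estimate unchanged and revise each later
        panel-year estimate by a share of the gap between the old and new
        estimates for the overlap year, growing linearly to the full gap."""
        span = overlap_year - last_sample_year
        full_gap = new_overlap_estimate - original[overlap_year]
        revised = dict(original)
        for year in range(last_sample_year + 1, overlap_year + 1):
            fraction = (year - last_sample_year) / span
            revised[year] = original[year] + fraction * full_gap
        return revised

    # Invented example: 1987 is the last sample year of the old panel; the new
    # (1992) panel re-estimates 1991 at 121.0 versus the original 115.0.
    original = {1987: 100.0, 1988: 104.0, 1989: 108.0, 1990: 111.0, 1991: 115.0}
    revised = wedge(original, 1987, 1991, 121.0)
    # revised[1987] == 100.0 (unchanged); revised[1991] == 121.0 (full revision).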

[37] See also NSF (2000a).

