Much of the information for this appendix was provided by the Manufacturing and Construction Division of the U.S. Bureau of the Census, which collected and compiled the survey data. Copies of the technical papers cited can be obtained from NSF's Research and Development Statistics Program in the Division of Science Resources Statistics. The first part of this appendix focuses on recent changes to the survey methodology; major historical changes are discussed later in Comparability of Statistics. More detailed historical information is available from individual annual reports (http://www.nsf.gov/statistics/industry/).
The reporting unit for the Survey of Industrial Research and Development is the company, defined as a business organization of one or more establishments under common ownership or control. The survey includes two groups of enterprises: (1) companies known to conduct R&D, and (2) a sample representation of companies for which information on the extent of R&D activity is uncertain.
The Standard Statistical Establishment List (SSEL), a Bureau of the Census compilation that contains information on more than 3 million establishments with paid employees, was the target population from which the frame used to select the 2002 survey sample was created (see table A-1 for population and sample sizes). For companies with more than one establishment, data were summed to the company level and the resulting company record was used to select the sample and process and tabulate the survey data.
After data were summed to the company level, each company then was assigned a single North American Industry Classification System (NAICS) code based on payroll. The method used followed the hierarchical structure of the NAICS. The company was first assigned to the economic sector, defined by a 2-digit NAICS code representing manufacturing, mining, trade, etc., that accounted for the highest percentage of its aggregated payroll. Then the company was assigned to a subsector, defined by a 3-digit NAICS code, that accounted for the highest percentage of its payroll within the economic sector. Finally, the company was assigned a 4-digit NAICS code within the subsector, again based on the highest percentage of its aggregated payroll within the subsector. Assignment below the 4-digit level was not done because of the concentration of R&D in relatively few industries and disclosure concerns. Both issues are discussed later in this appendix.
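The hierarchical payroll-based assignment described above can be sketched as follows. This is an illustrative sketch only; the establishment record layout and function names are assumptions, not the Census Bureau's actual processing system:

```python
from collections import defaultdict

def assign_naics(establishments):
    """Assign a single 4-digit NAICS code to a multi-establishment
    company by drilling down the NAICS hierarchy, choosing at each
    level the code that accounts for the largest share of the
    company's aggregated payroll.

    `establishments` is a list of (naics4, payroll) pairs; these
    names are illustrative, not the survey's actual record layout.
    """
    # Sum payroll by full 4-digit code.
    payroll = defaultdict(float)
    for naics4, pay in establishments:
        payroll[naics4] += pay

    def best(codes, digits):
        # Pick the code prefix of length `digits` with the most payroll.
        totals = defaultdict(float)
        for code in codes:
            totals[code[:digits]] += payroll[code]
        return max(totals, key=totals.get)

    codes = list(payroll)
    sector = best(codes, 2)                    # economic sector, e.g. "33"
    codes = [c for c in codes if c.startswith(sector)]
    subsector = best(codes, 3)                 # subsector, e.g. "334"
    codes = [c for c in codes if c.startswith(subsector)]
    return best(codes, 4)                      # 4-digit industry
```

Note that because the assignment is hierarchical, a company can be classified into a 4-digit industry that does not, by itself, have the single largest payroll: a company with $60 of payroll in 3341, $30 in 3342, and $80 in 5112 is assigned to sector 33 ($90 total) and ends up in 3341, even though 5112 has the largest 4-digit payroll.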
A fundamental change initiated in 1995 and repeated for subsequent samples was the redefinition of the sampling strata. For the survey years 1992–94, 165 sampling strata were established, each stratum corresponding to one or more 3-digit-level SIC codes. The objective was to select sufficient representation of industries to determine whether alternative or expanded publication levels were warranted. For the 1995–98 surveys, the sampling strata corresponded to publication-level industry aggregations. For each year, 40 publication levels were defined. These correspond to the original 25 groupings of manufacturing industries used as sampling strata before 1992 and an additional 15 groupings of nonmanufacturing industries. Beginning with the 1999 survey, with the conversion to NAICS, 29 manufacturing and 20 nonmanufacturing strata were defined corresponding to the 4-digit industries and groups of industries for which statistics were developed and published.
To limit the growth occurring each year in the number of companies selected for the survey with certainty, the certainty criterion was raised for the 1996 survey from $1 million to $5 million in total R&D expenditures based on data gathered from the 1995 survey. With a fixed total sample size, there was concern that the representation of the very large noncertainty universe by a smaller sample each year would be inadequate. Because the sample size was increased for 2002, the certainty criterion was lowered from $5 million to $3 million in total R&D expenditures.
Beginning in 2001, the methodology used to produce statistics by state was modified to address the recurring problem of large year-to-year variation in many state estimates. Under the new methodology, the 50 largest companies in each state, ranked by payroll, are selected with certainty, thus providing more coverage of R&D performers (see State Estimates below).
Beginning in 1996, total company employment became the basis for partitioning the frame. The total company employment levels defining the partitions were based on the relative contribution to total R&D expenditures of companies in different employment size groups in both the manufacturing and nonmanufacturing sectors. In the manufacturing sector, all companies with total employment of 50 or more were included in the large company partition. In the nonmanufacturing sector, all companies with total employment of 15 or more were included in the large company partition. Companies in the respective sectors with employment below these values were included in the small company partition.
For the 2002 survey, the frame was partitioned into three groups: (1) companies known to conduct R&D, (2) companies that previously only reported zero R&D, and (3) companies for which information about the extent of R&D activity is uncertain. There were 5,817 companies in the first group, 55,374 companies in the second group, and 1,770,658 companies in the third group.
Beginning with the 1996 cycle of the survey, a significant revision in the procedure for selecting samples from the partitions led to a change in the development and presentation of estimates. The revised procedure was repeated for subsequent surveys. For the 1995 survey, the sample of companies from the large-company partition was selected using probability proportionate to size sampling (see below) in each of the 40 strata (discussed previously under Defining Sampling Strata). Likewise, the simple random sampling of the small-company partition was done for each of the 40 strata. However, beginning in 1996, the number of strata established for the small-company partition was reduced to two. One stratum consisted of small companies classified in manufacturing industries and the second stratum consisted of small companies classified in nonmanufacturing industries. Simple random sampling continued as the selection method for these two strata.
The purpose of selecting the small-company panel from these two strata was to reduce the variability in industry estimates largely attributed to the random year-to-year selection of small companies by industry and the high sampling weights that sometimes occurred. As a consequence of this change, estimates for industry groups within manufacturing and nonmanufacturing were not possible from these two strata, as noted on the affected tables. The statistics for the detailed industry groups were based only on the sample from the large-company partition. Prior to 1999, estimates from the small-company partition were included in statistics for total manufacturing, total nonmanufacturing, and all industries. For the reports for 1999 and 2000, the estimates were published separately in the "small manufacturing companies" and "small nonmanufacturing companies" categories. For 2001, the frame still had the small-company partition, but the tabulation of the small companies was incorporated with the large companies. For 2002, the small-company partitions were eliminated from the survey frame. In this report, small-company data were incorporated with those from large companies for 1999–2002.
Imputing R&D. It would be ideal if company size could be determined by R&D expenditures. Unfortunately, except for companies that were in a previous survey or for which information is available from external sources, it is impossible to know the R&D expenditures of every firm in the universe (i.e., R&D information is not available from the SSEL). Consequently, the probability of selection for most companies is based on estimated R&D expenditures. Because total payroll is known for each company in the universe (i.e., payroll information is available from the SSEL), it is possible to estimate R&D from payroll using relationships derived from previous survey data. Imputation factors relating these two variables are derived for each industry grouping. To impute R&D for a given company, the imputation factors are applied to the company's payroll in each industry grouping, and a final measure is obtained by adding the industry grouping components. The effect, in general, is to give firms with large payrolls higher probabilities of selection, in agreement with the assumption that larger companies are more likely to perform R&D. Estimated R&D values are computed for companies in the small-company partition as well. The aggregate of reported and estimated R&D from each company in both the large- and small-company partitions represents a total universe measure of the previous year's R&D expenditures. However, assigning R&D to every company results in an overstatement of this measure. To adjust for the overstatement, the universe measure is scaled down using factors developed from the relationship between the frame measure of the prior year's R&D and the final prior-year survey estimates. These factors, computed at levels corresponding to published industry levels, are used to adjust the originally imputed R&D values so that the new frame totals for R&D at these levels approximate the prior year's published values. This adjustment provides for better allocation of the sample among these levels.
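The imputation and scaling steps might be sketched as follows. The field names and the exact form of the scaling rule are assumptions; the appendix describes the procedure only at a high level:

```python
from collections import defaultdict

def impute_frame_rd(companies, factors, published_totals):
    """Illustrative sketch of frame R&D imputation (assumed record
    layout). R&D is imputed for non-reporting companies by applying
    industry-level R&D-to-payroll factors to payroll in each industry
    grouping, then the imputed values are scaled so that each
    industry's frame total approximates the prior year's published
    estimate."""
    # Step 1: impute R&D as the sum, over industry groupings, of
    # the grouping's imputation factor times the company's payroll.
    for c in companies:
        if c["rd"] is None:
            c["rd"] = sum(factors[ind] * pay
                          for ind, pay in c["payroll"].items())
            c["imputed"] = True
        else:
            c["imputed"] = False

    # Step 2: scale the imputed values down so that, per industry,
    # reported R&D + scaled imputed R&D matches the published total
    # (one simple way to realize the scaling the text describes).
    reported = defaultdict(float)
    imputed = defaultdict(float)
    for c in companies:
        (imputed if c["imputed"] else reported)[c["industry"]] += c["rd"]
    for c in companies:
        if c["imputed"]:
            ind = c["industry"]
            c["rd"] *= (published_totals[ind] - reported[ind]) / imputed[ind]
    return companies
```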
For 2001, the distribution of companies by payroll and estimated R&D in the large-company partition was skewed as in earlier frames (i.e., the correlation of payroll and estimated R&D was high because estimated R&D had been calculated based on payroll). Because of this skewness, probability proportionate to size (pps) sampling remained the appropriate selection technique for this group. That is, large companies had higher probabilities of selection than did small companies. However, a different approach to pps sampling was introduced beginning with the 1998 survey. Historically, pps sampling had been accomplished using an independent sampling methodology, i.e., the selection (or nonselection) of a given company was independent of the sampling result (select or nonselect) of any other company. This implied that over repeated samplings in a given stratum, different size samples would result. This added more variability to the sample estimates. For 1998, a fixed sample size pps method was introduced. This method ensured that the sample size desired for a given stratum was achieved, thus eliminating error because of sample size variation from the sample estimates. For a given sample size, the fixed sample size method will produce more precise estimates on average than the independent method. The fixed sample size methodology has been replicated for every survey year since the 1998 survey.
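Fixed-sample-size pps selection is commonly implemented with systematic pps sampling. The sketch below illustrates one standard approach; the appendix does not specify the Census Bureau's exact algorithm, so this is an assumption:

```python
import random

def pps_systematic(units, n):
    """Draw exactly n units with probability proportional to a size
    measure, using systematic pps sampling: lay the units end to end
    on a line of length equal to the total size, then select the unit
    under each of n equally spaced points with a random start.

    `units` is a list of (unit, size) pairs. Certainty units (size
    larger than the sampling interval) would normally be removed
    before selection to avoid duplicates.
    """
    total = sum(size for _, size in units)
    step = total / n                       # sampling interval
    start = random.uniform(0, step)        # random start in [0, step)
    points = [start + i * step for i in range(n)]

    selected, cum, i = [], 0.0, 0
    for unit, size in units:
        cum += size
        while i < n and points[i] < cum:   # point falls inside this unit
            selected.append(unit)
            i += 1
    return selected
```

Unlike independent selection, this always yields a sample of exactly n units, which eliminates the component of sampling error due to sample-size variation that the text describes.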
There was a significant change in assigning probabilities for the 2002 survey year. If a company in the frame reported any positive R&D data in the previous four years, then that company received a measure of size equal to the most recent reported positive R&D expenditures and was placed in the first frame partition. If a company reported zero R&D expenditures in the previous four years, then that company was placed in the second frame partition. The remaining companies received a measure of size equal to their company payroll, and then were placed in the third frame partition. Relative standard error (RSE) constraints by industry and by state were imposed separately in the first partition and the company received a probability of selection for each industry in which it had activity, as well as each state. The company's final probability was the maximum of these industry and state probabilities. The same was done for companies in the third partition.
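The maximum-probability rule might be sketched as follows; the per-constraint probability formula and all field names are assumptions for illustration:

```python
def final_probability(company, industry_targets, state_targets):
    """Sketch of the 2002 probability-assignment rule (details
    assumed). Each constraint — one per industry in which the company
    has activity and one per state — implies a pps probability
    min(1, n * size / total), where n is the target sample size for
    that stratum and total is the stratum's total measure of size.
    The company's final probability is the maximum over all of them."""
    probs = []
    for ind, size in company["size_by_industry"].items():
        n, total = industry_targets[ind]
        probs.append(min(1.0, n * size / total))
    for st, size in company["size_by_state"].items():
        n, total = state_targets[st]
        probs.append(min(1.0, n * size / total))
    return max(probs)
```

Taking the maximum guarantees that every industry and state constraint is satisfied simultaneously, at the cost of a somewhat larger expected sample.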
For 1996–2000, only two major strata were defined for samples in the small-company partition, manufacturing and nonmanufacturing. The use of simple random sampling (srs) implied that each company within a stratum had an equal probability of selection with the exception of the preassigned certainties. The total sample allocated to the small-company partition was dependent upon the total sample specified for the survey and upon the total sample necessary to satisfy criteria established for the large partition. Once determined, the allocation of this total by stratum was made proportionate to the stratum's payroll contribution to the entire partition.
The small manufacturing and small nonmanufacturing strata, along with an unknown-NAICS stratum (added in 2001 to account for companies with an incomplete or missing NAICS code), were eliminated from the srs portion of the 2002 survey frame. The srs portion of the frame consisted only of those companies in the second partition, i.e., those companies that reported only zero R&D expenditures within the previous four years. These companies were split into two simple random sampling strata. Companies in the first stratum were classified in manufacturing and received a probability of selection of roughly 0.01. Companies in the second stratum were classified in nonmanufacturing and assigned a probability of selection of around 0.004.
The particular sample selected was one of a large number of samples of the same type and size that by chance might have been selected. Statistics resulting from the different samples would differ somewhat from each other. These differences are represented by estimates of sampling error or variance. The smaller the sampling error, the more precise the statistic, that is, the closer it is to the result that would have occurred had a complete canvass been done under the same definitions and procedures. The accuracy of the estimate, how close it is to the true value, is a function not only of sampling error but also of nonsampling error.
Controlling Sampling Error. Historically, it has been difficult to achieve control over the sampling error of survey estimates. Efforts were confined to controlling the amount of error due to sample size variation, but this was only one component of the overall sampling error. The other component depended on the correlation between the data from the sampling frame used to assign probabilities (namely R&D values either imputed or reported in the previous survey) and the actual current year reported data. The nature of R&D is such that these correlations could not be predicted with any reliability. Consequently, precise controls on overall sampling error were difficult to achieve.
For recent surveys, primary concern was placed on controlling error for the large-company partition because nearly all of the R&D activity was identified from that portion of the sample. Since 1998, with the introduction of the fixed sample size sampling procedure, the component of sampling error due to sample size variation was eliminated. However, the amount of error attributable to the remaining component of the sample remained. Since there was still no way to predict how well the data from the sampling frame would correlate with actual survey data, the approach taken to allocate the sample across the various strata was to assign probabilities in the same manner as in the past when independent sampling was used. The probabilities resulting from this allocation technique determined the sample sizes to be selected from each stratum subject to the overall sample size constraint dictated by the survey budget. Although the actual survey sampling errors could not be predicted, the parameters used to assign probabilities and the use of the minimum probability rule resulted in a desirable number of companies being sampled from the large-company partition (see Sample Size below).
Sampling Strata and Standard Error Estimates. A limitation of the sample allocation process for the large-company partition should be noted. The constraints used to control the sample size in each stratum were based on a universe total that, in large part, was improvised. That is, as previously noted, an R&D value was assigned to every company in the frame, even though most of these companies actually may not have had R&D expenditures. The value assigned was imputed for the majority of companies in the frame and, as a consequence, the estimated universe total and the distribution of individual company values, even after scaling, did not necessarily reflect the true distribution. Assignment of sampling probability was nevertheless based on this distribution. The presumption was that actual variation in the sample design would be less than that estimated, because many of the sampled companies have true R&D values of zero, not the widely varying values that were imputed using total payroll as a predictor of R&D. Previous sample selections indicate that in general this presumption held, but exceptions have occurred when companies with large sampling weights have reported large amounts of R&D spending. See table A-2 for a list by industry of the standard error estimates for selected items and table A-3 for a list of the standard error estimates of total R&D by state.
Nonsampling Error. In addition to sampling error, estimates are subject to nonsampling error. Errors are grouped in five categories: specification, coverage, response, nonresponse, and processing. For detailed discussions on the sources, control, and measurement of each of these types of error, see U.S. Bureau of the Census (1994b and 1994f).
For 2002, the parameters set to control sampling error discussed above resulted in sample sizes of 4,395 companies from the first partition of the pps portion of the frame, 262 companies from the second frame partition, and 26,525 companies from the third frame partition. The overall final sample consisted of 31,182 companies. This total included an adjustment to the sample size based on a minimum probability rule and changes in the operational status of some companies.
Changes in Operational Status. Between the time that the frame was created and the survey was prepared for mailing, the operational status of some companies changed. That is, they were merged with or acquired by another company, or they were no longer in business. Before preparing the survey for mailing, the operational status was updated to identify these changes. As a result, the number of companies mailed a survey form was somewhat smaller than the number of companies initially selected for the survey.
The 2002 sample design for the survey was complex and oversampled known R&D performers. Thus, sample weights were required to produce nationally representative estimates. The sample weights used were the inverse of the probability of selection. As has been the case since 1997, the 2002 sample design included a restriction on the maximum weight (or minimum probability of selection) of any company in the sample. The maximum weight for companies classified in manufacturing was restricted to 100, while the maximum weight for companies classified in nonmanufacturing was restricted to 250.
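The weighting rule can be illustrated directly:

```python
def sample_weight(prob, is_manufacturing):
    """Design weight for a sampled company: the inverse of its
    selection probability, capped at the maximum weight (equivalently,
    a minimum probability of selection) in use since 1997 — 100 for
    manufacturing, 250 for nonmanufacturing."""
    cap = 100 if is_manufacturing else 250
    return min(1.0 / prob, cap)
```

A manufacturing company selected with probability 0.5 carries a weight of 2 (it represents itself and one similar unsampled company), while one selected with probability 0.001 is capped at a weight of 100 rather than 1,000.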
Two forms are used each year to collect data for the survey. Known large R&D performers are sent a detailed survey form, Form RD-1. The Form RD-1 requests data on sales or receipts, total employment, employment of scientists and engineers, expenditures for R&D performed within the company with federal funds and with company and other funds, character of work (basic research, applied research, and development), company-sponsored R&D expenditures in foreign countries, R&D performed by others, R&D performed in collaboration with others, federally funded R&D by contracting agency, R&D costs by type of expense, R&D costs by technology area, domestic R&D expenditures by state, energy-related R&D, and foreign R&D by country. Because companies receiving the Form RD-1 have participated in previous surveys, computer-imprinted data reported by the company for the previous year are supplied for reference. Companies are encouraged to revise or update the prior-year data if they have more current information; however, prior-year statistics that had been previously published were revised only if large disparities were reported.
Small R&D performers and firms included in the sample for the first time were sent Form RD-1A. This form collects the same information as Form RD-1 except for five items: Federal R&D support to the firm by contracting agency, R&D costs by type of expense, domestic R&D expenditures by state, energy-related R&D, and foreign R&D by country. It also includes a screening item that allows respondents to indicate that they do not perform R&D. No prior-year information is made available since the majority of the companies that receive the Form RD-1A have not been surveyed in the previous year.
For 2001, a new item asking for R&D costs by technology area was added to both survey forms. In 2002, a new item asking for R&D done in collaboration with others was added to the Form RD-1.
For the 2002 survey, Form RD-1 was mailed to companies that reported R&D expenditures of $3 million or more in the 2001 survey. Approximately 2,376 companies received Form RD-1 and approximately 28,828 received Form RD-1A. Both survey forms and the instructions provided to respondents are reproduced in appendix B, Survey Documents.
The 2002 survey forms were mailed in March 2003. Recipients of Form RD-1A were asked to respond within 30 days, while Form RD-1 recipients were given 60 days. A follow-up form and letter were mailed to RD-1A recipients every 30 days if their completed survey form had not been received; a total of five follow-up mailings were conducted for delinquent RD-1A recipients.
A letter was mailed to Form RD-1 recipients 30 days after the initial mailing, reminding them that their completed survey forms were due within the next 30 days. A second form and reminder letter were mailed to Form RD-1 recipients after 60 days. Two additional follow-up mailings were conducted for delinquent Form RD-1 recipients.
In addition to the mailings, telephone follow-up was used to encourage response from those firms ranked among the 300 largest R&D performers, based on total R&D expenditures reported in the previous survey. Table A-4 shows the number of companies in each industry or industry group that received a survey form and the percentage that responded to the survey.
For various reasons, many firms chose to return the survey form with one or more blank items. (For detailed discussions on the sources, control, and measurement of error resulting from item nonresponse, see U.S. Bureau of the Census 1994b.) For some firms, internal accounting systems and procedures may not have allowed quantification of specific expenditures. Others may have refused to answer any voluntary questions as a matter of company policy. (All but five items are voluntary. See further discussion under Response Rates and Mandatory/Voluntary Reporting.)
When respondents did not provide the requested information, estimates for the missing data were made using imputation algorithms. In general, the imputation algorithms computed values for missing items by applying the average percentage change for the target item in the nonresponding firm's industry to the item's prior-year value for that firm, reported or imputed. This approach, with minor variation, was used for most items. (For detailed descriptions and analyses of the imputation methods and algorithms used, see U.S. Bureau of the Census 1994c.) Table A-5 contains imputation rates for the principal survey items.
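The general imputation rule can be sketched as follows. The exact definition of the industry-average percentage change is an assumption here; U.S. Bureau of the Census (1994c) gives the actual algorithms:

```python
def impute_item(prior_value, industry_reports):
    """Impute a missing item for a nonresponding firm by applying the
    average change for that item in the firm's industry to the firm's
    prior-year value (reported or previously imputed).

    `industry_reports` holds (prior, current) pairs for the item from
    responding firms in the same industry; using the ratio of the
    industry sums as the "average percentage change" is an assumed
    simplification of the Census Bureau's algorithm."""
    prior_sum = sum(p for p, _ in industry_reports)
    curr_sum = sum(c for _, c in industry_reports)
    change = curr_sum / prior_sum   # industry-average ratio of change
    return prior_value * change
```

For example, if responding firms in an industry collectively reported a 10 percent increase in the item, a nonresponding firm with a prior-year value of 100 would be imputed a current-year value of 110.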
Survey reporting requirements divided survey items into two groups: mandatory and voluntary. Responses to five data items were mandatory; responses to the remaining items were voluntary. The mandatory items were total R&D expenditures, federal R&D funds, net sales, total employment (which are included in the Census Bureau's annual mandatory statistical program) and the distribution of R&D by state. During the 1990 survey cycle, NSF conducted a test of the effect of reporting on a completely voluntary basis to determine whether combining both mandatory and voluntary items on one survey form influences response rates. For this test, the 1990 sample was divided into two panels of approximately equal size. One panel, the mandatory panel, was asked to report as usual on four mandatory items with the remainder voluntary, and the other panel was asked to report all items on a completely voluntary basis. The result of the test was a decrease in the overall survey response rate to 80 percent from levels of 88 percent in 1989 and 89 percent in 1988. The response rates for the mandatory and voluntary panels were 89 percent and 69 percent, respectively. Detailed results of the test were published in Research and Development in Industry: 1990. For firms that reported R&D expenditures in 2002, table A-6 shows the percentage that also reported data for other selected items.
Response to questions about character of work (basic research, applied research, and development) declined in the mid-1980s, and, as a result, imputation rates increased. The general imputation procedure described above became increasingly dependent upon information imputed in prior years, thereby distancing current-year estimates from any reported information. Because of the increasing dependence on imputed data, NSF chose not to publish character of work estimates in 1986. The imputation procedure used to develop these estimates was revised in 1987 for use with later data and differs from the general imputation approach. The new method calculated the character of work distribution for a nonresponding firm only if that firm reported a distribution within a five-year period, extending from two years before to two years after the year requiring imputation. Imputation for a given year was initially performed in the year the data were collected and was based on a character of work distribution reported in either of the two previous years, if any. It was again performed using new data collected in the next two years. If reported data followed no previously imputed or reported data, previous period estimates were inserted based on the currently reported information. Similarly, if reported data did not follow two years of imputed data, the two years of previously imputed data were removed. Thus, character of work estimates were revised as newly reported information became available and were not final for two years following their initial publication.
Beginning with 1995, previously estimated values were not removed for firms that did not report in the third year, nor were estimates made for the two previous years for firms reporting after two years of nonresponse. This process was changed because in the prior period revisions were minimal. Estimates continued to be made for two consecutive years of nonresponse and discontinued if the firm did not report character of work in the third year. If no reported data were available for a firm, character of work estimates were not imputed. As a consequence, only a portion of the total estimated R&D expenditures were distributed at the firm level. Those expenditures not meeting the requirements of the new imputation methodology were placed in a "not distributed" category.
NSF's objective in conducting the survey has always been to provide estimates for the entire population of firms performing R&D in the United States. However, the revised imputation procedure would no longer produce such estimates because of the not-distributed component. A baseline estimation method thus was developed to allocate the not-distributed amounts among the character of work components. In the baseline estimation method, the not-distributed expenditures were allocated by industry group to the basic research, applied research, and development categories using the percentage splits in the distributed category for that industry. The allocation was done at the lowest level of published industry detail only; higher levels were derived by aggregation, just as national totals were derived by aggregation of individual industry estimates. This method results in higher shares for basic and applied research and a lower share for development than would have been calculated using the previous method.
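The baseline allocation can be illustrated with a minimal sketch of the proportional split described above:

```python
def allocate_not_distributed(distributed, not_distributed):
    """Allocate an industry's not-distributed R&D among basic
    research, applied research, and development in proportion to the
    distributed amounts in that industry (done at the lowest published
    industry level only; higher levels are derived by aggregation)."""
    total = sum(distributed.values())
    return {work: amount + not_distributed * amount / total
            for work, amount in distributed.items()}
```

For example, an industry with distributed amounts of 10, 30, and 60 for basic research, applied research, and development splits a not-distributed amount of 100 in the same 10/30/60 proportions.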
Using data collected during the 1999 and 2000 cycles of the survey, reporting anomalies for the character of work survey items, especially for basic research, were investigated. It was discovered that a number of large companies known to develop and manufacture products reported all of their R&D as basic research. This pattern is implausible and prompted a renewed effort to strengthen the character of work estimates produced from the survey. Identification of the anomalous reporting patterns was completed, and edit checks were improved for processing of the 2001 and 2002 data. Consequently, publication of character of work distributions of R&D has resumed, and the tables containing historical basic research, applied research, and development estimates have been revised and footnoted accordingly.
Form RD-1 requests a distribution of the total cost of R&D among the states where R&D was performed. Prior to the 1999 survey, an independent source, the Directory of American Research and Development, published by the Data Base Publishing Group of the R. R. Bowker Company, was used in conjunction with previous survey results to estimate R&D expenditures by state for companies that did not provide this information. The information on scientists and engineers published in the directory was used as a proxy indicator of the proportion of R&D expenditures within each state. R&D expenditures by state were estimated by applying the distribution of scientists and engineers by state from the directory to total R&D expenditures for these companies. These estimates were included with reported survey data to arrive at published estimates of R&D expenditures for each state. However, the practice of using outside information to formulate or adjust estimates of R&D expenditures for each state has been discontinued because a suitable source for supporting information is no longer available. State estimates resulting from the 1999 and 2000 surveys were based solely on respondent reports and information internal to the survey.
Beginning with the 2001 survey, because of the lack of a reliable, comprehensive outside source of information, in an effort to improve the quality of reported data, NSF sought and was granted authorization to require reporting of the distribution of R&D by state from the Office of Management and Budget (OMB), the federal agency that oversees and controls burden on respondents.
Also beginning in 2001, the sampling and estimation methodologies used to produce state estimates were modified from previous years to yield better accuracy and precision and to reduce erroneous fluctuations in year-to-year estimates due to small sample sizes of R&D performers by state. The new sampling methodology selects known R&D performers with a higher probability than nonperformers and selects with certainty the largest 50 companies in each state based on payroll, thus providing more coverage of R&D performers. The new estimation methodology for state estimates takes the form of a hybrid estimator combining the unweighted reported amount by state with a weighted amount apportioned across states with industrial activity. The hybrid estimator smooths the estimate over states with R&D activity by industry and accounts for real change within a state. The Horvitz-Thompson estimator continues to be used to estimate the number of R&D performers by state.
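The Horvitz-Thompson estimator for the count of R&D performers in a state is simply the sum of inverse selection probabilities over the sampled companies with R&D activity there:

```python
def horvitz_thompson_count(sample):
    """Horvitz-Thompson estimate of the number of R&D performers in a
    state: each sampled performer stands in for 1/p companies, where p
    is its selection probability. `sample` is a list of
    (is_performer, selection_probability) pairs (an illustrative
    layout, not the survey's actual data structure)."""
    return sum(1.0 / prob for is_performer, prob in sample if is_performer)
```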
This section summarizes major survey improvements, enhancements, and changes in procedures and practices that may have affected the comparability of statistics produced from the Survey of Industrial Research and Development over time and with other statistical series (see also NSF 2002a and U.S. Bureau of the Census 1995). This section focuses on major historical changes. More detailed historical information is available from individual annual reports (http://www.nsf.gov/statistics/industry/).
Beginning with the 1999 cycle of the survey, industry statistics are published using the North American Industry Classification System (NAICS). The ongoing development of NAICS has been a joint effort of statistical agencies in Canada, Mexico, and the United States. The system replaced the Standard Industrial Classification (1980) of Canada, the Mexican Classification of Activities and Products (1994), and the Standard Industrial Classification (SIC 1987) of the United States. (For a detailed comparison of NAICS to the Standard Industrial Classification (1987) of the United States, visit http://www.census.gov/epcd/www/naics.html.) NAICS was designed to provide a production-oriented system under which economic units with similar production processes are classified in the same industry. NAICS was developed with special attention to classifications for new and emerging industries, service industries, and industries that produce advanced technologies. NAICS not only eases comparability of information about the economies of the three North American countries, but it also increases comparability with the two-digit level of the United Nations' International Standard Industrial Classification (ISIC) system. Important for the Survey of Industrial Research and Development is the creation of several new classifications that cover major performers of R&D in the United States. Among manufacturers, the computer and electronic products classification (NAICS 334) includes makers of computers and peripherals, semiconductors, and navigational and electromedical instruments. Among nonmanufacturing industries are information (NAICS 51) and professional, scientific, and technical services (NAICS 54). Information includes publishing, both paper and electronic; broadcasting; and telecommunications. Professional, scientific, and technical services include a variety of industries. Of specific importance for the survey are engineering and scientific R&D service industries.
Effects of NAICS on Survey Statistics. The change of industry classification system affects most of the detailed statistical tables produced from the survey. Prior to the 1999 report, tables classified by industry contained the current survey's statistics plus statistics for 10 previous years. Because of the new classification system, the tables classified by industry in this report contain only statistics for the current year (2002) and previous years back to 1999. However, to provide a bridge for users who want to make year-to-year comparisons below the aggregate level, in several tables in Research and Development in Industry: 2000, statistics from the 1997 and 1998 cycles of the survey, which were previously classified and published using the SIC system, were reclassified using the new NAICS codes. These reclassified statistics were slotted using their new NAICS classifications alongside the 1999 and 2000 statistics, which were estimated using NAICS from the outset.
Beginning with the 1999 cycle of the survey, the
number of company size categories used to classify survey
statistics was increased. The original 6 categories
were expanded to 10 to emphasize the role of small companies
in R&D performance. The more detailed business
size information also facilitates better international
comparisons. Generally, statistics produced by foreign
countries that measure their industrial R&D enterprise
are reported with more detailed company size classifications at the lower end of the scale than U.S. industrial R&D statistics traditionally have been. (For more information, visit the Organisation for Economic Co-operation and Development (OECD) website at http://www.oecd.org.) The new classifications of the U.S. statistics enable more direct comparisons with other countries' statistics.
Revisions to historical statistics usually have been made because of changes in the industry classification of companies caused by changes in payroll composition detected when a new sample was drawn. Various methodologies have been adopted over the years to revise, or backcast, the data when revisions to historical statistics have become necessary. Documented revisions to the historical statistics from post-1967 surveys through 1992 are summarized by NSF (1994) and in annual reports for subsequent surveys. Detailed descriptions of the specific revisions made to the statistics from pre-1967 surveys are scarce, but U.S. Bureau of the Census (1995) summarizes some of the major revisions.
Changes to reported data can come from three sources: respondents, analysts involved in survey and statistical processing, and the industry reclassification process. Prior to 1995, routine revisions were made to prior-year statistics based on information from all three sources. Consequently, results from the current-year survey were used not only to develop current-year statistics but also to revise immediate prior-year statistics. Beginning with the 1995 survey, this practice was discontinued. The reasons for discontinuation of this practice were annual sampling; continual strengthening of sampling methodology; and improvements in data verification, processing, and nonresponse followup. Moreover, it was not clear that respondents or those who processed the survey results had any better information a year after the data were first reported. Thus, it was determined that routinely revising published survey statistics increased the potential for error and often confused users of the statistics. Revisions are now made to historical and immediate prior-year statistics only if substantive errors are discovered.
For 1999, an error in the sample frame caused one very large company (based on payroll) to be selected for the sample and its statistical record to be assigned a large weight (see Frame Creation and Weighting and Maximum Weights above). Because the company's record had received a large weight during 1999 sampling, the company was selected with certainty for the 2000 sample and assigned a weight of one (see Identifying Certainty Companies above). This sampling artifact caused an abnormally large decrease in the company's data, especially for sales and employment, when comparing the 2000 statistics with the statistics originally published for 1999. The weight in the company's record in the 1999 statistical file was corrected, and revised 1999 statistics are included in the tables in this report. R&D estimates for the company also were affected; however, the amount of R&D reported was relatively small, even after weighting.
As summarized above under Character of Work Estimates,
reporting anomalies for the character of work
survey items, especially for basic research, were discovered
and investigated using data collected during the 1999
and 2000 cycles of the survey. Companies known to develop
and manufacture products but that reported all of their R&D as basic research were contacted and queried
regarding their R&D activities. After reviewing the definitions
of basic research, applied research, and development, all but a few revised their distribution of R&D. Census, the collection and tabulation agent for the survey, was able to go back as far as 1998 and correct the statistical files. Consequently, the tables containing historical basic research, applied research, and development estimates have been revised and footnoted accordingly.
Comparability from year to year may be affected by new sample design, annual sample selection, and industry shifts.
By far the most profound influence on statistics from recent surveys occurred when the new sample design for the 1992 survey was introduced. Revisions to the 1991 statistics were dramatic (see Research and Development in Industry: 1992 for a detailed discussion). While the allocation of the sample was changed somewhat, the sample designs used for subsequent surveys were comparable to the 1992 sample design in terms of size and coverage.
With the introduction of annual sampling in 1992, year-to-year change has been greater than when survey panels were used, for two reasons. First, under the panel approach, changes in the classification of companies not surveyed were not reflected in the year-to-year movement; a wedging operation, performed when a new sample was selected, adjusted the data series to account for the changes in classification that occurred in the frame (see the discussion on wedging later under Time Series Analyses). Second, yearly correlation of R&D data is lost when independent samples are drawn each year.
The industry classification of companies is redefined each year with the creation of the sampling frame. By redefining the frame, the sample reflects current distributions of companies by size and industry. A company may move from one industry to another because of changes in its payroll composition, which is used to determine the industry classification code (see previous discussion under Frame Creation); changes in the industry classification system itself; or changes in the way the industry classification code was assigned or revised during survey processing.
A company's payroll composition can change because of the growth or decline of product or service lines, the merger of two or more companies, the acquisition of one company by another, divestitures, or the formation of conglomerates. Although unlikely, with the introduction of annual sampling a company could be reclassified into a different industry every year. When companies shift industry classifications, the result is a downward movement in R&D expenditures in one industry balanced by an upward movement in another industry from one year to the next.
From time to time, the industry coding system used by federal agencies that publish industry statistics is changed or revised to reflect the changing composition of U.S. and North American industry. The Standard Industrial Classification (SIC) system, as revised in 1987, was used for statistics developed from the 1988–91 panel surveys and the 1992–98 annual surveys. As discussed above, the industrial classification system has been completely changed, and beginning with the 1999 cycle of the survey, the North American Industry Classification System (NAICS) is now used.
The method used to classify firms during survey processing was revised slightly in 1992. Research has shown that the impact on individual industry estimates was minor. (The effects of changes in the way companies were classified during survey processing are discussed in detail in U.S. Bureau of the Census 1994a and 1994e.) The current method used to classify firms was discussed previously under Frame Creation. Methods used for past surveys are discussed in U.S. Bureau of the Census (1995). Large year-to-year changes may occur because of the way industry classifications are assigned during statistical processing. As discussed above, a company's industry classification is a function of its primary activity based on payroll, which is not necessarily the primary source of its R&D activity. If the majority of a company's payroll shifts to an activity other than an R&D-related activity, for example trade, all of its R&D similarly shifts to the new activity. Further, the design of the statistical sample sometimes contributes to large year-to-year changes in industry estimates. Because relatively few companies perform R&D and there is no national register of industrial R&D performers, a large statistical "net" must be cast to capture new R&D performers. When these companies are sampled for the first time, they are often given weights much higher than they would be given if their size and the amount of R&D they perform were known at the time of sampling. After the size of the company and the amount of R&D performed are discovered via the first survey, the weight assigned for subsequent surveys is adjusted.
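The weight adjustment described above can be sketched as follows. The function, threshold, and figures are hypothetical, not the survey's actual rules; the sketch only shows the idea that a substantial R&D performer discovered in one cycle is selected with certainty in the next.

```python
# Illustrative sketch of revising a newly discovered R&D performer's
# sampling weight between survey cycles. Threshold and weights are invented.

def next_cycle_weight(current_weight, reported_rd_millions, certainty_cutoff=5.0):
    """If the first survey reveals substantial R&D, select the company
    with certainty (weight 1) in the next cycle; otherwise keep its weight."""
    if reported_rd_millions >= certainty_cutoff:
        return 1.0
    return current_weight

# A small firm drawn from the wide statistical "net" with weight 40
# turns out to report $12 million of R&D:
print(next_cycle_weight(40.0, 12.0))  # 1.0 -> surveyed with certainty next cycle
```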
Before the 1992 survey, the sample of firms surveyed was selected at irregular intervals; until 1967, samples were selected every 5 years. Subsequent samples were selected for 1971, 1976, 1981, and 1987. In intervening years, a panel of the largest firms known to perform R&D was surveyed. For example, a sample of about 14,000 firms was selected for the 1987 survey. For the 1988–91 surveys, about 1,700 of these firms were resurveyed annually; the other firms did not receive survey forms, and their R&D data were estimated. This sample design was adequate during the survey's early years because R&D performance was concentrated in relatively few manufacturing industries. However, as more and more firms began entering the R&D arena, the old sample design proved increasingly deficient because it did not capture births of new R&D-performing firms. The entry of fledgling R&D performers into the marketplace was completely missed during panel years. Additionally, beginning in the early 1970s, the need for more detailed R&D information for nonmanufacturing industries was recognized. At that time, the broad industry classifications "miscellaneous business services" and "miscellaneous services" were added to the list of industry groups for which statistics were published. By 1975, about 3 percent of total R&D was performed by firms in nonmanufacturing industries. (See also NSF 1994, 1995, and 1996a.)
During the mid-1980s, there was evidence that a significant amount of R&D was being conducted by an increasing number of companies classified among the nonmanufacturing industries. Again, the number of industries used to develop the statistics for nonmanufacturers was increased. Consequently, the annual reports in this series for 1987–91 included separate R&D estimates for firms in the communication, utility, engineering, architectural, research, development, testing, computer programming, and data processing service industries; hospitals; and medical labs. Approximately 9 percent of the estimated industrial R&D performance during 1987 was undertaken by nonmanufacturing firms.
After the list of industries for which statistics were published was expanded, it became clear that the sample design itself should be changed to reflect the widening population of R&D performers among firms in the nonmanufacturing industries (NSF 1995a) and small firms in all industries so as to account better for births of R&D-performing firms and to produce more reliable statistics. Beginning with the 1992 survey, NSF decided (1) to draw new samples with broader coverage annually and (2) to increase the sample size to approximately 25,000 firms. As a result of the sample redesign, for 1992 the reported nonmanufacturing share was (and has continued to be) 25–30 percent of total R&D. (See also NSF 1997, 1998, 1999, 2000, 2001, and 2002b.)
The statistics resulting from this survey on R&D spending and personnel are often used as if they were prepared using the same collection, processing, and tabulation methods over time. Such uniformity has not been the case. Since the survey was first fielded, improvements have been made to increase the reliability of the statistics and to make the survey results more useful. To that end, past practices have been changed and new procedures instituted. Preservation of the comparability of the statistics has, however, been an important consideration in making these improvements. Nonetheless, changes to survey definitions, the industry classification system, and the procedure used to assign industry codes to multiestablishment companies have had some, though not substantial, effects on the comparability of statistics. (For discussions of each of these changes, see U.S. Bureau of the Census 1994g; for considerations of comparability, see U.S. Bureau of the Census 1993 and 1994e.)
The aspect of the survey that had the greatest effect
on comparability was the selection of samples at irregular
intervals and the use of a subset or panel of the last
sample drawn to develop statistics for intervening years.
As discussed earlier, this practice introduced cyclical
deterioration of the statistics. As compensation for this
deterioration, periodic revisions were made to the statistics
produced from the panels surveyed between sample
years. Early in the survey's history, various methods were
used to make these revisions (U.S. Bureau of the Census
1995). After 1976 and until the 1992 advent of annual
sampling, a linking procedure called wedging was used. In wedging, the 2 sample years on each end of a series of
estimates served as benchmarks in the algorithms used
to adjust the estimates for the intervening years. (The
process was dubbed wedging because of the wedgelike
area produced on a graph that compares originally reported
statistics with the revised statistics that resulted
after linking. For a full discussion of the mathematical algorithm used for the wedging process that linked statistics from the 1992 survey with those from the 1987 survey, see U.S. Bureau of the Census 1994g and NSF 1995a.)
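The wedging idea can be sketched as a simple linear adjustment. This toy version, with invented panel estimates, only illustrates spreading the benchmark-year discrepancy across the intervening years; the actual algorithm is documented in U.S. Bureau of the Census 1994g.

```python
# Minimal sketch of wedging: the two sample years bracketing a panel
# period serve as benchmarks, and each intervening year's estimate
# absorbs a linearly increasing share of the endpoint discrepancy.
# Figures are invented; this is not the survey's actual algorithm.

def wedge(panel_estimates, discrepancy):
    """panel_estimates: estimates for the intervening years, in order.
    discrepancy: new-sample benchmark minus the panel-based figure
    for the closing sample year."""
    n = len(panel_estimates) + 1
    return [y + discrepancy * i / n for i, y in enumerate(panel_estimates, start=1)]

# Panel years 1988-91 between the 1987 and 1992 samples; suppose the
# 1992 sample shows the panel series ended 10 units too low:
print(wedge([100, 105, 110, 115], 10.0))  # [102.0, 109.0, 116.0, 123.0]
```

Plotting the original series against the revised one produces the widening wedge-shaped gap that gave the procedure its name.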
NSF collects data on federally financed R&D from both federal funding agencies, using the Survey of Federal Funds for Research and Development, and from performers of the R&D—industry, federal labs, universities, and other nonprofit organizations—using the Survey of Industrial Research and Development and other surveys (http://www.nsf.gov/statistics/survey.cfm). As reported by federal agencies, NSF publishes data on federal R&D budget authority and outlays, in addition to federal obligations. These terms are defined below (NSF 2002b):
National R&D expenditure totals in NSF's National Patterns of R&D Resources report series are primarily constructed with data reported by performers and include estimates of federal R&D funding to these sectors. But until performer-reported survey data on federal R&D expenditures are available from industry and academia, data collected from the federal agency funders of R&D are used to project R&D performance. When survey data from the performers subsequently are tabulated, as they were for this report, these statistics replace the projections based on funder expectations. Historically, the two survey systems have tracked fairly closely. For example, in 1980, performers reported using $29.5 billion in federal R&D funding, and federal agencies reported total R&D funding between $29.2 billion in outlays and $29.8 billion in obligations (NSF 1996b). In recent years, however, the two series have diverged considerably. The difference in the federal R&D totals appears to be concentrated in funding of industry, primarily aircraft and missile firms, by the Department of Defense. Overall, industrial firms have reported significant declines in federal R&D support since 1990 (table A-1), while federal agencies have reported level or slightly increased funding of industrial R&D (NSF 2005). NSF continues to identify and examine the factors behind these divergent trends.
Appendix tables (available in Excel format):

Table A-1. Companies in the target population and selected for the sample, by industry and size of company: 2002
Table A-2. Relative standard error for survey estimates, by industry and size of company: 2002
Table A-3. Relative standard error for estimates of total R&D and percentage of estimates attributed to certainty companies, by state: 2002
Table A-4. Unit response rates, companies that responded to the survey, and percentage of companies that performed R&D, by industry and type of survey form: 2002
Table A-5. Imputation rates for survey items, by industry and size of company: 2002 (corrected July 2011)
Table A-6. R&D-performing companies that reported nonzero data for major survey items: 2002
Table A-7. Funds for and number of companies that performed industrial basic research, applied research, and development performed in the United States, and funds and percent of funds not distributed, by industry and size of company and by source of funds: 2002
Employment, FTE R&D scientists and engineers. Number of people employed in the 50 U.S. states and DC by R&D-performing companies who were engaged in scientific or engineering work at a level that required knowledge, gained either formally or by experience, of engineering or of the physical, biological, mathematical, statistical, or computer sciences equivalent to at least that acquired through completion of a 4-year college program with a major in one of those fields. The statistics show full-time-equivalent (FTE) employment of persons employed by the company during the January following the survey year who were assigned full time to R&D, plus a prorated number of employees who worked on R&D only part of the time.
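The FTE proration described in this entry amounts to simple arithmetic. A hedged sketch with invented staffing figures:

```python
# Sketch of the FTE computation described above: employees assigned
# full time to R&D count as 1 each, and part-time R&D staff are
# prorated by the share of their time spent on R&D. Figures are invented.

full_time_rd = 40                        # assigned full time to R&D
part_time_rd_shares = [0.5, 0.25, 0.75]  # fraction of time each spends on R&D

fte = full_time_rd + sum(part_time_rd_shares)
print(fte)  # 41.5 FTE R&D scientists and engineers
```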
Employment, total. Number of people employed in the 50 U.S. states and DC by R&D-performing companies in all activities during the pay period that included the 12th of March of the study year (March 12 is the date most employers use when paying first quarter employment taxes to the Internal Revenue Service).
Federally funded R&D centers (FFRDCs). R&D-performing organizations administered by industrial, academic, or other institutions on a nonprofit basis and exclusively or substantially financed by the federal government. To avoid the possibility of disclosing company-specific information and thereby violating the confidentiality provisions of Title 13 of the United States Code, beginning in 2001 data for industry-administered FFRDCs are collected through NSF's annual academic R&D expenditure survey, the Survey of Research and Development Expenditures at Universities and Colleges, as are data from FFRDCs administered by academic institutions and nonprofit organizations. More information about this survey is available from NSF's Division of Science Resources Statistics website at http://www.nsf.gov/statistics/srvyrdexpenditures/. For current lists of FFRDCs, visit http://www.nsf.gov/statistics/ffrdc/.
Funds for R&D, company and other nonfederal. The cost of R&D performed within the company and funded by the company itself or by other nonfederal sources in the 50 U.S. states and DC; does not include the cost of R&D funded by the company but contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or—to avoid double counting—other companies.
Funds for R&D, federal. The cost of R&D performed within the company in the 50 U.S. states and DC funded by federal R&D contracts, subcontracts, R&D portions of federal procurement contracts and subcontracts, grants, or other arrangements; does not include the cost of R&D supported by the federal government but contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or other companies.
Funds for R&D, total. The cost of R&D performed within the company in its own laboratories or in other company-owned or company-operated facilities in the 50 U.S. states and DC, including expenses for wages and salaries, fringe benefits for R&D personnel, materials and supplies, property and other taxes, maintenance and repairs, depreciation, and an appropriate share of overhead; does not include capital expenditures or the cost of R&D contracted to outside organizations such as research institutions, universities and colleges, nonprofit organizations, or—to avoid double-counting—other companies.
Funds per R&D scientist or engineer. All costs associated with the performance of industrial R&D (salaries, wages, and fringe benefits paid to R&D personnel; materials and supplies used for R&D; depreciation on capital equipment and facilities used for R&D; and any other R&D costs) divided by the number of R&D scientists and engineers employed in the 50 U.S. states and DC. To obtain a per person cost of R&D for a given year, the total R&D expenditures of that year were divided by an approximation of the number of full-time-equivalent (FTE) scientists and engineers engaged in the performance of R&D for that year. For accuracy, this approximation was the mean of the numbers of such FTE R&D-performing scientists and engineers as reported in January for the year in question and the subsequent year. For example, the mean of the numbers of FTE R&D scientists and engineers in January 2002 and January 2003 was divided into total 2002 R&D expenditures for a total cost per R&D scientist or engineer in 2002.
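The cost-per-scientist calculation in this entry can be made concrete with a small worked example. All figures below are invented, not survey data.

```python
# Worked example of the calculation described above: total 2002 R&D
# divided by the mean of the FTE counts reported for January 2002 and
# January 2003. Figures are hypothetical.

total_rd_2002 = 180_000_000   # dollars of R&D performed in 2002
fte_jan_2002 = 1_000          # FTE R&D scientists and engineers, Jan 2002
fte_jan_2003 = 1_200          # FTE R&D scientists and engineers, Jan 2003

mean_fte = (fte_jan_2002 + fte_jan_2003) / 2   # 1,100
cost_per_scientist = total_rd_2002 / mean_fte
print(round(cost_per_scientist))  # about 163,636 dollars per FTE
```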
Net sales and receipts. Dollar values for goods sold or services rendered by R&D-performing companies to customers outside the company, including the federal government, less such items as returns, allowances, freight charges, and excise taxes. Domestic intracompany transfers and sales by foreign subsidiaries were excluded, but transfers to foreign subsidiaries and export sales to foreign companies were included.
R&D and industrial R&D. R&D is the planned, systematic pursuit of new knowledge or understanding toward general application (basic research); the acquisition of knowledge or understanding to meet a specific, recognized need (applied research); or the application of knowledge or understanding toward the production or improvement of a product, service, process, or method (development). Basic research analyzes properties, structures, and relationships toward formulating and testing hypotheses, theories, or laws; applied research is undertaken either to determine possible uses for the findings of basic research or to determine new ways of achieving specific, predetermined objectives; and development draws on research findings or other scientific knowledge for the purpose of producing new or significantly improved products, services, processes, or methods. As used in this survey, industrial basic research is the pursuit of new scientific knowledge or understanding that does not have specific immediate commercial objectives, although it may be in fields of present or potential commercial interest; industrial applied research is investigation that may use findings of basic research toward discovering new scientific knowledge that has specific commercial objectives with respect to new products, services, processes, or methods; and industrial development is the systematic use of the knowledge or understanding gained from research or practical experience directed toward the production or significant improvement of useful products, services, processes, or methods, including the design and development of prototypes, materials, devices, and systems. The survey covers industrial R&D performed by people trained, either formally or by experience, in engineering or in the physical, biological, mathematical, statistical, or computer sciences and employed by a publicly or privately owned firm engaged in for-profit activity in the United States.
Specifically excluded from the survey are quality control, routine product testing, market research, sales promotion, sales service, and other nontechnological activities; routine technical services; and research in the social sciences or psychology.
National Science Foundation (NSF). 1956. Science and Engineering in American Industry: Final Report on a 1953−54 Survey. NSF 56-16. Washington, DC: U.S. Government Printing Office.
National Science Foundation (NSF). 1960. Science and Engineering in American Industry: 1956. NSF 59-50. Washington, DC: U.S. Government Printing Office.
National Science Foundation (NSF). 1994. "1992 R&D Spending by U.S. Firms Rises, NSF Survey Improved." SRS Data Brief. NSF 94-325. Arlington, VA.
National Science Foundation (NSF). 1995. "1993 Spending Falls for U.S. Industrial R&D, Nonmanufacturing Share Increases." SRS Data Brief. NSF 95-325. Arlington, VA.
National Science Foundation (NSF). 1995a. Research and Development in Industry: 1992. NSF 95-324. Arlington, VA.
National Science Foundation (NSF). 1996a. "1994 Company Funding of U.S. Industrial R&D Rises as Federal Support Continues to Decline." SRS Data Brief. NSF 96-310. Arlington, VA.
National Science Foundation (NSF). 1996b. National Patterns of R&D Resources: 1996. NSF 96-333. Arlington, VA.
National Science Foundation (NSF). 1997. "1995 U.S. Industrial R&D Rises, NSF Survey Statistics Expanded to Emphasize Role of Nonmanufacturing Industries." SRS Data Brief. NSF 97-332. Arlington, VA.
National Science Foundation (NSF). 1998. "1996 U.S. Industrial R&D: Firms Continue to Increase Their Investment." SRS Data Brief. NSF 98-317. Arlington, VA.
National Science Foundation (NSF). 1999. "1997 U.S. Industrial R&D Performers." SRS Topical Report. NSF 99-355. Arlington, VA.
National Science Foundation (NSF). 2000. "1998 U.S. Industrial R&D Performers Report Increase R&D." SRS Data Brief. NSF 00-320. Arlington, VA.
National Science Foundation (NSF). 2001a. "U.S. Industrial R&D Performers Report Increased R&D in 1999; New Industry Coding and Size Classifications for NSF Survey." SRS Data Brief. NSF 01-326. Arlington, VA.
National Science Foundation (NSF). 2001b. "U.S. Industrial R&D: NSF Announces New Information Retrieval System and Historical Database." SRS Data Brief. NSF 01-338. Arlington, VA.
National Science Foundation (NSF). 2002a. "U.S. Industrial R&D Expenditures and R&D-to-Sales Ratio Reach Historical Highs in 2000." SRS InfoBrief. NSF 03-306. Arlington, VA.
National Science Foundation (NSF). 2002b. Federal Research and Development Funding by Budget Function: Fiscal Years 2000–2002. NSF 02-301. Arlington, VA.
National Science Foundation (NSF). 2003. "U.S. Industry Sustains R&D Expenditures Despite Decline in Performers' Sales and Employment During 2001." SRS InfoBrief. NSF 04-301. Arlington, VA.
National Science Foundation (NSF). 2004. "Largest Single-Year Decline in U.S. Industrial R&D Expenditures Reported for 2002." SRS InfoBrief. NSF 04-320. Arlington, VA.
National Science Foundation (NSF). 2005. National Patterns of R&D Resources: 2003. NSF 05-308. Arlington, VA.
U.S. Bureau of the Census. 1993. "Effects of the 1987 SIC Revision on Company Classification in the Survey of Industrial Research and Development (R&D)." Technical Memorandum. December 6. Washington, DC.
U.S. Bureau of the Census. 1994a. "Comparison of Company Coding Between 1992 and 1993 for the Survey of Industrial Research and Development." Technical Memorandum. November 3. Washington, DC.
U.S. Bureau of the Census. 1994b. Documentation of Nonsampling Issues in the Survey of Industrial Research and Development. RR94/03. Washington, DC.
U.S. Bureau of the Census. 1994c. An Evaluation of Imputation Methods for the Survey of Industrial Research and Development. ESMD-9404. Washington, DC.
U.S. Bureau of the Census. 1994d. "Evaluation of Total Employment Cut-Offs in the Survey of Industrial Research and Development." Technical Memorandum. November 3. Washington, DC.
U.S. Bureau of the Census. 1994e. "Reclassification of Companies in the 1992 Survey of Industrial Research and Development (R&D) for the Generation of the 'Analytical' Series." Technical Memorandum. October 25. Washington, DC.
U.S. Bureau of the Census. 1994f. A Study of Processing Errors in the Survey of Industrial Research and Development. ESMD-9403. Washington, DC.
U.S. Bureau of the Census. 1994g. "Wedging Considerations for the 1992 Research and Development (R&D) Survey." Technical Memorandum. June 10. Washington, DC.
U.S. Bureau of the Census. 1995. Documentation of the Survey Design for the Survey of Industrial Research and Development: A Historical Perspective. Washington, DC.
 In the Survey of Industrial Research and Development and in the publications presenting statistics resulting from the survey, the terms company, firm, and enterprise are used interchangeably. Industry refers to the 2-, 3-, or 4-digit North American Industry Classification System (NAICS) codes or group of NAICS codes used to publish statistics resulting from the survey.
 The 1999 survey was the first year that companies were classified using NAICS. Prior to 1999, the Standard Industrial Classification (SIC) system was used. The two systems are discussed later under Comparability of Statistics.
 Form RD-1 is a revised version of Form RD-1L, formerly used to collect data from large R&D performers in odd-numbered years. In even-numbered years, an abbreviated questionnaire, Form RD-1S, was used. Beginning in 1998, Form RD-1L was streamlined, renamed Form RD-1, and the odd/even-year cycle was abandoned.
 Annual sampling also remedies the cyclical deterioration of the statistics that results from changes in a company's payroll composition because of product line and corporate structural changes.