Research and Development in Industry: 2004
Appendix A. Technical Notes and Technical Tables
Much of the information for this appendix was provided by the Manufacturing and Construction Division of the U.S. Bureau of the Census, which collected and compiled the survey data. Copies of the technical papers cited can be obtained from NSF's Research and Development Statistics Program in the Division of Science Resources Statistics. The first part of this appendix focuses on recent changes to the survey methodology; major historical changes are discussed later in Comparability of Statistics. More detailed historical information is available from individual annual reports (http://www.nsf.gov/statistics/industry/).
Frame Creation and Industry Classification

The reporting unit for the Survey of Industrial Research and Development is initially the company, defined as a business organization of one or more establishments under common ownership or control. Some companies, at their own request, comprise multiple reporting units. These reporting units are compiled into a single company record at the time of tabulation.
The Business Register (BR), a Bureau of the Census database containing industry, geographic, employment, and payroll information, was the foundation from which the frame used to select the 2005 survey sample was created (see table A-1 for population and sample sizes). For companies with more than one establishment, data were summed to the company level, and the resulting company record was used to select the sample and to process and tabulate the survey data.
After data were summed to the company level, each company then was assigned a single North American Industry Classification System (NAICS) code based on payroll. The method used followed the hierarchical structure of the NAICS. The company was first assigned to the economic sector, defined by a 2-digit NAICS code, or combination thereof, representing manufacturing, mining, trade, etc., that accounted for the highest percentage of its aggregated payroll. Then the company was assigned to a subsector, defined by a 3-digit NAICS code, that accounted for the highest percentage of its payroll within the economic sector. Then the company was assigned a 4-digit NAICS code within the subsector, again based on the highest percentage of its aggregated payroll within the subsector. Finally, the company was assigned a 6-digit NAICS code within the 4-digit NAICS, based on the highest percentage of its aggregated payroll within the 4-digit NAICS. Assignment below the 6-digit level was not done because the 6-digit level was the lowest level needed to guarantee publication-level industry classification. Some originally assigned industry codes were revised later during statistical processing (see “Industry Reclassification,” below).
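The hierarchical payroll-based assignment can be sketched as follows. This is a simplified illustration, not the Census Bureau's production routine: the establishment-record layout is hypothetical, and sector combinations such as manufacturing's 31-33 range are ignored for brevity.

```python
from collections import defaultdict

def assign_naics(establishments):
    """Assign a company a single NAICS code hierarchically by payroll.

    `establishments` is a list of (naics6, payroll) pairs -- a
    hypothetical layout, not the Census Bureau's actual record format.
    At each level (2-, 3-, 4-, then 6-digit) the company is assigned the
    code accounting for the largest share of its aggregated payroll
    within the code chosen at the previous level.
    """
    code = ""
    for digits in (2, 3, 4, 6):
        totals = defaultdict(float)
        for naics6, payroll in establishments:
            prefix = naics6[:digits]
            if prefix.startswith(code):  # stay inside the branch chosen so far
                totals[prefix] += payroll
        code = max(totals, key=totals.get)
    return code
```

For example, a company whose payroll is concentrated in scientific R&D services establishments would be routed to sector 54, then subsector 541, then industry group 5417, and finally a 6-digit industry within 5417, even if it also has wholesale trade establishments.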
For the 2005 survey, the frame was partitioned into four groups: (1) the top 500 R&D-performing companies still in the frame from the 2004 survey year and any company classified in NAICS 42 or 55 with R&D greater than $3 million, (2) other companies known to have conducted R&D in any of the previous five survey years, (3) companies that reported zero R&D in all of the previous five survey years, and (4) companies for which information about the extent of R&D activity was uncertain. There were 868 companies in the first group, 12,447 in the second, 85,304 in the third, and 1,962,184 in the fourth, for a total of 2,060,803 companies.
Defining Sampling Strata
For the first, second, and fourth partitioned groups, the sampling strata were defined corresponding to the 4-digit industries and groups of industries for which statistics were developed and published. There were 28 manufacturing and 23 nonmanufacturing strata in each of these partitioned groups. (Note: one manufacturing stratum and one nonmanufacturing stratum were "unclassified.") Also, the third partitioned group was divided into two strata, one certainty and the other noncertainty.
Ad hoc certainty companies were companies selected with certainty independent of relative standard error (RSE) constraints. The criteria defining an ad hoc certainty company differed by partitioned group. Companies in the first two partitioned groups that had previously reported or had imputed R&D of $3 million or more were ad hoc certainties. Companies in the third partition that had any establishments in NAICS 5417 were ad hoc certainties. Companies in the fourth partition that were in the top 50 of their stratum by payroll or in the top 50 of their state by payroll were ad hoc certainties.
Probability Proportionate to Size
The distribution of companies by R&D in the first and second partitioned groups, and by payroll in the fourth partitioned group, was skewed, as in earlier frames. Because of this skewness, a fixed-sample-size, probability-proportionate-to-size (pps) method remained the appropriate selection technique for these partitioned groups: under the pps method, large companies had higher probabilities of selection than small companies. The fixed-sample-size methodology has been used for every survey year since the 1998 survey.
Companies in the first partitioned group received a probability of one. Companies in the second partitioned group received a measure of size equal to the most recently reported positive R&D expenditures. Companies in the third and fourth partitioned groups received a measure of size equal to their company payroll. RSE constraints by industry and by state were imposed separately in the second and fourth partitioned groups, and the company received a probability of selection for each industry in which it had activity.
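The assignment of selection probabilities for a fixed sample size can be sketched as follows. This is a simplified illustration of the pps convention only; the production design also imposes the RSE constraints by industry and by state and the per-industry probabilities described above.

```python
def pps_probabilities(sizes, n):
    """Probability-proportionate-to-size selection probabilities for a
    fixed sample size n (a simplified sketch).

    `sizes` maps a company identifier to its measure of size (prior R&D
    or payroll). Units whose scaled size reaches 1 become certainties,
    and the remaining sample slots are re-solved among the rest, so the
    probabilities sum to n.
    """
    probs = {k: 0.0 for k in sizes}
    remaining = dict(sizes)
    slots = n
    while True:
        total = sum(remaining.values())
        certainties = [k for k, s in remaining.items() if slots * s / total >= 1.0]
        if not certainties:
            break
        for k in certainties:
            probs[k] = 1.0       # selected with probability one
            del remaining[k]
            slots -= 1           # one fewer slot for noncertainty units
    for k, s in remaining.items():
        probs[k] = slots * s / total
    return probs
```

A company with ten times the measure of size of another thus receives ten times the selection probability, unless the rule pushes it into the certainty stratum.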
Simple Random Sampling
The third partitioned group was split into two strata, certainty and noncertainty. The noncertainty stratum was sampled using simple random sampling. Companies in the noncertainty stratum received a probability of selection of roughly 0.01.
Sample Stratification and Relative Standard Error Constraints
The particular sample selected was one of a large number of samples of the same type and size that by chance might have been selected. Statistics resulting from the different samples would differ somewhat from each other. These differences are represented by estimates of sampling error or variance. The smaller the sampling error is, the less variable the statistic. The accuracy of the estimate, that is, how close it is to the true value, is also a function of nonsampling error.
Controlling Sampling Error. Historically, it has been difficult to achieve control over the sampling error of survey estimates. Efforts were confined to controlling the amount of error due to sample size variation, but this was only one component of the overall sampling error. The other component depended on the correlation between the data from the sampling frame used to assign probabilities (namely R&D values either imputed or reported in the previous survey) and the actual current year reported data. The nature of R&D is such that these correlations could not be predicted with any reliability. Consequently, precise controls on overall sampling error were difficult to achieve (U.S. Bureau of the Census 1994d).
Sampling Strata and Standard Error Estimates. The constraints used to control the sample size in each stratum were based on a universe total that, in large part, was imputed. That is, as previously noted, a prior R&D value for the first two partitioned groups and payroll for the fourth partitioned group were assigned to companies in their respective groups. Assignment of sampling probability was nevertheless based on this distribution. The presumption was that actual variation in the sample design would be less than that estimated, because many of the sampled companies in the fourth partitioned group have true R&D values of zero, not the widely varying values that were imputed using total payroll as a predictor of R&D. Previous sample selections indicate that, in general, this presumption held, but exceptions have occurred when companies with large sampling weights reported large amounts of R&D spending. See table A-2 for standard error estimates for selected items by industry and table A-3 for standard error estimates of total R&D by state.
Nonsampling Error. In addition to sampling error, estimates are subject to nonsampling error. Errors are grouped in five categories: specification, coverage, response, nonresponse, and processing. For detailed discussions on the sources, control, and measurement of each of these types of error, see U.S. Bureau of the Census 1994b and 1994f.
The parameters set to control sampling error resulted in sample sizes of 868 companies from the first frame partition, 9,290 companies from the second frame partition, 1,083 companies from the third frame partition, and 20,759 companies from the fourth frame partition. The overall final sample consisted of 32,000 companies. This total included an adjustment to the sample size based on a minimum probability rule and changes in the operational status of some companies.
Minimum Probability Rule. A minimum probability rule was imposed for both the second and fourth partitions. As noted earlier, probabilities of selection proportionate to size were assigned to each company in these partitions, where size was the prior reported R&D or payroll value assigned to each company. Selected companies received a sample weight that was the inverse of their probability. Selected companies that ultimately report R&D expenditures vastly larger than their assigned values can have adverse effects on the statistics, which were based on the weighted value of survey responses. In order to minimize these effects on the final statistics, a minimum probability rule was imposed to control the maximum weight of a company. If the probability based on company size was less than the minimum probability, then it was reset to this minimum value. The consequence of raising these original probabilities to the specific minimum probability was to raise the final sample size.
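The rule amounts to a cap on the sample weight, since the weight is the inverse of the selection probability. A minimal sketch follows; the minimum probability value used here is hypothetical, chosen only to illustrate the mechanism.

```python
def apply_minimum_probability(prob, p_min):
    """Minimum probability rule: raise any selection probability below
    p_min to p_min, which caps the sample weight at 1/p_min.
    `p_min` is a hypothetical parameter for illustration."""
    return max(prob, p_min)

def sample_weight(prob):
    # The sample weight is the inverse of the selection probability.
    return 1.0 / prob
```

For example, with a minimum probability of 0.01, a company originally assigned a probability of 0.001 (weight 1,000) is reset to 0.01, so its responses are weighted by at most 100.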
Changes in Operational Status. Between the time that the frame was created and the survey was prepared for mailing, the operational status of some companies changed. That is, they were merged with or acquired by another company, or they were no longer in business. Before preparing the survey for mailing, the operational status was updated to identify these changes. As a result, the number of companies mailed a survey questionnaire was somewhat smaller than the number of companies initially selected for the survey.
Sample weights were applied to each company record to produce national estimates. Within the second partition of the sample, consisting of known R&D performers (positive R&D expenditures), the maximum sample weight was roughly 20. For the fourth partition, consisting of companies with uncertain R&D activity, the maximum sample weight was roughly 100 for companies classified in manufacturing and 250 for those classified in nonmanufacturing.
Two questionnaires are used each year to collect data for the survey. Known large R&D performers are sent a detailed survey form, Form RD-1. The Form RD-1 collects data on sales or receipts; total employment; employment of scientists and engineers; expenditures for R&D performed within the company with federal funds and with company and other funds; character of work (basic research, applied research, and development); company-sponsored R&D expenditures in foreign countries; R&D performed by others; R&D performed in collaboration with others; federally funded R&D by contracting agency; R&D costs by type of expense; R&D costs by technology area; domestic R&D expenditures by state; energy-related R&D; and foreign R&D by country. Because companies receiving the Form RD-1 have participated in previous surveys, computer-imprinted data reported by the company for the previous year are supplied for reference. Companies are encouraged to revise or update prior-year data if they have more current information; however, already published prior-year statistics are revised in this report series only if large disparities are reported.
Small R&D performers and firms included in the sample for the first time were sent Form RD-1A. This questionnaire collects the same information as Form RD-1 except for five items: Federal R&D support to the firm by contracting agency, R&D costs by type of expense, domestic R&D expenditures by state, energy-related R&D, and foreign R&D by country. It also includes a screening item that allows respondents to indicate that they did not perform R&D during the study year. No prior-year information was made available because the majority of the companies that received the Form RD-1A were not surveyed in the previous year.
Recent Survey Questionnaire Content Changes
For 2005, the five mandatory items remained total R&D expenditures (question 5d, column 3), federally funded R&D (question 5d, column 1), net sales (question 2), total employment (question 3), and the distribution of R&D by state (question 17). The question that asked for the cost of the R&D performed in collaboration with other organizations was moved from the end of the questionnaire (question 17 for 2004) toward the beginning of the questionnaire (question 9 for 2005), following the question that asked for the cost of the R&D performed outside the company by other organizations. This change was made based on cognitive interviews that concluded that respondents are confused by the two concepts. Research continues to make the distinction clear and to improve response to both of these questions. A question asking whether the company's R&D involved nanotechnology (question 15) was added.
Number of Survey Questionnaires Sent
Form RD-1 (for companies that reported R&D expenditures of $3 million or more in the 2004 survey) was mailed to 3,356 companies and 28,491 companies received Form RD-1A. Both survey questionnaires and the instructions provided to respondents are reproduced in appendix B, Survey Documents.
Follow-Up for Survey Nonresponse
The 2005 survey questionnaires were mailed in February 2006. Recipients of Form RD-1A were asked to respond within 30 days, and Form RD-1 recipients were given 60 days. A follow-up questionnaire and letter were mailed to Form RD-1A recipients every 30 days up to a total of five times, if their completed survey form had not been received. After questionnaire and letter follow-ups, three additional automated telephone follow-ups were conducted for the remaining delinquent Form RD-1A recipients.
A letter was mailed to Form RD-1 recipients 30 days after the initial mailing, reminding them that their completed survey questionnaires were due within the next 30 days. A second questionnaire and reminder letter were mailed after 60 days. Two additional follow-ups (one mail, one telephone) were conducted for delinquent Form RD-1 recipients not ranked among the 500 largest R&D performers based on total R&D expenditures reported in the previous year. For the top 500 performers, multiple special telephone follow-ups were used to encourage response. Table A-4 shows the number of companies in each industry or industry group that received survey questionnaires, by type of form, and the percentage that responded to the survey.
For various reasons, many firms chose to return the survey questionnaire with one or more blank items. For some firms, internal accounting systems and procedures may not have allowed quantification of specific expenditures. Others may have refused to answer any voluntary questions as a matter of company policy.
When respondents did not provide the requested information, estimates for the missing data were made using imputation algorithms. In general, the imputation algorithms computed values for missing items by applying the average percentage change for the target item in the nonresponding firm's industry to the item's prior-year value for that firm, reported or imputed. This approach, with minor variation, was used for most items. Table A-5 contains imputation rates for the principal survey items.
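The general algorithm can be sketched as follows. This is a simplified illustration; the item-specific variations, edit checks, and weighting used in production are omitted.

```python
def impute_item(prior_value, industry_pairs):
    """Impute a missing item for a nonresponding firm (simplified sketch).

    `industry_pairs` is a hypothetical list of (prior_year, current_year)
    values for the same item reported by responding firms in the
    nonresponding firm's industry. The firm's prior-year value (reported
    or itself imputed) is scaled by the industry's average year-to-year
    change for the item.
    """
    changes = [cur / prev for prev, cur in industry_pairs if prev]
    avg_change = sum(changes) / len(changes)
    return prior_value * avg_change
```

So if responding firms in an industry reported, on average, a 10% increase in an item, a nonrespondent's missing value would be imputed as 110% of its prior-year value.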
Response Rates and Mandatory/Voluntary Reporting
Survey reporting requirements divided survey items into two groups: mandatory and voluntary. Responses to five data items were mandatory; responses to the remaining items were voluntary. The mandatory items were total R&D expenditures, federal R&D funds, net sales, total employment and the distribution of R&D by state (the first four were included in the Census Bureau's annual mandatory statistical program). Table A-6 shows the percentage of R&D-performing companies that also reported data for other selected items.
During the 1990 survey cycle, NSF conducted a test of the effect of reporting on a completely voluntary basis to determine whether combining both mandatory and voluntary items on one survey questionnaire influences response rates. For this test, the 1990 sample was divided into two panels of approximately equal size. One panel, the mandatory panel, was asked to report as usual on four mandatory items with the remainder voluntary, and the other panel was asked to report all items on a completely voluntary basis. The result of the test was a decrease in the overall survey response rate to 80% from levels of 88% in 1989 and 89% in 1988. The response rates for the mandatory and voluntary panels were 89% and 69%, respectively. Detailed results of the test were published in Research and Development in Industry: 1990 (NSF 1993).
Industry Reclassification

Beginning with the 2004 survey and continuing for 2005, some companies' electronically assigned industry codes were manually examined and changed. Beginning in the late 1990s, increasingly large amounts of R&D were attributed to the wholesale trade industries, resulting from the payroll-based methodology used to assign industry classifications (see "Frame Creation and Industry Classification," above) and the change from the Standard Industrial Classification (SIC) system to the North American Industry Classification System (NAICS) in 1999. Such classification artifacts were of particular concern for companies traditionally thought of as pharmaceutical or computer-manufacturing firms. As these firms increasingly marketed their own products and more of their payroll involved employees in selling and distribution activities, the potential for the companies to be classified among the wholesale trade industries increased. To maintain the relevance and usefulness of the industrial R&D statistics, NSF evaluated ways to ameliorate the negative effects of the industry classification methodology and the change in classification systems. In addition to firms originally assigned NAICS codes among the wholesale trade (NAICS 42) industries, firms assigned by the payroll-based methodology to the information (NAICS 51); professional, scientific, and technical services (NAICS 54); and management of companies and enterprises (NAICS 55) industries were manually reviewed by NSF and the Census Bureau. These firms were reclassified based on primary R&D activity, which in most cases corresponded to their primary products or service activities. As a result, most of the R&D previously attributed to the NAICS 42 and 55 industries was redistributed. Statistics resulting from the old and new industry classification methods were published in tables A-9 and A-10 in Research and Development in Industry: 2004 (NSF 2009). For detailed information about the industry reclassification methodology, see NSF 2007d.
Response to questions about character of work (basic research, applied research, and development) declined in the mid-1980s, and as a result, imputation rates increased. The general imputation procedure described above became increasingly dependent on information imputed in prior years, thereby distancing current-year estimates from any reported information. Because of this increasing dependence on imputed data, NSF chose not to publish character-of-work estimates in 1986. The imputation procedure used to develop these estimates was revised in 1987 for use with later data and differs from the general imputation approach. The new method calculated the character-of-work distribution for a nonresponding firm only if that firm reported a distribution within a 5-year window extending from 2 years before to 2 years after the year requiring imputation. Imputation for a given year was initially performed in the year the data were collected, based on a character-of-work distribution reported in either of the 2 previous years, if any, and was performed again using new data collected in the next 2 years. If newly reported data were not preceded by any imputed or reported data, estimates for the previous periods were inserted based on the currently reported information. Conversely, if 2 years of imputed data were not followed by reported data, those 2 years of imputed data were removed. Thus, character-of-work estimates were revised as newly reported information became available and were not final until 2 years after their initial publication.
Beginning with 1995, previously estimated values were not removed for firms that did not report in the third year, nor were estimates made for the 2 previous years for firms reporting after 2 years of nonresponse. This process was changed because in the prior period revisions were minimal. Estimates continued to be made for 2 consecutive years of nonresponse and discontinued if the firm did not report character of work in the third year. If no reported data were available for a firm, character of work estimates were not imputed. As a consequence, only a portion of the total estimated R&D expenditures were distributed for character of work at the firm level. Those expenditures not meeting the requirements of the new imputation methodology were placed in a “not distributed” category.
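The post-1995 carry-forward rule can be expressed as a simple predicate. This is a sketch under an assumed representation: `reported_years` is a hypothetical set of the years for which a firm reported a character-of-work distribution.

```python
def cow_imputable(reported_years, year):
    """Post-1995 rule (sketch): a character-of-work distribution is
    imputed for a nonresponse year only if the firm reported one in at
    least one of the 2 preceding years -- i.e., estimates are carried
    forward for at most 2 consecutive years of nonresponse and
    discontinued in the third year.
    """
    return (year - 1) in reported_years or (year - 2) in reported_years
```

A firm that last reported a distribution in 2001 would thus have estimates made for 2002 and 2003 but not for 2004; its expenditures would instead fall into the "not distributed" category.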
NSF's objective in conducting the survey has always been to provide estimates for the entire population of firms performing R&D in the United States. However, the revised imputation procedure no longer produced such estimates because of the not-distributed component. A baseline estimation method was therefore developed to allocate the not-distributed amounts among the character-of-work components. In the baseline estimation method, the not-distributed expenditures were allocated by industry group to the basic research, applied research, and development categories using the percentage splits in the distributed category for that industry. The allocation was done at the lowest level of published industry detail only; higher levels were derived by aggregation, just as national totals were derived by aggregation of individual industry estimates. This method resulted in higher shares for basic and applied research, and a lower share for development, than would have been calculated using the previous method.
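The baseline allocation for a single industry group can be sketched as follows (a simplified illustration; the component names are the survey's three character-of-work categories).

```python
def allocate_not_distributed(distributed, not_distributed):
    """Allocate an industry's not-distributed R&D to basic research,
    applied research, and development in proportion to that industry's
    distributed character-of-work shares (simplified sketch).

    `distributed` maps component -> distributed dollars for one industry;
    returns component totals including the allocated amounts.
    """
    total = sum(distributed.values())
    return {
        comp: amount + not_distributed * (amount / total)
        for comp, amount in distributed.items()
    }
```

For example, if an industry's distributed split is 10% basic, 30% applied, and 60% development, a $50 million not-distributed amount is divided $5 million, $15 million, and $30 million among the three components.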
Using data collected during the 1999 and 2000 cycles of the survey, reporting anomalies for the character-of-work survey items, especially for basic research, were investigated. A number of large companies known to develop and manufacture products were found to have reported all of their R&D as basic research. Because such reporting is implausible, it prompted a renewed effort to strengthen the character-of-work estimates produced from the survey. Identification of the anomalous reporting patterns was completed, and edit checks were improved for processing of the 2001 and 2002 data. Publication of character-of-work distributions of R&D was resumed, and the tables containing historical basic research, applied research, and development estimates have been revised and footnoted accordingly. Edit checks for this phenomenon continue to be performed annually, and extensive follow-up with respondents is undertaken.
Form RD-1 requests a distribution of the total cost of R&D among the states where R&D was performed. Prior to the 1999 survey, an independent source, the Directory of American Research and Technology, published by the Data Base Publishing Group of the R. R. Bowker Company, was used in conjunction with previous survey results to estimate R&D expenditures by state for companies that did not provide this information. The information on scientists and engineers published in the directory was used as a proxy indicator of the proportion of R&D expenditures within each state: R&D expenditures by state were estimated by applying the directory's distribution of scientists and engineers by state to total R&D expenditures for these companies. These estimates were combined with reported survey data to arrive at published estimates of R&D expenditures for each state. However, the practice of using outside information to formulate or adjust state estimates had to be discontinued because a suitable source of supporting information was no longer available. State estimates resulting from the 1999 and 2000 surveys were based solely on respondent reports and information internal to the survey. Beginning with the 2001 survey, lacking a reliable, comprehensive outside source of information and seeking to improve the quality of reported data, NSF sought and was granted authorization from the Office of Management and Budget, the federal agency that oversees and controls respondent burden, to make reporting of the distribution of R&D by state mandatory.
Also beginning in 2001, the sampling and estimation methodologies used to produce state estimates were modified to yield better accuracy and precision and to reduce erroneous fluctuations in year-to-year estimates due to small sample sizes of R&D performers by state. The new sampling methodology selects known R&D performers with a higher probability than nonperformers and selects with certainty the 50 largest companies in each state based on payroll, thus providing more coverage of R&D performers. The new estimation methodology for state estimates is a hybrid estimator that combines the unweighted reported amount by state with a weighted amount apportioned (or raked) across states with industrial activity. The hybrid estimator smooths the estimate over states with R&D activity by industry and accounts for real change within a state. The Horvitz-Thompson estimator continues to be used to estimate the number of R&D performers by state.
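The Horvitz-Thompson count estimator mentioned above has a simple form: the number of performers in a state is estimated as the sum of inverse selection probabilities over sampled performers. A minimal sketch follows; the input layout is hypothetical.

```python
def horvitz_thompson_count(sample):
    """Horvitz-Thompson estimate of a population count (sketch).

    `sample` is a hypothetical list of (is_performer, selection_prob)
    pairs for sampled companies in one state; each sampled performer
    contributes its sample weight, 1/p, to the estimated count.
    """
    return sum(1.0 / p for is_performer, p in sample if is_performer)
```

A performer selected with probability 0.25 thus stands in for an estimated four performers in the population.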
Comparability of Statistics
This section summarizes major survey improvements, enhancements, and changes in procedures and practices that may have affected the comparability of statistics produced from the Survey of Industrial Research and Development over time and with other statistical series (see also NSF 2002a and U.S. Bureau of the Census 1995). This section focuses on major historical changes. Detailed descriptions of the specific revisions made to the pre-1967 surveys are scarce, but the U.S. Bureau of the Census (1995) summarizes some of the major revisions. Documented revisions to the post-1967 surveys through 1992 are summarized in NSF 1994a and 1994b. For subsequent surveys, detailed information is available from individual annual reports at http://www.nsf.gov/statistics/industry/.
Industry Classification System
Beginning with the 1999 cycle of the survey, industry statistics are published using the North American Industry Classification System (NAICS). The ongoing development of NAICS has been a joint effort of statistical agencies in Canada, Mexico, and the United States. The system replaced the Standard Industrial Classification (1980) of Canada, the Mexican Classification of Activities and Products (1994), and Standard Industrial Classification (SIC 1987) of the United States. NAICS was designed to provide a production-oriented system under which economic units with similar production processes are classified in the same industry. NAICS was developed with special attention to classifications for new and emerging industries, service industries, and industries that produce advanced technologies. NAICS not only eases comparability of information about the economies of the three North American countries, but it also increases comparability with the two-digit level of the United Nations' International Standard Industrial Classification system. Important for the Survey of Industrial Research and Development is the creation of several new classifications that cover major performers of R&D in the United States. Among manufacturers, the computer and electronic products classification (NAICS 334) includes makers of computers and peripherals, semiconductors, and navigational and electromedical instruments. Among nonmanufacturing industries are information (NAICS 51) and professional, scientific, and technical services (NAICS 54). Information includes publishing, both print and electronic; broadcasting; and telecommunications. Professional, scientific, and technical services include a variety of industries. Of specific importance for the survey are engineering (NAICS 5413) and scientific R&D service industries (NAICS 5417).
The change of industry classification system affected most of the detailed statistical tables produced from the survey. Prior to the 1999 report, tables classified by industry contained the current survey's statistics plus statistics for 10 previous years. Because of the new classification system, the tables classified in the 1999–2005 reports contain only statistics for the study year and previous years back to 1999. However, to provide a bridge for users who wanted to make year-to-year comparisons below the aggregate level, in several tables in Research and Development in Industry: 1999 (NSF 2002c) and Research and Development in Industry: 2000 (NSF 2003a) statistics from the 1997 and 1998 cycles of the survey, previously classified and published using the SIC system, were reclassified using the new NAICS codes. These reclassified statistics were slotted using their new NAICS classifications alongside the 1999 and 2000 statistics, which were estimated using NAICS from the outset.
Industry Classification Methodology
For 1999 and 2000, the frame from which the statistical samples were selected was divided into two partitions based on total company employment. In the manufacturing sector, companies with employment of 50 or more were included in the large-company partition. In the nonmanufacturing sector, companies with employment of 15 or more were included in the large-company partition. Companies in the respective sectors with employment below these values but with at least 5 employees were included in the small-company partition. The purpose of partitioning the sample this way was to reduce the variability in industry estimates largely attributable to the random year-to-year selection of small companies by industry and the high sampling weights sometimes assigned to them. Therefore, the 1999 and 2000 reports published detailed industry statistics only from the large-company partition. Statistics from the small-company partition were included in the manufacturing, nonmanufacturing, and all-industries totals but were aggregated into "small-manufacturing" and "small-nonmanufacturing" classifications instead of being included in their respective industry classifications. Beginning with the 2001 survey, this practice was evaluated and discontinued; it was determined that data for small companies are more useful when included in their respective industries, even given the sampling concerns described above.
Beginning with the 2004 survey and continuing for 2005, some companies' electronically assigned industry codes were manually examined and changed to correct the growing attribution of R&D to the wholesale trade industries under the payroll-based classification methodology. This reclassification process and its effects on the statistics are described in detail in "Industry Reclassification," above, and in NSF 2007d.
Company Size Classifications
Beginning with the 1999 cycle of the survey, the number of company size categories used to classify survey statistics was increased (NSF 2001a). The original 6 categories were expanded to 10 to emphasize the role of small companies in R&D performance. The more detailed business size information also facilitates international comparisons; the new classifications of the U.S. statistics enable more direct comparisons with other countries' statistics. Generally, foreign countries report their industrial R&D statistics with more detailed company size classifications at the lower end of the scale than U.S. industrial R&D statistics traditionally have used. (For more information, visit the Organisation for Economic Co-operation and Development website at http://www.oecd.org/.)
Revisions to Historical and Immediate Prior-Year Statistics
Revisions to historical statistics usually have been made because of changes in the industry classification of companies caused by changes in payroll composition detected when a new sample was drawn. Various methodologies have been adopted over the years to revise, or backcast, the data when revisions to historical statistics have become necessary. Documented revisions to the historical statistics from post-1967 surveys through 1992 are summarized in NSF 1994a and in annual reports for subsequent surveys. Detailed descriptions of the specific revisions made to the statistics from pre-1967 surveys are scarce, but the U.S. Bureau of the Census (1995) summarizes some of the major revisions.
Changes to reported data can come from three sources: respondents, analysts involved in survey and statistical processing, and the industry reclassification process. Prior to 1995, routine revisions were made to prior-year statistics based on information from all three sources. Consequently, results from the current-year survey were used not only to develop current-year statistics but also to revise immediate prior-year statistics. Beginning with the 1995 survey, this practice was discontinued. The reasons for discontinuation of this practice were annual sampling; continual strengthening of sampling methodology; and improvements in data verification, processing, and nonresponse follow-up. Moreover, it was not clear that respondents or those who processed the survey results had any better information a year after the data were first reported. Thus, it was determined that routinely revising published survey statistics increased the potential for error and often confused users of the statistics. Revisions are now made to historical and immediate prior-year statistics only if substantial errors are discovered.
For 1999, an error in the sample frame caused one very large company (based on payroll) to be selected for the sample and its statistical record to be assigned a large weight (see “Frame Creation and Industry Classification” and “Weighting, Maximum Weights, and Probabilities of Selection,” above). Because the company's record had received a large weight during 1999 sampling, the company was selected with certainty for the 2000 sample and assigned a weight of one (see “Identifying Ad Hoc Certainty Companies,” above). This sampling artifact caused an abnormally large decrease in the industry data, especially for sales and employment, when comparing the 2000 statistics with the statistics originally published for 1999. The weight in the company's record in the 1999 statistical file was corrected, and the 1999 statistics were revised and included in subsequent reports. R&D estimates for the company also were affected; however, the amount of R&D was relatively small, even after weighting.
As summarized above under “Character-of-Work Estimates,” reporting anomalies for the character-of-work survey items, especially for basic research, were discovered and investigated using data collected during the 1999 and 2000 cycles of the survey. Companies known to develop and manufacture products but that reported all of their R&D as basic research were contacted and queried regarding their R&D activities. After reviewing the definitions of basic research, applied research, and development, all but a few of these companies revised their distribution of R&D. Census, the collection and tabulation agent for the survey, was able to go back as far as 1998 and correct the statistical files. Consequently, the tables containing historical basic research, applied research, and development estimates have been revised and footnoted accordingly.
During statistical processing for the 2003 survey, two problems were discovered. The first involved a very large company classified among the manufacturing industries. The company was properly sampled for the survey and sent a questionnaire but did not respond. The company had responded to the survey in the late 1990s but not since then. In such cases, estimates for the missing data are made using imputation algorithms (see “Imputation for Item Nonresponse,” above). Using publicly available information, it was discovered that the amount of R&D imputed for the company for 2003 was much lower than the amount from the public sources. Further, amounts imputed since the company's last report were similarly much too low. The company was contacted, and it provided a corrected amount for 2003 and updated R&D amounts for several previous years. Consequently, the historical statistics for 1999–2002 were revised and the affected tables footnoted accordingly. The second problem involved another very large company that significantly revised its 2003 data, which were preprinted (see “Survey Questionnaires,” above) on its 2004 questionnaire. During 2003 the company had acquired a portion of another company that also had been in the survey in previous years. Through contact with the acquiring company, it was discovered that a significant amount of R&D had been reported twice to the 2003 and 2004 surveys. The double-counted portion of the data was corrected in both survey files, and tables in this report reflect the corrections.
Comparability from year to year may be affected by new sample design, annual sample selection, and industry shifts.
New Sample Design
By far the most profound influence on statistics from recent surveys occurred when the new sample design for the 1992 survey was introduced. Revisions to the 1991 statistics were dramatic (see Research and Development in Industry: 1992 (NSF 1995b) for a detailed discussion). Although the allocation of the sample has been changed somewhat, the sample designs used for subsequent surveys have been comparable to the 1992 sample design in terms of size and coverage.
Annual Sample Selection
With annual sampling (introduced in 1992), more year-to-year change is evident than when survey panels were used, for two reasons. First, prior to annual sampling, a wedging operation, which was performed when a new sample was selected, adjusted the data series gradually to account for the changes in classification (see the discussion on wedging later under “Time-Series Analyses,” below). Second, yearly correlation of R&D data is weakened when independent samples are drawn each year.
The industry classification of companies is redefined each year with the creation of the sampling frame. By redefining the frame, the sample reflects current distributions of companies by size and industry. A company may move from one industry to another because of changes in its payroll composition, which is used to determine the industry classification code; changes in the industry classification system itself; or changes in the way the industry classification code was assigned or revised during survey processing.
A company's payroll composition can change because of the growth or decline of product or service lines, the merger of two or more companies, the acquisition of one company by another, divestitures, or the formation of conglomerates. With the introduction of annual sampling, a company's industry designation could, although it rarely does, be reclassified every year. When companies shift industry classifications, the result is a downward movement in R&D expenditures in one industry that is balanced by an upward movement in another industry from one year to the next.
From time to time, the industry coding system used by federal agencies that publish industry statistics is changed or revised to reflect the changing composition of U.S. and North American industry. The SIC system, as revised in 1987, was used for statistics developed from the 1988–91 panel surveys and the 1992–98 annual surveys. As discussed above, the industrial classification system has been completely changed, and beginning with the 1999 cycle of the survey, NAICS is now used.
Large year-to-year changes may occur because of the way industry classifications are assigned during statistical processing. As discussed above, a company's industry classification is a function of its primary activity based on payroll, which is not necessarily the primary source of its R&D activity. If the majority of a company's payroll shifts to an activity other than an R&D-related activity, for example, trade, the classification of its R&D similarly shifts to the new activity. Further, the design of the statistical sample sometimes contributes to large year-to-year changes in industry estimates. Relatively few companies perform R&D, and there is no national register of industrial R&D performers; thus, a large statistical “net” must be cast to capture new R&D performers. When these companies are sampled for the first time, they are often given weights much higher than they would be given if the size and the amount of R&D they perform were known at the time of sampling. After the size of the company and the amount of R&D performed are discovered via the first survey, the weight assigned for subsequent surveys is adjusted. The method used to classify firms during survey processing was revised slightly in 1992. The effect of the changes, which research has shown was minor, is discussed in detail in U.S. Bureau of the Census 1994a and 1994e. The current method used to classify firms was discussed previously under “Frame Creation and Industry Classification,” above. Methods used for past surveys are discussed in U.S. Bureau of the Census 1995.
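The payroll-based, hierarchical assignment described under “Frame Creation and Industry Classification” can be sketched as follows. This is an illustrative simplification (it ignores combined sector codes such as 31–33, and the data layout and function name are hypothetical):

```python
from collections import defaultdict

def assign_company_naics(establishment_payroll):
    """Pick a single 6-digit NAICS code for a company by choosing, at
    each level (2-, 3-, 4-, then 6-digit), the code prefix with the
    highest aggregated payroll within the branch chosen at the level
    above. `establishment_payroll` maps 6-digit NAICS codes to payroll."""
    prefix = ""
    for digits in (2, 3, 4, 6):
        totals = defaultdict(float)
        for naics, payroll in establishment_payroll.items():
            if naics.startswith(prefix):  # restrict to the branch already chosen
                totals[naics[:digits]] += payroll
        prefix = max(totals, key=totals.get)
    return prefix
```

Because the choice is made level by level, a company's classification follows its dominant payroll activity rather than its dominant R&D activity, which is how growth in selling and distribution payroll can pull an R&D-intensive firm into a trade classification.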
Capturing Small and Nonmanufacturing R&D Performers
Until the 1967 cycle of the survey, the sample was selected at irregular intervals; after 1967, samples were selected approximately every 5 years, for 1971, 1976, 1981, and 1987. In intervening years, a panel of the largest firms known to perform R&D was surveyed. For example, a sample of about 14,000 firms was selected for the 1987 survey. For the 1988–91 surveys, about 1,700 of these firms were resurveyed annually; the other firms did not receive survey questionnaires, and their R&D data were estimated. This sample design was adequate during the survey's early years because R&D performance was concentrated in relatively few manufacturing industries. However, as more and more firms began entering the R&D arena, the old sample design proved increasingly inefficient because it did not capture births of new R&D-performing firms. The entry of fledgling R&D performers into the marketplace was completely missed during panel years. Additionally, beginning in the early 1970s, the need for more detailed R&D information for nonmanufacturing industries was recognized. At that time, the broad industry classifications “miscellaneous business services” and “miscellaneous services” were added to the list of industry groups for which statistics were published. By 1975, about 3% of total R&D was performed by firms in nonmanufacturing industries (NSF 1994a, 1995a, and 1996a).
During the mid-1980s there was evidence that a significant amount of R&D was being conducted by an increasing number of companies classified among the nonmanufacturing industries. Again, the number of industries used to develop the statistics for nonmanufacturing was increased. Consequently, the annual reports in this series for 1987–91 included separate R&D estimates for firms in the communication, utility, engineering, architectural, research, development, testing, computer programming, and data processing service industries; hospitals; and medical labs. Approximately 9% of the estimated industrial R&D performance during 1987 was undertaken by nonmanufacturing firms.
After the list of industries for which statistics were published was expanded, it became clear that the sample design itself should be changed to reflect the widening population of R&D performers among firms in the nonmanufacturing industries (NSF 1995a) and small firms in all industries so as to account better for births of R&D-performing firms and to produce more reliable statistics. Beginning with the 1992 survey, NSF decided to (1) draw new samples with broader coverage annually and (2) increase the sample size to approximately 25,000 firms. As a result of the sample redesign, for 1992 the reported nonmanufacturing share was (and has continued to be) 25%–30% of total R&D (NSF 1997a, 1998a, 1999a, 2000a, 2001a, 2002a, 2003b, 2004, 2005a, 2006a, 2007c, and 2008a).
The statistics resulting from this survey are better indicators of changes in, rather than absolute levels of, R&D spending and personnel. In spite of that, the statistics are often used as if they were prepared using the same collection, processing, and tabulation methods over time for longitudinal analyses. Such uniformity has not been the case. Since the survey was first fielded, improvements have been made to increase the reliability of the statistics and to make the survey results more useful. To that end, past practices have been changed and new procedures instituted. Preservation of the comparability of the statistics has, however, been an important consideration in making these improvements. Nonetheless, changes to survey definitions, the industry classification system, and the procedure used to assign industry codes to multiestablishment companies have had some, though not substantial, effects on the comparability of statistics. For discussions of each of these changes, see U.S. Bureau of the Census 1994g; for considerations of comparability, see U.S. Bureau of the Census 1993 and 1994e.
The aspect of the survey that had the greatest effect on comparability was the selection of samples at irregular intervals and the use of a subset or panel of the last sample drawn to develop statistics for intervening years. As discussed earlier, this practice introduced cyclical deterioration of the statistics. As compensation for this deterioration, periodic revisions were made to the statistics produced from the panels surveyed between sample years. Early in the survey's history, various methods were used to make these revisions (U.S. Bureau of the Census 1995). After 1976 and until the 1992 advent of annual sampling, a linking procedure called wedging was used. In wedging, the two sample years on each end of a series of estimates served as benchmarks in the algorithms used to adjust the estimates for the intervening years. The process was dubbed wedging because of the wedgelike area produced on a graph that compares originally reported statistics with the revised statistics that resulted after linking. For a full discussion of the mathematical algorithm used for the wedging process that linked statistics from the 1992 survey with those from the 1987 survey, see U.S. Bureau of the Census 1994g and NSF 1995b.
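The principle behind wedging can be illustrated with a simple linear version. The actual Census algorithm differs in detail (see U.S. Bureau of the Census 1994g and NSF 1995b), so the sketch below is only an assumed simplification of the idea:

```python
def wedge(panel_estimates, benchmark_start, benchmark_end):
    """Linearly spread the gap between panel-based estimates and the two
    full-sample benchmark values across the intervening years.
    `panel_estimates[0]` and `panel_estimates[-1]` correspond to the two
    benchmark (sample) years on each end of the series."""
    n = len(panel_estimates) - 1
    gap_start = benchmark_start - panel_estimates[0]
    gap_end = benchmark_end - panel_estimates[-1]
    return [
        estimate + gap_start + (gap_end - gap_start) * i / n
        for i, estimate in enumerate(panel_estimates)
    ]
```

The adjustments grow linearly from one benchmark toward the other, which is what produces the wedge-shaped area between the originally reported and revised series on a graph.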
Comparisons to Other Statistical Series
NSF collects data on federally financed R&D both from federal funding agencies, using the Survey of Federal Funds for Research and Development, and from performers of the R&D—industry, federally funded research and development centers, universities, and other nonprofit organizations—using the Survey of Industrial Research and Development and other surveys (http://www.nsf.gov/statistics/survey.cfm). NSF publishes data on federal R&D budget authority and outlays, in addition to federal obligations, all supplied by federal agencies. These terms are defined below (NSF 2008b):
National R&D expenditure totals in NSF's National Patterns of R&D Resources report series are primarily constructed with data reported by performers and include estimates of federal R&D funding to these sectors. But until performer-reported survey data on federal R&D expenditures are available from industry and academia, data collected from the federal agency funders of R&D are used to project R&D performance. When survey data from the performers subsequently are tabulated, as they were for this report, these statistics replace the projections. Until the mid-1990s, the two survey systems tracked fairly closely. For example, in 1980, performers reported using $29.5 billion in federal R&D funding, and federal agencies reported total R&D funding between $29.2 billion in outlays and $29.8 billion in obligations (NSF 1996b). However, since the mid-1990s, the two series have diverged considerably. The difference in the federal R&D totals appears to be concentrated in funding of industry, primarily aircraft and missile firms, by the Department of Defense. Overall, industrial firms have reported significant declines in federal R&D support since 1990 (table 2), whereas federal agencies have reported level or slightly increased funding of industrial R&D (NSF 2006b, 2007b, and 2008c). NSF continues to identify and examine the factors behind these divergent trends.
Employment, FTE R&D scientists and engineers. Number of people employed in the 50 U.S. states and DC by R&D-performing companies who were engaged in scientific or engineering work at a level that required knowledge, gained either formally or by experience, of engineering or of the physical, biological, mathematical, statistical, or computer sciences equivalent to at least that acquired through completion of a 4-year college program with a major in one of those fields. The statistics show full-time-equivalent (FTE) employment of persons employed by the company during the January following the survey year who were assigned full time to R&D, plus a prorated number of employees who worked on R&D only part of the time.
Employment, total. Number of people employed in the 50 U.S. states and DC by R&D-performing companies in all activities during the pay period that included the 12th of March of the study year (March 12 is the date most employers use when paying first quarter employment taxes to the Internal Revenue Service).
Federally funded R&D centers (FFRDCs). R&D-performing organizations administered by industrial, academic, or other institutions on a nonprofit basis and exclusively or substantially financed by the federal government. To avoid the possibility of disclosing company-specific information and therefore violating the confidentiality provisions of Title 13 of the United States Code, beginning in 2001 data for industry-administered FFRDCs are now collected through NSF's annual academic R&D expenditure survey, the Survey of Research and Development Expenditures at Universities and Colleges, as are data from FFRDCs administered by academic institutions and nonprofit organizations. More information about this survey is available from NSF's Division of Science Resources Statistics website at http://www.nsf.gov/statistics/rdexpenditures/. For current lists of FFRDCs, visit http://www.nsf.gov/statistics/ffrdc/.
Funds for R&D, company and other nonfederal. The cost of R&D performed within the company and funded by the company itself or by other nonfederal sources in the 50 U.S. states and DC; does not include the cost of R&D funded by the company but contracted to outside organizations, such as research institutions, universities and colleges, nonprofit organizations, or—to avoid double counting—other companies.
Funds for R&D, federal. The cost of R&D performed within the company in the 50 U.S. states and DC funded by federal R&D contracts, subcontracts, R&D portions of federal procurement contracts and subcontracts, grants, or other arrangements; does not include the cost of R&D supported by the federal government but contracted to outside organizations, such as research institutions, universities and colleges, nonprofit organizations, or other companies.
Funds for R&D, total. The cost of R&D performed within the company in its own laboratories or in other company-owned or company-operated facilities in the 50 U.S. states and DC, including expenses for wages and salaries, fringe benefits for R&D personnel, materials and supplies, property and other taxes, maintenance and repairs, depreciation, and an appropriate share of overhead; does not include capital expenditures or the cost of R&D contracted to outside organizations, such as research institutions, universities and colleges, nonprofit organizations, or—to avoid double-counting—other companies.
Funds per R&D scientist or engineer. All costs associated with the performance of industrial R&D (salaries, wages, and fringe benefits paid to R&D personnel; materials and supplies used for R&D; depreciation on capital equipment and facilities used for R&D; and any other R&D costs) divided by the number of R&D scientists and engineers employed in the 50 U.S. states and DC. To obtain a per person cost of R&D for a given year, the total R&D expenditures of that year were divided by an estimate of the number of full-time-equivalent (FTE) scientists and engineers engaged in the performance of R&D for that year. For accuracy, this estimate was the mean of the number of FTE R&D-performing scientists and engineers reported in January for the year in question and in January of the subsequent year.
Net sales and receipts. Dollar values for goods sold or services rendered by R&D-performing companies to customers outside the company, including the federal government, less such items as returns, allowances, freight charges, and excise taxes. Domestic intracompany transfers and sales by foreign subsidiaries were excluded, but transfers to foreign subsidiaries and export sales to foreign companies were included.

R&D and industrial R&D. R&D is the planned, systematic pursuit of new knowledge or understanding toward general application (basic research); the acquisition of knowledge or understanding to meet a specific, recognized need (applied research); or the application of knowledge or understanding toward the production or improvement of a product, service, process, or method (development). Basic research analyzes properties, structures, and relationships toward formulating and testing hypotheses, theories, or laws; applied research is undertaken either to determine possible uses for the findings of basic research or to determine new ways of achieving specific, predetermined objectives; and development draws on research findings or other scientific knowledge to produce new or significantly improved products, services, processes, or methods.
As used in this survey, industrial basic research is the pursuit of new scientific knowledge or understanding that does not have specific immediate commercial objectives, although it may be in fields of present or potential commercial interest; industrial applied research is investigation that may use findings of basic research toward discovering new scientific knowledge that has specific commercial objectives with respect to new products, services, processes, or methods; and industrial development is the systematic use of the knowledge or understanding gained from research or practical experience directed toward the production or significant improvement of useful products, services, processes, or methods, including the design and development of prototypes, materials, devices, and systems. The survey covers industrial R&D performed by people trained, either formally or by experience, in engineering or in the physical, biological, mathematical, statistical, or computer sciences and employed by a publicly or privately owned firm engaged in for-profit activity in the United States. Specifically excluded from the survey are quality control, routine product testing, market research, sales promotion, sales service, and other nontechnological activities; routine technical services; and research in the social sciences or psychology.
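The per-person figure described above under “Funds per R&D scientist or engineer” is a simple calculation; a sketch with a hypothetical function name:

```python
def funds_per_fte(total_rd_expenditures, fte_jan_survey_year, fte_jan_next_year):
    """Cost of R&D per FTE R&D scientist or engineer: total R&D
    expenditures for the year divided by the mean of the FTE counts
    reported for January of the survey year and January of the
    following year."""
    mean_fte = (fte_jan_survey_year + fte_jan_next_year) / 2
    return total_rd_expenditures / mean_fte
```

Averaging the two January counts centers the denominator on the survey year, so the ratio is not distorted by staffing changes during the year.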
 In the Survey of Industrial Research and Development and in the publications presenting statistics resulting from the survey, the terms firm, company, and enterprise are used interchangeably. Industry refers to the 2-, 3-, or 4-digit North American Industry Classification System (NAICS) codes or group of NAICS codes used to publish statistics resulting from the survey.
 The 1999 survey was the first in which companies were classified using NAICS. Prior to 1999, the Standard Industrial Classification (SIC) system was used. The two systems are discussed later under Comparability of Statistics.
 Form RD-1 is a revised version of Form RD-1L, formerly used to collect data from large R&D performers in odd-numbered years. For even-numbered years, an abbreviated questionnaire, Form RD-1S, was used. Beginning in 1998, Form RD-1L was streamlined, renamed Form RD-1, and the odd/even-numbered year cycle was abandoned.
 Annual sampling also remedies the cyclical deterioration of the statistics that results from changes in a company's payroll composition because of product line and corporate structural changes.