The data used to describe financial and infrastructure resources for academic R&D are derived from three National Science Foundation (NSF) surveys. These surveys use similar but not always identical definitions, and the nature of the respondents also differs across the surveys. The three main surveys are as follows:

- Survey of Federal Funds for Research and Development
- Survey of Research and Development Expenditures at Universities and Colleges (Academic R&D Expenditures Survey)
- Survey of Science and Engineering Research Facilities

The first survey collects data from federal agencies, whereas the last two collect data from universities and colleges.
Data presented in the first part of this section, "Academic R&D Within the National R&D Enterprise," are derived from the NSF series National Patterns of R&D Resources, which sums results from several NSF surveys of the various sectors of the U.S. economy (for example, universities, businesses, and the federal government) so that the components of the overall R&D effort are placed in a national context. These data are reported on a calendar-year basis, and the data for 2008 are preliminary. Since 1998, the series has also attempted to eliminate double counting in the academic sector by subtracting current fund expenditures for separately budgeted S&E R&D that are passed through to other institutions via subcontracts and similar collaborative research arrangements.
Data in subsequent portions of the section derive from the Survey of Research and Development Expenditures at Universities and Colleges (Academic R&D Expenditures Survey). They are reported on an academic fiscal-year basis (e.g., FY 2008 covers July 2007 to June 2008 for most institutions) and do not net out the funds passed through to other institutions; therefore, they differ from those reported earlier. Data on major funding sources, funding by institution type, distribution of R&D funds across academic institutions, and expenditures by field and funding source are also derived from this survey.
The data on "Top Agency Supporters" and "Agency Support by Character of Work" in the "Federal Support of Higher Education R&D" section come from NSF's Survey of Federal Funds for Research and Development. This survey collects data on R&D obligations for each federal fiscal year (e.g., FY 2008 covers October 2007 through September 2008) from 30 federal agencies. Data for FY 2008–09 are preliminary estimates based on administration budget proposals and do not necessarily represent actual appropriations. It should be noted that federal obligation data (e.g., $25.7 billion in federal FY 2008) do not match the federally funded expenditures data reported by academic institutions ($31.2 billion in academic FY 2008) for several reasons. First, the periods covered by the two surveys differ slightly; second, there is necessarily a lag between the obligation date and the beginning of project expenditures, and some awards span multiple years; and third, some of the expenditures data double count federal R&D awards that are reported both by the primary institution receiving the funds and again by an academic subrecipient to whom funds are passed through (about $1.5 billion in FY 2008).
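The pass-through adjustment described above can be illustrated with a small, purely hypothetical calculation. The institution names and dollar amounts below are invented for illustration; only the general netting procedure comes from the text.

```python
# Hypothetical illustration of netting out pass-through R&D funds.
# Each institution reports its total R&D expenditures, including award
# money it passed through to subrecipients; the subrecipients report
# those same dollars again, creating a double count.

reports = {
    # institution: (total reported, of which passed through to others)
    # dollar figures in $ millions, invented for this sketch
    "University A": (10.0, 2.0),
    "University B": (6.0, 0.0),
    "University C": (4.0, 0.5),
}

gross_total = sum(total for total, _ in reports.values())
pass_through = sum(pt for _, pt in reports.values())

# Subtracting the pass-through amounts removes the double count, as the
# National Patterns series has done for the academic sector since 1998.
net_total = gross_total - pass_through

print(f"Gross (double-counted) total: ${gross_total:.1f}M")  # $20.0M
print(f"Pass-through subtracted:      ${pass_through:.1f}M")  # $2.5M
print(f"Net R&D expenditures:         ${net_total:.1f}M")     # $17.5M
```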
Data on research equipment are taken from the Survey of Research and Development Expenditures at Universities and Colleges. Data on research facilities and cyberinfrastructure are taken from the Survey of Science and Engineering Research Facilities and are also reported by academic fiscal year. The population for this survey is a subset of the population for the Academic R&D Expenditures Survey and includes all institutions reporting $1 million or more in current fund expenditures for R&D. The Facilities survey was broadened starting in FY 2003 to include data on computing and networking capacity. Although terms are defined specifically in each survey, in general, facilities expenditures are classified as capital projects, are fixed items such as buildings, often cost millions of dollars, and are not included in R&D expenditures as reported here. Research equipment, however, is purchased with current funds (those in the yearly operating budget for ongoing activities) and is included within R&D expenditures. Because the categories are not mutually exclusive, some large instrument systems could be classified as either facilities or equipment. Generally, academic institutions account separately for capital projects and current fund expenditures.
Redesign of the Survey of R&D Expenditures at Universities and Colleges
The Survey of Research and Development Expenditures at Universities and Colleges has been conducted annually since 1972. In 2007, NSF began an intensive 3-year effort to evaluate and redesign the survey. The goals of the redesign were to (1) update the survey instrument to reflect current accounting principles in order to obtain more valid and reliable measurements of the amount of academic R&D spending in the United States, (2) expand the current survey items to collect the additional detail most often requested by data users, and (3) evaluate the feasibility of expanding the scope of data collected beyond that of R&D expenditures.
As part of the redesign effort, NSF held data user workshops and expert panel meetings, worked with accounting and survey methodology experts, and visited more than 40 institutions to receive input on possible changes to the survey. A pilot test of the redesigned survey was administered to 40 institutions during the fall of 2009, and full implementation of the redesigned survey is planned for the fall of 2010.
The new survey, now titled the "Higher Education R&D Survey," will continue to capture core information on R&D expenditures by sources of funding and field. In addition, it will include the following data:
In addition to these changes, NSF has been working with data users and experts to explore the feasibility of collecting systematic data on both R&D personnel and intellectual property and commercialization within universities and colleges. It is expected that additional questions on these topics will be added to the Higher Education R&D Survey in future years.
Academic earmarking is the congressional practice of providing federal funds to educational institutions for facilities or projects without merit-based peer review. Obtaining exact figures for either the amount of funds or the number of projects earmarked for universities and colleges, overall or for research, is difficult. There is no accepted definition of an earmark, and funding legislation is often obscure in its description of the earmarked projects. Broad estimates using a consistent approach in compiling these data are as follows.
Academic earmarks stood at an estimated $2.3 billion in FY 2008 (Brainard and Hermes 2008), a 15% increase over the estimated $2.0 billion last reported, for FY 2003, in the Chronicle of Higher Education (Brainard and Borrego 2003). Approximately two-thirds ($1.6 billion) of the FY 2008 funds and $1.4 billion of the FY 2003 funds were for R&D projects, R&D equipment, or construction or renovation of R&D laboratories.
EPSCoR, the Experimental Program to Stimulate Competitive Research, originated as a response to a number of stated federal objectives. Section 3(e) of the National Science Foundation Act of 1950, as amended, states that "it shall be an objective of the Foundation to strengthen research and education in the sciences and engineering, including independent research by individuals, throughout the United States, and to avoid undue concentration of such research and education."
In 1978, Congress authorized NSF to implement EPSCoR in response to broad public concerns about the extent of geographical concentration of federal funding for R&D. Eligibility for EPSCoR participation was limited to those jurisdictions that historically had received lesser amounts of federal R&D funding and had demonstrated a commitment to develop their research bases and to improve the quality of S&E research conducted at their universities and colleges.
The success of the NSF EPSCoR programs during the 1980s subsequently prompted the creation of EPSCoR and EPSCoR-like programs in six other federal agencies: the Departments of Energy, Defense, and Agriculture; the National Aeronautics and Space Administration; the National Institutes of Health; and the Environmental Protection Agency. In FY 1992, the EPSCoR Interagency Coordinating Committee (EICC) was established by the federal agencies with EPSCoR or EPSCoR-like programs. The major objectives of the EICC focused on improving coordination among and between the federal agencies in implementing EPSCoR and EPSCoR-like programs consistent with the policies of the participating agencies.
EPSCoR seeks to increase the R&D competitiveness of an eligible state through the development and utilization of the science and technology (S&T) resources residing in its colleges and universities. It strives to achieve this objective by (1) stimulating sustainable S&T infrastructure improvements at the state and institutional levels that significantly increase the ability of EPSCoR researchers to compete for federal and private sector R&D funding and (2) accelerating the movement of EPSCoR researchers and institutions into the mainstream of federal and private sector R&D support.
In FY 2008, the seven EICC agencies invested a total of $419 million in EPSCoR and EPSCoR-like programs, up from approximately $97 million in 1999 (see table
In a congressionally mandated study of women faculty in research universities, the National Research Council (2009) found that women faculty do as well as or better than men in hiring, promotions, and access to university resources. The study focused on tenured or tenure-track faculty in six disciplines (biology, chemistry, civil engineering, electrical engineering, mathematics, and physics) at 89 research universities. Women constituted 12% of the faculty in the disciplines and universities studied.

The study found that in these research universities, women were a lower percentage of applicants for tenured or tenure-track positions than they were of recent doctorates, especially in chemistry and biology. However, women were a higher percentage of interviewees than of applicants and a higher percentage of those hired than of interviewees. The study also found that women constituted a lower percentage of tenure candidates than of assistant professors but that, among those up for tenure review, women were more likely than men to receive tenure. The study found little difference in lab space, equipment, or percentage of time spent teaching or doing research and little difference in outcomes (e.g., honors, funding, salaries) of tenured or tenure-track faculty, with a few exceptions, including salaries of full professors and publications.

Because of its specific mandate, the report did not address women who did not apply to research universities or those who left them, but it noted the need for further research in these areas. By necessity, it also did not address other types of academic employment, other types of academic institutions, or other issues affecting women's employment in academia, including dual careers, the effects of children and family obligations, and institutional climate. Many other studies of women in academia address some of these issues (e.g., Long 2001; COSEPUP 2007; NSF 2004; Ginther 2001; Hosek et al. 2005; Rosser, Daniels, and Wu 2006; Fox 2005).
The article counts, coauthorship data, and citations discussed in this section are derived from S&E articles, notes, and reviews published in a set of scientific and technical journals tracked by Thomson Reuters in the Science Citation Index (SCI) and Social Sciences Citation Index (SSCI) (http://thomsonreuters.com/products_services/science/). The data exclude letters to the editor, news stories, editorials, and other material whose purpose is not the presentation or discussion of scientific data, theory, methods, apparatus, or experiments. The data are refined in a database prepared for NSF by The Patent Board™, formerly CHI Research, Inc., under a license agreement between The Patent Board™ and Thomson Reuters.
Journal Selection. Since Science and Engineering Indicators 2004, this section has used a changing set of journals that reflects the current mix of journals and articles in the world, rather than a fixed journals set. Thomson Reuters selects journals each year as described at http://www.thomsonreuters.com/products_services/science/free/essays/journal_selection_process/, and the selected journals become part of the SCI and SSCI portions of the Web of Science, a digital data product. Using citation data, Thomson Reuters then creates subsets of the SCI and SSCI that are available on CD-ROM and in print. These published data files are notable for the relatively high citation rank of the journals within their corresponding S&E subfields and the exclusion of journals of only regional interest, especially in the social sciences. Likewise, a declining citation rank can result in the removal of a journal from these highly selective data products.
Using the CD-ROM data, The Patent Board™ updates the NSF master file of journals; the number of journals analyzed by NSF from the SCI/SSCI was 4,093 in 1988 and 5,266 in 2008. These journals give good coverage of a core set of internationally recognized peer-reviewed scientific journals. The coverage extends to electronic-only journals and to print journals with electronic versions. In the period 1995–2008, the database contained 9,358,420 S&E notes, reviews, and articles.
Article Data. Except where noted, author means departmental or institutional author. Articles are attributed to countries or sectors by the country or sector of the institutional address(es) given in the articles. If no institutional affiliation is listed, the article is excluded from the counts in this chapter. Likewise, coauthorship refers to institutional coauthorship. An article is considered coauthored only if it shows different institutional affiliations or different departments of the same institution; multiple listings of the same department of an institution are considered as one institutional author. The same logic applies to cross-sector and international collaboration.
Two methods of counting articles are used: fractional and whole counts. Fractional counting is used for article and citation counts. In fractional counting, credit for multiauthor articles is divided among the collaborating institutions or countries based on the proportion of their participating departments or institutions. Whole counting is used for coauthorship data. In whole counting, each institution or country receives one credit for its participation in the article.
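The two counting methods can be sketched for a single article as follows. The institution and department names are invented for illustration; the deduplication and credit-splitting rules follow the description above.

```python
from collections import Counter

# One article's author affiliations as (institution, department) pairs;
# names are invented. Multiple listings of the same department of the
# same institution count as a single institutional author.
affiliations = [
    ("Univ X", "Physics"),
    ("Univ X", "Physics"),    # duplicate listing, counted once
    ("Univ X", "Chemistry"),
    ("Univ Y", "Biology"),
]
authors = set(affiliations)   # 3 distinct institutional authors

# Whole counting (used for coauthorship data): each participating
# institution receives one full credit for the article.
whole = {inst: 1.0 for inst, _ in authors}

# Fractional counting (used for article and citation counts): credit is
# split in proportion to each institution's share of the distinct
# institutional authors, so the credits for one article sum to 1.
per_inst = Counter(inst for inst, _ in authors)
total = sum(per_inst.values())   # 3
fractional = {inst: c / total for inst, c in per_inst.items()}

print(whole)       # Univ X and Univ Y each get 1.0
print(fractional)  # Univ X: 2/3, Univ Y: 1/3
```

Note that the article counts as coauthored under the rules above, since it lists more than one distinct institutional author.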
Several changes introduced in this edition of Indicators inhibit comparison with data from the same source used in previous editions.
The regions and countries/economies included in the bibliometric data are listed in appendix table
Iran-based authors produced 4,400 articles in 2007, and Iran's S&E publication growth rate has been the fastest in the world. Growth in publications has been strong across many fields, resulting in a 2007 publication portfolio weighted toward chemistry (30% of the total), engineering (15%), the medical sciences (14%), the biological sciences (14%), and physics (14%) (appendix tables 5-25 through 5-38).
Iran has an evolving science policy framework and a growing number of research institutions to carry out the framework (UNCTAD 2005). The country has a growing adult literacy rate (77% in 2003), but its economy is dominated by extraction and export of oil and gas. Current policy envisions a more diversified economy and a transition to development and production of petrochemicals and other high-technology products.
Iran's pattern of international coauthorship has mirrored that of other countries with immature S&E capabilities: such countries coauthor internationally at very high rates, but these rates decline as domestic capacity builds. Iran's rate of international coauthorship was 42% in 1988 but had declined to 25% in 2008, near the world average of

Despite a declining rate of international coauthorship, Iran's total number of international coauthorships has been growing steadily, and coauthorships with each of its main foreign coauthor countries have also been growing. Table
To address the need for indicators of interdisciplinary research (IDR), NSF/SRS commissioned a panel of researchers* to review recent attempts to measure the growth of interdisciplinary S&E research. The panel reviewed 74 publications dealing with IDR. It concluded that, despite increased study of IDR in the literature, existing indicators of IDR based solely on bibliometric data were unsatisfactory for management and policy purposes and relied on an overly simplistic concept of IDR (Wagner, Roessner, and Bobb 2009; Wagner et al. 2009). The panel also found that problems with current data sources and analytical techniques raise questions about the validity of these measures.
The panel concluded that conceptualization of IDR involves both the outputs of research and research processes: it stressed that both social developments (e.g., new S&E working relationships, new career trajectories, new institutions) and cognitive developments (e.g., new theory, new ways of using existing data, new problem frameworks) are essential markers of IDR. Bibliometric data alone do not capture these dimensions of IDR.
The panel identified an emerging consensus that studies of IDR need measures of knowledge integration that could be applied to the work of either a team of researchers or an individual. However, they found limited agreement on what such integration entails and even less agreement on what would count as evidence of it.
The panel also assessed the limitations of current attempts at measurement of IDR, most of which use Thomson Reuters data products. These are organized into a structure based on the discipline of the journal in which articles are published. Studies then measure the "cognitive distance" reflected by the diversity of citations in their target data (authors, articles, journals) from the Thomson Reuters journal structure and treat this distance as the measure of IDR.
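One way such a citation-diversity measure can work is sketched below. This is a minimal illustration, not any specific published indicator; the discipline labels and citation counts are invented, and Shannon entropy is used here simply as one common way to quantify how an article's references spread across journal disciplines.

```python
import math
from collections import Counter

# Disciplines (per a journal classification scheme) of the journals
# cited by one target article; labels and counts are invented.
cited_disciplines = ["chemistry"] * 6 + ["physics"] * 3 + ["biology"] * 1

counts = Counter(cited_disciplines)
n = sum(counts.values())

# Shannon entropy of the discipline distribution: 0 when all citations
# fall in a single discipline, higher when they spread across many.
entropy = -sum((c / n) * math.log(c / n) for c in counts.values())

# One simple (illustrative) normalization: divide by the maximum
# possible entropy given the number of disciplines cited, yielding a
# value between 0 and 1.
diversity = entropy / math.log(len(counts))
```

More elaborate measures of this kind also weight the distances between disciplines, not just the spread across them, which is part of why the panel judged citation-based indicators alone to be insufficient.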
Alternative analytical techniques are under development. These use statistical and visualization techniques that seek to detect certain hidden structures in the data that may indicate IDR. However, these techniques still require validation. Bibliometric measures will also need to be supplemented by survey data, ethnographic studies, expert review, and other evidence to confirm the degree of interdisciplinarity in research output. Indicators of IDR may also vary depending on user needs. For example, measurements of IDR appropriate for projects, programs, and nations are likely to be different. The panel summarized its conclusions as follows (Wagner, Roessner, and Bobb 2009, pp. 9-10, 16):
*The assessment was performed by three researchers at SRI International, Caroline S. Wagner, J. David Roessner, and Kamau Bobb, working with the following experts on interdisciplinarity and visualization: Katy Börner, Indiana University; Kevin W. Boyack, SciTech Strategies, Inc.; Joann Keyton, North Carolina State University; Julie Thompson Klein, Wayne State University; and Ismael Rafols, University of Sussex. These eight researchers are referred to as the "panel" in this sidebar.
The publication of research results in peer-reviewed scientific journals is a key output of scientific research. In the early 1990s, the number of S&E articles published by U.S. academic scientists and engineers in the world's major peer-reviewed journals plateaued while resource inputs (funds and personnel) kept increasing (figure

An examination of relationships among publications, resource inputs, and institutional characteristics in the top 200 academic R&D institutions found that, with the possible exception of S&E faculty and the number of S&E doctoral recipients, inflation-adjusted resources for publications have increased faster than the number of publications. From 1990 to 2001, resource inputs increased per publication, with about 29% more resources consumed per fractional count publication in 2001 than in 1990. This pattern of increasing inputs required to yield the same quantity of publication outputs occurred across the entire U.S. academic system. Possible reasons for the increasing inputs per article include a rise in the complexity of research required for publication; costs for faculty, postdocs, S&E doctoral recipients, and research materials and equipment that are increasing faster than the gross domestic product implicit price deflator; and increased communication costs for collaboration (NSF/SRS 2010, forthcoming). In figure