The Changing Research and Publication Environment in American Research Universities
Researchers produce publications by doing research and then disseminating the results. Changes in the quantity of articles can occur because of changes in resources—funds, personnel, facilities, and equipment—being devoted to research. Quantitative data are well suited to describing these changes. Other phases of the SRS study explore the relationship between changes in resources and publication output.
Changes in article counts can also occur when researchers adopt either different ways of doing research or different systems of dissemination. Qualitative data from interviews with knowledgeable informants provide valuable indications of these kinds of changes. This report presents what the study's university informants said about what is and is not changing in each of these two areas. The section on the research process deals with the changing characteristics of the people, sources of information, equipment, and facilities that are involved in generating research findings. The section on how research is disseminated describes consequential developments in the way researchers report results and get credit in their professions for having generated findings.
Changes in research and dissemination processes, in turn, are affected by the larger institutional and global environments in which scientific inquiry occurs. Institutions such as universities, funding agencies, and the commercial market affect how academic scientists and engineers do their work. The section on the institutional environment deals with changes in the opportunities and constraints these institutions create for university-based researchers. The focus is on actions that these institutions induce in research and publication. The section on the global environment addresses how research in other countries and the relationship between U.S. and world science are changing.
The Research Process
University researchers and administrators say that research has become more collaborative in practically all respects. Scientific articles more frequently involve authors from more laboratories, more institutions, and institutions in more countries. Collaborators are more often trained in different disciplines. Some of the study's informants also noted that collaborations with researchers in other institutional sectors, especially industry, were becoming more common.
When researchers refer to collaboration, they describe a range of relationships falling between two polar types. At one pole are collaborations among people who have complementary data, access to complementary equipment, or expertise in complementary techniques. In these collaborations, coauthors are responsible for different parts of an article, and few issues arise that affect the article as a whole or require substantial interaction among coauthors. At the other pole are collaborations among researchers with fundamentally different perspectives who need a common understanding of the issues that their work will address and cannot divide their joint work into discrete pieces that individual scientists can handle autonomously. These collaborations involve extensive discussion among cooperating researchers and can transform the way individual researchers approach the problems they study. For brevity, this report refers to these two types of collaboration as "complementary" and "integrative," respectively.
Since researchers themselves do not conventionally divide collaborations into these categories, they did not always clearly indicate which type of collaboration accounted for the overall increase in collaboration. The study team received strong indications, however, that both played a part. Numerous factors facilitate the increase in collaboration:
The two polar types of collaboration can have different implications for output. Complementary collaborations take advantage of the efficiencies ordinarily associated with the division of labor when applied to well-understood tasks. Integrative collaborations involve the communications overhead associated with overcoming differences and developing teams. In describing integrative collaborations, some researchers praised the intellectual excitement of bringing different perspectives to bear on a common problem and pointed to innovations that resulted. But researchers also said that the effort to develop shared perspectives in a diverse group means that the distinctive products of this type of collaboration take more time to produce.
Researchers report that they do more interdisciplinary work. Some of this involves collaboration with what are conventionally thought of as neighboring disciplines. Thus one astronomer stated that the different subdisciplines within astronomy are more likely to collaborate than in the past. But some interdisciplinary work connects researchers in historically disparate fields. The study team was often told that biology in particular had become markedly more interdisciplinary, developing increasingly strong links to physics, mathematics, statistics, engineering, and various kinds of environmental science. Stronger ties among the disciplines within biology, manifested in studies examining relationships among biological phenomena at levels ranging from molecular to behavioral, were also mentioned frequently. Computational sciences, including mathematics and statistics as well as computer sciences, were another area where researchers reported substantial growth in interdisciplinary work. Aided by increasingly sophisticated computers, researchers with mathematical training were finding new opportunities to collaborate with those involved in various fields of empirical research.
Researchers often, but not always, associated interdisciplinarity with larger research groups and with centers constituted around interdisciplinary problems. But university departments themselves appear to be becoming more interdisciplinary. In a meeting of department chairs, for example, the study team was struck by how many chairs described their departments as spanning different disciplines and including faculty members with affiliations to research units in many parts of the university. Although some of these departments were explicitly constituted around an interdisciplinary vision, many represented "traditional" or "core" disciplines that had expanded their boundaries to include faculty members with backgrounds and affiliations that would not hitherto have been represented.
Access to the Literature
Although researchers were making continuing, and perhaps increasing, use of library services, they said that they rarely visited libraries any longer. Instead, they were using the electronic search capabilities and database subscriptions that their university libraries provide to find relevant literature. Researchers consequently were doing more targeted searching using key words, which has enabled them to access a broader range of directly relevant literature more efficiently. At the same time, they reported doing less undirected browsing, especially outside the journals with the highest visibility. Some lamented that their more targeted reading had led them to make fewer serendipitous discoveries in the journals. They also noted that older literature, when it is not covered in the electronic databases, effectively becomes inaccessible. Many researchers noted that the Internet has made the literature more accessible to them and increased their productivity as a result. But the study team was also told that time to read and think was becoming scarce and that electronic access neither reduced the importance of these activities nor meaningfully increased the efficiency with which they could be done.
For the most part, experienced researchers reported little or no change in the availability of capable subordinate personnel—graduate students, postdocs, and technicians. They expressed various, mostly tentative, generalizations about how younger researchers differed from the young researchers of their generation—more careerist, more interested in translating research into practical applications, less concerned with the big scientific questions, more interested in commercial activities, less characterized (especially if U.S. born) by a single-minded devotion to science. However, they almost never associated these differences with any substantial effect on output volume.
Equipment needs have increased in many fields. Researchers frequently complained that their own laboratories or universities lacked equipment that was necessary for their research, though they generally said they were able to get access to such equipment in shared facilities, either at their own universities or elsewhere. Start-up packages to enable new faculty members to equip a laboratory have become substantially more expensive in many fields.
How Research Is Disseminated
Journal publication has traditionally been the conventional way to disseminate research results and other significant scientific contributions. Although other outlets for dissemination, such as conference presentations, books and book chapters, and government reports, have also existed, scientists generally have looked to journal articles for reports of new findings by their colleagues.
Journal publication has also been the most important way for scientists to secure credit for their research contributions. Because journals, unlike some other publication outlets, publish articles only after expert reviewers conclude that the work is worthy of being published, publication signifies that an article has sufficient merit to survive the scrutiny of peer review.
Researchers interviewed for this study described numerous changes in the ways research findings are disseminated, many involving new ways in which journals do business in an era of electronic communication. Some of these changes challenge the centrality of journals to the dissemination of scientific findings. However, informants described almost no changes in how scientists accrue credit for their work. For assigning credit, unlike for dissemination, the centrality of journals is unchallenged and undiminished. The field of computer sciences, discussed separately below, is the only exception to this generalization.
The Peer-Reviewed Article as Indicator
The researchers interviewed strongly affirmed the continuing centrality of peer review to the evaluation of scientific work. None of them indicated that peer review had in recent years become less important to assigning scientists credit for their research. The study informants said that journals added value to research reports in large part by administering a peer review process. Accordingly, they viewed journal quality as mostly a function of the quality and rigor of the peer review process. Similarly, in discussing tenure and promotion, they stated that peer-reviewed research that appears in journals is the cornerstone of a candidate's dossier and that other kinds of research output are at most supplementary. In the rare disciplines that gave substantial weight to research reports disseminated in media other than journals, validation through peer review was an essential element of assigning credit.
More typically, according to study informants, because peer-reviewed journals were the route to scientific credit, scientists remained as committed as ever to reporting their significant findings in journals and nearly all significant contributions appeared in journals. This was true even in fields where additional modes of dissemination had become increasingly important and where versions of the research were initially disseminated through other modes. Thus, study informants agreed that article counts are at least as good an indicator of scientific productivity now as they were in the past and that any limitations of article counts in this regard have not become more significant in recent years.
Scientists continue to report research in books, book chapters, conference presentations, and annual review articles and in recent years have begun contributing to Web archives as well. Research reports in traditional paper media other than journals have generally not increased in importance, and there were some indications that they may have declined. Publications in books have suffered increasingly from lack of timeliness in keeping pace with the rapidly moving scientific frontier, especially in the biosciences. Opinions about review compendia were mixed: Some perceived them as more important in helping readers gain an overall perspective on a burgeoning literature, but others believed they suffered from the same timeliness problems as books and book chapters. Conference proceedings, except in the computer sciences (discussed below), were not seen as competing with journals for high quality material. Similarly, non-peer-reviewed papers posted on the Web, though significant in some fields as a means of disseminating new findings, were not considered a substitute for journal publications.
Advances in Information Technology
Although technology has not substantially altered the role of peer reviewed journals in the allocation of credit for scientific contributions, it has been changing the ways that researchers make their findings known. In interviews, researchers noted the following changes in journal operations:
Changes in the Publishing Business
Spurred in large measure by new cost-shifting and access-altering features of electronic dissemination, the journal publishing business has been changing. Many researchers noted that journals had become far more expensive, causing libraries to limit or reduce the number of journals to which they subscribe and publishers to adopt various strategies to maintain their library subscription income. The study team heard numerous complaints about commercial publishers and spoke with some proponents of open-access publishing, in which the author (or the funding agency that sponsored the author's research) pays the cost of publication. Study informants gave considerable evidence that the business model for scientific publication is in flux, but almost no evidence that recent developments had generally changed whether or when researchers chose to publish their work in journals. Although the ongoing changes in the publishing business have potentially far-reaching effects on scientific communication, none of the informants said these changes had in fact had a big impact on article and citation counts, and none offered ideas about why these changes would have produced any impact at all thus far.
Changes in Published Papers
Many study informants said the volume of published scientific journal articles had increased, with the numbers of journals, issues, and articles per journal issue all growing larger. As a result, they reported that, if they simply wished to report a finding somewhere, it was easier to get an article published than it had been in the past. Many said that the minimum required scientific contribution for articles in their fields (the "least publishable unit") remained essentially unchanged, although certain information, such as gene sequences, was no longer sufficient by itself to make an article publishable because technological and scientific advances had made it much easier to obtain.
At the same time, researchers reported that standards at the better journals—journals with the highest impact in a particular specialty and high-impact interdisciplinary journals—had risen. Many researchers said that publishing in these journals had become harder, and though a substantial number thought there had been no change, none said it had become easier. In this more competitive environment, several scientists claimed that these journals demanded revisions more often than in the past.
Many informants said the prestige premium attached to publishing in top journals had increased. In many fields, this was especially true for the two leading English-language interdisciplinary journals, Science and Nature. Researchers offered various explanations for the increasing prestige of these journals. These explanations included the "marquee" quality of journals that reached a wide audience in an era of increasing specialization and the growing importance of placing one's work in a context that highlighted its interdisciplinary implications and brought it to the attention of a diverse audience. However, informants said that successful careers could be built on a record of consistent publication in high-quality specialized journals.
Researchers said that at the top journals higher standards generally meant employing a greater range of experimental techniques and greater sophistication in the statistical analyses used to document a particular result. A few lamented that inferences from related theory and discussion of unresolved issues had become harder to include in articles, since these could be taken as indications that further experimental evidence was needed to provide definitive empirical tests of open questions; one scientist said that in her field more theory-laden discourse was moving out of the research journals and into review articles.
In many fields, a trend toward "letters" journals, which feature brief reports of significant findings, has partially or completely countered the trend toward longer, more comprehensive articles based on more experimental data. Like articles in Science or Nature, a report in a letters journal can serve as the "headline" announcing a research result and establish priority. Researchers said they would often follow these short articles with a more complete report in a specialty journal that provided the space for it.
The researchers the study team spoke with said that, although norms governing citation practices had not materially changed in recent years, actual practice had evolved somewhat in light of some new realities. They noted increasing tendencies to cite more accessible review articles rather than search for the original research on which the reviews were based, pointing out that this introduced additional inaccuracy in assessments that relied on citation analysis to measure the influence of original research. Another significant change was increased reliance on electronic searches for sources, which made findings that predated the major electronic databases effectively inaccessible and, therefore, rarely cited. In addition, some journals impose limits on the space they devote to citations, leading researchers to omit citations they consider less important.
Researchers in different disciplines saw the pace of change in research and publication processes differently. Chemists and social scientists mostly painted a picture of relatively gradual changes, whereas biologists and mathematicians were more likely to describe much more rapid developments. Yet, in most respects, researchers characterized the overall direction of scientific change in fairly similar terms.
Nonetheless, there are some fields in which special circumstances have caused research and publication to move in unique directions. Field-specific patterns and trends stood out in clinical medicine and computer sciences.
Clinical Medicine. Almost alone among those interviewed for the study, several researchers in clinical medicine said that they would not be surprised to discover that U.S. publication output was no longer growing and that U.S. science was losing ground to work done in Europe and Japan. Clinical researchers cited numerous problems that beset their field and that had not been solved by the infusion of federal research funds toward the end of the 1990s. There was broad agreement about the following developments:
According to study informants, the net effects of this set of changes were that research funds did not go as far and personnel were less able to focus on research and publication. They said that, on the whole, the circumstances for doing clinical research in other developed countries were more favorable and that the comparative advantages these countries enjoyed appeared to have been growing.
Computer Sciences. In computer sciences, proceedings of peer-reviewed invitational conferences and workshops are an important and valued publication outlet. Acceptance rates can be low for prestigious conference proceedings. Computer scientists reported that journals have proven to be too slow to keep pace with a rapidly changing discipline. Because of the premium on speed, revision and resubmission are rare. Funding agencies and central university administrations have learned to take into account the special features of publication in computer sciences in evaluating faculty research records.
The Institutional Environment
In deciding how to publish their work, researchers are attuned to the signals they receive from the institutions in the research community. These institutions include the following:
This section reports on the changes and continuities in the institutions that define the standards by which academic research is judged and that reward researchers for their accomplishments.
University Priorities and Measures
Universities generally do not use quantitative metrics to measure research output, especially when assessments are consequential. Formal measurement processes have made some inroads in shaping allocations of faculty salary increases, for example, but tenure and promotion decisions, which involve major, long-term commitments, are seen as irreducibly qualitative, especially at the most prestigious universities. The biggest exception to this generalization was the clinical track of one of the medical schools visited, and even in that case, the basic sciences track avoided formal metrics in assessing publication output.
In tenure and promotion decisions, universities rely heavily on letters from distinguished researchers assessing the distinctive contributions candidates have made to scholarship in their fields. According to study informants, these letters were no less important than they had been in the past, and some informants said they were more important. Others stressed the importance of a cogent, detailed letter from the department chair or tenure and promotion committee laying out for university administrators outside the candidate's department the reasons why the candidate's work was important. Although letters could invoke quantitative indicators to buttress a case for tenure or promotion, informants viewed the essence of the case as an argument about the quality and, especially, the impact of a researcher's work. One researcher stated that a letter that rested its case on quantitative indicators would be dismissed out of hand.
Informal quantitative standards did play a role in tenure and promotion decisions by setting approximate productivity thresholds below which a university would presume that a candidate was not competitive. Even these thresholds, however, were discussed as guidelines, and faculty members readily supplied anecdotes about individuals who received tenure or promotion because they were able to demonstrate major impact despite a relative paucity of publications.
The increasing importance of collaborative research has tended to underscore the importance of qualitative expert assessment. Faculty and administrators involved in making tenure and promotion decisions said that inferences from placements in authorship lists were no substitute for rich descriptions of the roles of individual scientists in collaborative efforts. Most study informants saw universities as increasingly confronting the problem of identifying the distinctive contributions of junior researchers to collaborative enterprises, but most saw this as a manageable issue that was unlikely to lead to substantial changes in how faculty were evaluated. Some noted that the premium on demonstrating a distinctive individual contribution to science made some junior researchers reluctant to engage in predominantly collaborative work, even when they believed they could be most productive in this way.
Because the significance of journal publications varies along a variety of dimensions—article quality, journal quality, role of any particular scientist in the group of authors, placement in the authorship list—article counts would require numerous adjustments before they could be used as a valid measure of research contributions. No one interviewed seriously proposed article counts as a viable way of evaluating faculty.
Although impact was portrayed as central, impact factors—quantitative measures of citation frequency using ISI data—elicited diverse reactions. One department chair said he initially had been enthusiastic about a quantitative measure of impact until he looked at the citation measures for his own work and concluded that some of his lesser work was among his most highly cited papers. Other researchers, however, expected that impact factors would become more important because computerized databases made them readily accessible without significant cost. Several informants also noted that impact factors and other quantitative indices became somewhat more salient as a tenure or promotion portfolio ascended to higher levels in the university and was reviewed by people more distant from the candidate's field.
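The citation measures discussed above can be made concrete. As a purely illustrative sketch (the report itself does not specify any computation, and the function name and data below are invented), the conventional ISI-style two-year journal impact factor for a year Y divides the citations received in Y by items the journal published in the two preceding years by the number of citable items published in those years:

```python
# Illustrative only: the ISI-style two-year journal impact factor.
# All journal data here are hypothetical.

def impact_factor(citations_received, citable_items, year):
    """citations_received[y]: citations received in `year` to items
    published in year y; citable_items[y]: citable items published
    in year y. Returns the two-year impact factor for `year`."""
    window = (year - 1, year - 2)
    cites = sum(citations_received.get(y, 0) for y in window)
    items = sum(citable_items.get(y, 0) for y in window)
    return cites / items if items else 0.0

# Hypothetical journal: 90 citable items in 2002-2003 drew 210
# citations during 2004, giving a 2004 impact factor of 210/90.
print(impact_factor({2002: 90, 2003: 120}, {2002: 50, 2003: 40}, 2004))
```

Note that this is a journal-level average; the department chair's anecdote above concerns citation counts for individual papers, which the same databases report but which are a distinct measure.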
Overwhelmingly, faculty and administrators believed that expert judgment, despite some risk of arbitrariness and selection bias, is superior to existing metrics. Some of the universities visited had instituted controls on the selection of experts to improve the objectivity of the evaluations produced by the process. Despite the effort and expense involved in securing expert evaluations, a university that had abandoned outside expert evaluation for making promotions within the ranks of full professors reinstituted the practice after concluding that such evaluations were essential to credible decisions.
For universities, the most salient quantitative measure of research activity is not publication counts or impact factors, but external research funding, especially when competitively obtained. Study informants reported that funding could be useful as a threshold measure, given that external support is necessary for sustaining an adequate level of research activity. One administrator said that funding levels, by providing an indication of competence in the "business side" of running a laboratory, complemented publication measures, which indicated proficiency in the "research side."
Researchers familiar with faculty evaluation in leading European and Asian countries reported that universities in those countries relied much more heavily on quantitative measures. One researcher recalled his shock at learning that universities in France and Israel determined whom to hire by using publication counts. Another faculty member marveled at how concerned postdocs from other countries were with the impact factors of the journals to which they considered submitting manuscripts. She attributed improved performance on quantitative indicators in these countries to the centrality of these indicators to performance assessment and faculty members' career prospects. Like practically all of the scientists who commented on this issue, she compared these quantitatively oriented systems unfavorably with the qualitative expert judgment evaluations prevalent in American universities. Perhaps the most supportive comment on strict adherence to quantitative standards for measuring publication output was the suggestion that rapidly improving university systems might have few alternatives to such standards. Thus one department chair argued that because senior faculty in these systems often lack both the ability to make reliable expert judgments on the quality and impact of the work of their junior colleagues and the wherewithal to elicit such judgments from international experts, quantitative standards might actually produce better results than expert judgment. He added, however, that American research universities did not find themselves in these circumstances.
At many universities, some researchers said their institutions were paying more attention to teaching, though not everyone agreed that this was a real change. There was overwhelming agreement, however, that incremental attention to teaching was not expected to come at the expense of research and that research was the decisive factor in faculty evaluations. Similarly, faculty members said that they met increased demands in other areas, such as proposal writing and administration, simply by working longer hours, since they could not afford to reduce the time they spent on research.
Because expert assessment of researchers' impact in their fields is so important to career success, faculty need to make themselves and their work visible to influential people who work on related intellectual problems. Study informants pointed to some modest incremental changes in the paths to visibility, but the study team heard nothing to suggest that published research played a substantially diminished role. At elite universities, informants mentioned the Nobel Prize and other competitive public prizes, prestigious awards from professional associations, membership in one of the U.S. National Academies (which recognize distinguished researchers in engineering, medicine, and the sciences), and major invited addresses at important national and international meetings as signs that a scientist's body of work had achieved truly significant recognition. In discussing how scientists achieved this level of recognition, senior researchers stressed the importance of what one researcher labeled "jewels"—articles that made decisive or path-breaking advances.
Many researchers also reported that the most widely recognized interdisciplinary journals, especially Science and Nature, had become more important vehicles for achieving visibility, particularly in fields where these journals publish substantial numbers of articles. Although the study team heard some skepticism about the enduring importance of some of the work that appeared in these journals and frequent reminders that space limitations in these journals precluded publication of important details of evidence or argument, scientists expressed little doubt that papers in these journals were widely noticed. Appearances at well-chosen conferences were viewed as another effective vehicle for achieving visibility, and some scientists, though by no means all, believed that conference attendance had become somewhat more important in recent years. They saw this as a response to an environment in which there were more journals being published, but fewer journals that most people in one's field could be counted on to read.
Quality and Quantity
When asked whether their universities had altered the relative value attached to quality and quantity in researchers' publication output, study informants generally said that U.S. universities had moved modestly in the direction of emphasizing quality. They characterized the change as modest in part because they perceived top universities as having consistently stressed quality throughout the period under study and in part because they perceived institutions less distinguished than their own as being less discriminating about quality and more interested in sheer numbers of publications. Several researchers noted recent changes at funding agencies that indicated a shift in emphasis toward quality. They said that the National Science Foundation and the Howard Hughes Medical Institute, by requiring grant proposals to cite a limited number of the PIs' "most significant" publications rather than include a complete list of their published work, had placed an increased premium on quality.
Researchers vary in how they balance quality and quantity in publications, and the institutional environment does not strongly enforce a particular balance. The study team heard no clear indications of a consensus that the balance was shifting toward one pole or the other, let alone that it was shifting decisively. At each university visited, researchers reported that some of their colleagues favored a smaller number of relatively comprehensive publications, whereas others tended to produce more frequent reports of more limited sets of findings. They portrayed the institutional system as sufficiently flexible to accommodate individual preferences in reporting styles within relatively wide limits.
At all of the universities visited, researchers reported that commercial application of research results was a more realistic possibility than in earlier times, that more faculty members and students were engaged in commercial activity, and that, for the most part, universities were seeking to make themselves more hospitable to faculty members who try to pursue commercial possibilities emanating from their work. The growing prominence of commercial possibilities in biomedical research was especially pronounced. Otherwise, fields that traditionally have been removed from significant commercial activity, such as basic social sciences and astronomy, remained so, while fields that more readily translated into technological applications, such as engineering and chemistry, continued to stress the commercial implications of academic research.
Very few study informants, however, thought that commercially oriented activity had significantly reduced the amount of publication-oriented research. Most reported that faculty colleagues who had gotten involved with start-up companies had continued to publish. They noted that these researchers tended to be very active and innovative, so that their commercial activity was more an addition to their academic research than a replacement for it. In addition, commercial involvements sometimes enriched the published work of faculty researchers, involving them in new areas of research. Many people observed that awareness of the commercial potential of research sometimes prompted brief delays in publication, but they generally doubted that these delays caused an overall reduction in publication.
Faculty commercial activity has hidden costs, however. One is the overall administrative infrastructure necessary to support it. Thus one department chair said that although faculty affiliated with companies in his department continued to perform and publish research, management activity traceable to their dual affiliations took a disproportionate amount of his time as department chair. An important element of this management activity is making legal arrangements that are necessary for faculty commercial activity. Participants in a group meeting of biomedical department chairs said that because the commercial and proprietary implications of sharing research materials loomed larger than in the past, researchers were required to spend much more time arranging materials transfer agreements; other researchers interviewed echoed this theme.
Some informants spoke about adverse effects on academic research brought about by the decline of large industrial research organizations, such as Bell Laboratories, that engaged in long-term research not specifically tied to product development. They claimed that such organizations had formerly been sources of valuable information for academic researchers about the kinds of findings that might have promising industrial applications. In addition, they said that scientists and engineers in these organizations had served as liaisons between academia and industry, explaining to corporate decisionmakers why it is important to support university-based research that does not promise immediate benefits.
At the same time, some informants pointed to a tendency for academic researchers to engage in more studies oriented toward practical problems and real-world settings. They said that scientists had developed a greater appreciation for the value of studying phenomena in natural settings, similar to those in which proposed interventions would occur, rather than in more controlled contexts. Such research was often characterized as also more interdisciplinary and applied, though not necessarily commercial in orientation. Other researchers lamented what they saw as an increasingly engineering-oriented ethos in what traditionally had been basic science disciplines.
Those interviewed repeatedly told the study team that researchers were putting much more effort into securing funding than they had in the past. Faculty members perceived success rates as lower and grant sizes as less adequate to the costs of research. To sustain an ongoing research operation in an uncertain environment, researchers said they needed to write more proposals to ensure that they would secure sufficient funds. Similarly, short grant duration meant more frequent attempts to secure funding.
The researchers interviewed saw funding agencies as increasingly favoring larger, often interdisciplinary, center-type awards. They expressed a range of opinions about this trend. Some argued that it was largely a matter of fashion and that the "real" creative contributions in these larger enterprises occurred in relatively small collaborative units. In their view, these larger awards more often created superfluous overhead than the synergies that greater size and internal diversity are supposed to produce. Others argued that changing emphases in the funding agencies primarily reflected changes in the locus of research opportunities. They believed that the organization of research had needed to change accordingly to exploit these opportunities most effectively. In their view, for these larger projects, whether administered through newly created centers or existing administrative units, researchers would need to be prepared to incur both the increased administrative costs of building more complex and formal research organizations and the increased communication costs of cultivating shared research agendas among scientists from different disciplinary backgrounds.
Study informants stated that the movement toward larger research teams and interdisciplinary grants increased the costs of acquiring funding. Preparation of competitive proposals for these larger awards often necessitated developing research tools, protocols, and extensive preliminary data to demonstrate the viability of an approach. Researchers claimed that putting together the infrastructure for a major interdisciplinary effort also required extensive communication among collaborators who, even at the proposal stage, needed to develop a common intellectual vision and plan of work. Unlike proposals that aim to fund the ongoing research program of a single laboratory, these proposals involve more than the continuation of existing relationships. In addition, the planned organizational endeavors are large enough that they are not apt to be viable, even at a reduced scale, in the absence of funding, so that time and energy invested in an ultimately unsuccessful proposal might well lead nowhere. In contrast, when proposals for small grants involving one or two PIs are unsuccessful in their original form, they are more susceptible to various kinds of adaptation and reuse.
Many researchers perceived the funding agencies as increasingly risk averse, particularly when money was tight and success rates low. Funding agencies were also criticized as increasingly directive, in part because they were seen as driven more by internal priorities for research than by perspectives drawn from the wider research community. At the same time, some researchers said that funding agency staff turned over more frequently, making the agencies less able to sustain commitments and provide stable, long-term funding. Especially at the most prestigious universities, some researchers argued that federal efforts to distribute research funds more widely had adversely affected the quality of research.
In characterizing trends in U.S. funding patterns, several researchers familiar with research investments in other developed countries made suggestive comparisons. More than one researcher commented on the rigidity of European Union efforts to mandate multicountry, multidisciplinary research and saw U.S. efforts to encourage multidisciplinarity as more flexible. Likewise, some researchers praised the historic diversity of funding sources in the United States, which they saw as enabling varied perspectives and approaches to flourish. Many informants said that U.S. science was more competitive and open to new ideas than the European and Japanese systems, where researchers in select hierarchical positions were perceived to dominate funding. Some observers, however, claimed that European and Japanese efforts to open their systems to competition were bearing fruit, and even suggested that U.S. science might be suffering from an inability to plan effectively in an unstable, highly competitive funding environment. No clear message emerged from study interviews as to how the funding process could appropriately balance security and risk or whether other countries were moving to balance these better than the United States.
Many researchers reported that they spent significantly more time complying with government regulations than they had in the past. Depending on their fields of study, researchers cited different regulatory requirements as consequential. Health-related fields were perhaps the most strongly affected by compliance activities. Researchers mentioned patient privacy regulations, rules governing PIs' financial conflicts of interest, institutional review boards for the protection of human research subjects, and animal care and use standards as all contributing to added time for compliance; many of these rules also affect research outside the medical area. In other fields, additional regulations were salient. Twice, for example, researchers who use satellites in their work mentioned increasingly stringent interpretations of security provisions in the International Traffic in Arms Regulations as having made it harder for them to work with international collaborators.
In general, funding agencies have moved toward greater accountability and reporting requirements. Insofar as the agencies have emphasized larger grants to newly developed teams, they have taken a more active monitoring role to protect their investments.
Although senior faculty agreed that some work related to government regulation can readily be delegated to administrative personnel specializing in compliance who are not trained as researchers, they insisted that much of it requires an in-depth, substantive understanding of research and defies delegation. In addition, many claimed that federal agencies had become more reluctant to allocate grant funds to pay for administrative support, with the result that researchers had little alternative but to do necessary administrative work themselves. The study team heard this claim most often at an institution that had suffered adverse public comment more than a decade ago for alleged laxity in complying with federal rules for using facilities and administration funds. But the team also heard this at other universities, which had no special reasons for sensitivity to this issue.
Some of the increased administrative workload is probably better characterized as a matter of governance than of compliance. At several universities informants reported that faculty members had become more active in fashioning the university's approaches to governance issues than had been the case in the 1970s or early 1980s. Although involvement in utilization committees at hospitals or committees to set campus affirmative action policies may not be strictly a function of regulatory requirements, some informants characterized it as reflecting an environment in which faculty perceive administrative decisions as consequential and seek an active role in them.
The Global Environment
The role of U.S. research in the world's S&E journals is only partly a function of developments within the United States itself. Changes in research capacity in other countries are also relevant. This section discusses informants' perceptions of changing patterns of competition and cooperation between the U.S. research community and the research enterprise elsewhere around the world.
Almost all of the researchers interviewed said that research capacity in the major European countries, Japan, and the emerging Asian countries was improving substantially. Informants varied as to which individual countries they named and precisely how highly they rated them relative to the United States. In general, the United Kingdom, France, and Germany were most often mentioned as at or near parity with the United States; Japan was also often rated very highly, but occasionally was characterized as having failed to make significant strides. China and, to a lesser extent, India were seen as producing a substantially increased volume of reputable research, although neither was viewed as currently a competitor for world leadership. Smaller East Asian entities, including Singapore, South Korea, and Taiwan, were also mentioned as having substantially increased and improved their scientific output. In most large fields researchers viewed the United States as continuing to produce a greater volume of high-quality research than other parts of the world, but said the gap between the United States and other leading countries was shrinking.
Study informants cited various indicators of scientific improvement elsewhere in the world. A biomedical researcher claimed that U.S. scientists had been featured at the European meetings he attended in the early 1990s, but that Europeans now dominated these meetings. Similarly, a biochemist said that although 20 years ago one might have been able to ignore work performed abroad in her field, doing so would be unthinkable today. Scientists across the range of disciplines noted the increased presence in the major journals of articles by researchers at foreign institutions. At one university, the chief academic officer observed that letters from foreign scientists were forming a growing part of tenure and promotion files.
Informants pointed to several reasons why other countries' research contributions had improved, citing both increased resources available for research and institutional changes that facilitate effective use of those resources.
Almost all of the researchers interviewed agreed that research had become more global and less national in the period spanning 1988 to 2003; a few said it had been thoroughly global at the beginning of that period. One scientist characterized her colleague network as entirely international, and another said that distinctively national intellectual traditions were disappearing in his field. However, the general movement toward globalization in the research community had not erased differences among fields in the degree of globalization.
One indicator of globalization is the changing role of the English language. Already the dominant language of scientific discourse in the 1980s, English has become essentially universal since that time. Study informants reported that major international conferences were routinely held in English, regardless of whether English was the language of the host country. Leading journals on the European continent have moved toward publishing in English.
The trend toward more collaborative research includes more collaboration across national boundaries. Facilitated by better electronic communication, more frequent international meetings, easier international travel, and, in many cases, by government programs and incentives, this trend is evident in increased numbers of internationally coauthored articles. Likewise, administrators at one university observed that their institution had experienced a substantial increase in subawards to institutions in other developed countries.
Globalization also affects personnel. Graduate students and postdocs from other countries have become more common at U.S. universities. The job market for faculty members at all ranks has likewise become more international, though more so in some fields than in others and more at senior levels than junior ones.