4. What Was EPSCoR’s Funding Practice?

EPSCoR was charged with the mission of “improving” the quality of science, not just funding high-quality science. Similarly, EPSCoR’s mission was to increase research “competitiveness,” not just to fund research that was already competitive. To achieve these aims, EPSCoR’s strategy was to fund proposals that were judged “good” or “very good”--but not “excellent”--under the peer review process. The expectation was that the research experience gained would help scientists become more competitive in obtaining external funding support in the future.

An important part of the evaluation was therefore to verify that EPSCoR did indeed implement this atypical funding policy. The EPSCoR model assumed that the funded research projects were just short of being nationally competitive. It was further assumed that the resulting enhancement in research competitiveness would produce increased R&D funding. Findings on these issues follow.
Evaluation of EPSCoR Proposals
The evaluation analyzed a sample of 48 EPSCoR proposals from 10 states for the period 1991-1992.4 These proposals had been peer-reviewed using NSF’s traditional proposal rating system of 5=excellent, 4=very good, 3=good, 2=fair, and 1=poor. To compute a mean reviewer score for each proposal, the responses of approximately five reviewers were averaged. The proposals were then classified into three categories: those with mean scores in the excellent range (Category 1), those in the good-to-very-good range (Category 2), and those below the good range (Category 3).
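As a concrete illustration of this scoring step, the sketch below averages one proposal’s reviewer ratings and assigns a category. The rating scale (5=excellent through 1=poor) is taken from the text; the numeric category cutoffs are hypothetical placeholders, since the report’s exact category boundaries are not reproduced here.

```python
# Illustrative sketch of the proposal-scoring step. The 5-point NSF rating
# scale is from the report; the cutoff values below are ASSUMED for
# illustration, not the evaluation's actual boundaries.

def mean_score(ratings):
    """Average the individual reviewer ratings for one proposal."""
    return sum(ratings) / len(ratings)

def categorize(score, excellent_cutoff=4.5, good_cutoff=3.0):
    """Assign a proposal to one of three categories (cutoffs assumed)."""
    if score >= excellent_cutoff:
        return 1   # rated excellent -> should be excluded from EPSCoR
    if score >= good_cutoff:
        return 2   # good to very good -> the intended EPSCoR pool
    return 3       # below good -> should be excluded

reviews = [4, 4, 3, 4, 3]                # one proposal's five reviewer ratings
print(categorize(mean_score(reviews)))   # -> 2 (mean of 3.6, good to very good)
```

Note that the interesting wrinkle reported below--reviewers meaning two different things by “excellent”--is precisely what such a purely numeric rule cannot capture.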
The evaluation compared the actual award decisions with the funding decisions that would have been predicted from the preceding criteria. Critically, the evaluation also examined reviewers’ comments about their ratings.
Scientifically Excellent Proposals--Nearly 10 Percent of Those Reviewed--Were Rightfully Excluded from EPSCoR Program Funding
Had funding decisions been based solely on the peer reviewers’ scores, the 11 proposals (of the 48) that fell into Category 1 should have been excluded, the 34 in Category 2 should have been funded, and the 3 in Category 3 should have been excluded. In fact, only 2 of the 11 proposals in Category 1 were excluded, while 2 of the 34 proposals in Category 2 were incorrectly excluded.
Examination of the reviewers’ explanations of their ratings revealed that the reviewers were using two different definitions of the “excellent” criterion. That is, the EPSCoR reviewers rated a proposal as excellent either because they considered it to be “scientifically excellent” (in which case the proposal should have been excluded) or because they considered it an “excellent fit” for the EPSCoR program (in which case it should not have been excluded). When these critical differences in meaning were taken into account, only 4 of the 11 proposals originally classified as excellent were found to have been judged excellent in the sense of scientific excellence. Further, the 2 “good-to-very-good” proposals that were excluded had been downgraded by reviewers who judged the proposed work to be of such high scientific quality that it was not a good fit for the EPSCoR program. In effect, the actual funding decisions were much closer to the predicted pattern than the raw peer reviewer scores indicated.
Overall, EPSCoR had excluded four scientifically excellent proposals--the top cohort, or nearly ten percent, of the 48 proposals. To the extent that this proportion held in other EPSCoR competitions, the evaluation concluded that the EPSCoR program did implement its program mandate--“to stimulate competitive research”--rather than fund already-competitive research. Further, the program has had to accomplish its goals--whether in terms of increased share of R&D funds (discussed next) or improvements in scientific competitiveness (discussed later)--without the benefit of the most competitive and outstanding cohort of proposals from the EPSCoR states.5
5. Was EPSCoR Associated with Changes in Academic R&D Funding?

To determine whether the EPSCoR program had achieved its primary objective of reducing the undue geographic concentration of R&D funds, the evaluation had to establish whether the EPSCoR states had increased their share of federal academic R&D funding, as reflected in annual data on R&D expenditures. Also of interest was whether the EPSCoR states had increased their share of NSF R&D funding; state-by-state data for NSF, however, are reported only for R&D obligations.
Since the end of World War II, most of the academic R&D funding has come from the federal sector, although this federal share declined from about 68 to 60 percent between 1980 and 1995 (see Exhibit 2).
Also, the NSF share of federal R&D obligations, historically, has been a distant second to that of the National Institutes of Health (NIH) and has declined somewhat over the past 20 years--from about 20 percent to about 17 percent (see Exhibit 3).
For the EPSCoR program, the main findings were as follows.
EPSCoR States Increased Their Share of R&D Funding
The EPSCoR states increased their aggregate share of federal academic R&D awards from 0.25 percent or $10.1 million per state in 1980 to 0.40 percent or $50.5 million per state in 1994. A “per state” unit of analysis was used to assess changes in R&D because the number of states participating in the EPSCoR program changed over time, from 5 to 19 (18 states and Puerto Rico), as new cohorts of states were added to the program in 1988, 1990, and 1992.
Overall, the EPSCoR states’ share of federal academic R&D funding reached 7.65 percent, or $960 million, by 1994 (see Exhibit 4). The observed increase varied by cohort, with the oldest EPSCoR cohorts showing the largest increases in share and the newer cohorts showing smaller ones.
For NSF funding, the EPSCoR states’ share rose from 0.26 to 0.34 percent per state between 1980 and 1994, ending with an overall share of 6.53 percent. Again, on a per-state basis, the increase was found for every cohort (see Exhibit 5). However, unlike the federal expenditures data, the NSF obligations data showed no relationship between cohort age and funding gains.
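The per-state figures cited above can be cross-checked with simple arithmetic: the per-state value is the aggregate EPSCoR-state figure divided by the number of participating states. A minimal sketch, using the 1994 figures from the text:

```python
# Back-of-the-envelope check of the per-state figures in the text.
# The "per state" unit is the aggregate EPSCoR-state figure divided by
# the number of participating states (19 by 1994, counting Puerto Rico).

n_states = 19
aggregate_share_pct = 7.65     # EPSCoR states' share of federal academic R&D, 1994
aggregate_dollars_m = 960.0    # millions of dollars, 1994

per_state_share = aggregate_share_pct / n_states
per_state_dollars = aggregate_dollars_m / n_states

print(round(per_state_share, 2))    # close to the 0.40 percent per state cited
print(round(per_state_dollars, 1))  # close to the $50.5 million per state cited
```

The per-state unit matters because, as noted above, the number of participating states grew from 5 to 19 over the period; an aggregate comparison alone would conflate program growth with funding gains.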
Based on the observed changes in federal and NSF shares, it can be concluded that the EPSCoR states’ share of R&D funding did increase relative to the shares of the other states. To this extent, EPSCoR was associated with a lessening of the undue geographic concentration of R&D in the United States. Although the changes were small in absolute terms, this was a notable accomplishment in an era when research universities in non-EPSCoR states also were thriving and upgrading substantially.
For instance, Carnegie rankings of universities’ research capabilities are based on the volume of doctoral degrees at a university. These rankings showed that, during the same 1980-1994 period, the number of universities in non-EPSCoR states holding the top Carnegie ranking (“Research I” universities) doubled. In other words, EPSCoR’s relative increase in R&D share occurred in a highly competitive environment in which non-EPSCoR universities were dramatically expanding their research capabilities.
Alternative Explanations for EPSCoR’s Increased R&D Share Are Not Supported
One might argue that the observed increases in R&D funding in the EPSCoR states could be attributable to conditions other than the EPSCoR program.
An initial consideration was whether the increase in R&D funding was associated with growth in the population, number of students, or even the number of research investigators in the EPSCoR states. Many of the EPSCoR states have relatively small populations or are located in regions undergoing rapid population growth (e.g., Sunbelt states). Consequently, population growth--accompanied by presumed growth in students and faculty--could partially account for the increased share in R&D funding.
Examination of this possibility revealed that the average (aggregate) rate of population growth in the EPSCoR states was actually less than that of the non-EPSCoR states during the 1980-1994 period (9 versus 18 percent, respectively). Similarly, within the EPSCoR states as a group, there was no correlation between increases in the EPSCoR states’ R&D share and increases in their population. Thus, changes in population could not account for the increases in R&D share.
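The “no correlation” check described above can be illustrated with a small Pearson correlation computation. The state-level values below are hypothetical placeholders; the report’s underlying state data are not reproduced here.

```python
# Sketch of a correlation check like the one described in the text:
# Pearson's r between population growth and change in R&D share.
# The state values below are HYPOTHETICAL, for illustration only.

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-state values: percent population growth, 1980-1994,
# and percentage-point change in R&D share over the same period.
pop_growth   = [5, 12, 9, 3, 15]
share_change = [0.04, 0.01, 0.06, 0.03, 0.02]

print(round(pearson_r(pop_growth, share_change), 2))  # near zero (or negative)
# would indicate population growth does not explain the share gains
```

An r near zero across the EPSCoR states is what the evaluation’s finding implies; a strong positive r would instead have supported the population-growth explanation.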
A second consideration was whether a few of the EPSCoR states, rather than the program as a whole, had accounted for most of the increase in R&D share. This possibility reflects the conjecture that the program’s apparent gain in share was driven by a few successful states whose unique circumstances--not the EPSCoR program--accounted for the results. The histogram in Exhibit 6 shows the distribution of share changes (in percent, between 1980 and 1994) for each of the EPSCoR states compared to the nation as a whole and to the non-EPSCoR states. Exhibit 6 fails to support the conjecture: over half of the EPSCoR states (10 of 19) exceeded the average share change for the nation--and the average share change of the non-EPSCoR states as well.
In sum, the lack of support for these two considerations provides added confidence that the increase in R&D share was a genuine outcome.
6. To What Extent Did EPSCoR Influence University-Government-Industry Relationships, University Research Infrastructures, and Research Competitiveness?

In addressing whether EPSCoR’s strategies influenced state relationships, university research capability, and scientific competitiveness, the evaluation had to consider 1) the nature of a federal-state government partnership program as well as 2) the influence of state-level factors on the EPSCoR strategies used in individual states. Moreover, each state had its own unique experience with the EPSCoR program.
Reported below are findings drawn from five representative EPSCoR states on the possible causal linkages between EPSCoR’s program strategies and the previously discussed R&D funding increases, as conditioned by state-level influences.
State-based EPSCoR Steering Committees Helped to Set R&D Priorities, Linking State Interests with Strengthened Within-State Peer Review Processes
The EPSCoR program required each participating state to form a steering committee that represented key organizations and sectors within the state. From the beginning, NSF signaled the importance of these committees, insisting on participation by a diversity of members from academe, industry, and state government--including a state’s best and most senior scientific and technological personnel (Drew, 1985).
Each committee generally had about 12 members, and the committee’s first responsibility was to apply for and implement the initial planning award from NSF. The committee also assumed a pre-review function over the research projects proposed to NSF as part of a state’s overall proposal and, in so doing, established preliminary peer review processes engaging out-of-state experts. In some states, this was the first occasion for using external experts on a large scale. Such innovation in itself created a new environment and one foundation for increased competitiveness.
Through these steering committees--which became forums for S&T dialogues, planning, and the development of new R&D initiatives in science and technology--EPSCoR influenced the states’ S&T environments. This influence was commonly demonstrated in the opening of communication between state university research officials and a state’s government and industry representatives. One result was the identification of research priorities that combined state universities’ strengths with the unique opportunities offered by a state’s natural and institutional environments.
The formation of the steering committees was therefore one of the most important features of the EPSCoR program. These committees could influence a state to create or alter its strategic S&T plan, to establish formal state S&T agencies, and to develop more formal relationships among state universities, federal and state-supported research laboratories, and industry. In short, variations of the following account (derived from a report on one of the five states the evaluation visited) were found across EPSCoR states:
State Matching Requirements Helped to Tighten the Connection between EPSCoR and State Priorities
As a direct reflection of EPSCoR’s federal-state partnership strategy, EPSCoR imposed a $1-to-$1 matching requirement that could be satisfied only by newly appropriated funds, not in-kind matches (Feller, 1997; Drew, 1985). Over the years, and for any given state, this requirement sometimes posed a great challenge, especially under fiscally constrained conditions. On several occasions, universities participating in a state’s EPSCoR program had to draw on institutional funds to augment state-appropriated funds to meet the state’s matching requirement in its proposal to NSF.
The matching requirement, however, had its intended effect of forcing within-state dialogue over R&D priorities. That is, the matching requirement compelled universities to come together to seek state support, which in turn influenced the development of common research agendas and collaborative, cross-university initiatives. The dialogue over state R&D priorities effectively forced EPSCoR-funded research projects to work within the larger S&T environment of a state, leading to closer integration among universities, state government, and industry. Such communication and collaboration may be contrasted with an oft-asserted posture of the scientific community, which, according to one EPSCoR project director, has been “detached from the people who pay its bills--Congress, state legislators, and, ultimately, the public” (Strobel, 1996).
States’ Science and Technology Capabilities Increased-- Stimulated in Part by EPSCoR’s Strategies
Also important in the capacity-building process was the creation of state science and technology policies, investments, and support that promoted basic research. The evaluation investigated EPSCoR’s influence on the capacity-building process through site visits to the same five representative EPSCoR states. The findings were as follows.
The overall impression of the evaluation team was that the five states did not yet have the rich and diverse array of S&T capabilities found in non-EPSCoR states. But significant portions of what had been put into place could be attributed to the EPSCoR program.
EPSCoR Enhanced Some Aspects of Universities’ Increased Research Orientation
University-wide actions and policies directly reflect a state’s research infrastructure and capability and, therefore, research competitiveness. Through site visits to 14 universities within the five representative EPSCoR states, the evaluation found that EPSCoR had enhanced some aspects of this infrastructure and capability. In large part, EPSCoR had enhanced participating universities’ orientation toward research by fostering cross-university collaborations (as discussed previously in connection with the EPSCoR steering committees). Single institutions with limited facilities and small faculties could combine forces in assembling a critical mass of research faculty, technical personnel, and facilities to compete for major federal research awards. This resulted, for example, in the development of interdisciplinary research centers and research teams. EPSCoR funding also helped to create new science faculty positions and laboratories that helped some universities shift from an exclusive emphasis on undergraduate teaching to one that also included the conduct of basic research.
Further, the size and prestige of EPSCoR’s awards could have had an impact, varying from new research endeavors to the introduction of more rigorous standards of peer review to changes in universities’ research cultures. As put by one vice-president for research, “EPSCoR was about everything the university should have been about: capacity, partnership, and long-term vision.” Across universities, the impact showed up in specific research endeavors, including:
EPSCoR support also was used to increase faculty startup packages, which helped state universities compete for and attract top faculty in the sciences. Packages of sufficient size to attract leading young faculty were not widely offered in the EPSCoR states prior to EPSCoR funding. The evaluation found that EPSCoR was associated with increases in the packages at 6 of the 11 universities for which information was available (the other 3 of the 14 provided no information), leading to the later hiring of a strong group of research-oriented faculty.
Little Evidence Exists that EPSCoR Initiatives Influenced University Research Capabilities in Other Ways
EPSCoR’s influence on other aspects of internal change in the participating universities was modest or so interwoven with other influences that it could not be assigned anything other than a supportive or reinforcing role. EPSCoR’s presence was concurrent with several significant shifts in internal university practices and policies linked to the promotion of research. However, little evidence exists that EPSCoR’s initiatives directly caused these changes, four of which are outlined below.
First, vice presidents for research existed or emerged in 6 of 14 universities (one university provided no information, and one had a director of research and economic development who reported to the president). Research visibility, influence, and investments benefit when a university’s lead research officer reports directly to the university’s president. The evaluation tried to determine whether EPSCoR had played a part in the upgrading of senior research administration positions at the EPSCoR states’ universities, finding that EPSCoR could be associated with only one such upgrading: an assistant vice-president had been promoted to vice-president.
Second, a practice found at some research-intensive universities is for the central administration to allocate a portion of the institution’s indirect cost recovery funds to research investigators or departments that receive external research awards, as an additional incentive for seeking such awards. Among the EPSCoR states, the 14 universities showed highly varied patterns of allocation policies, with EPSCoR having little or no influence on these policies.
Third, of the 14 universities, only 2 had the minimum course loads typically found at research-intensive universities, and 4 others had provisions for “buying out” teaching time to reach that minimum (no information was available for 2 of the universities). Many research-intensive universities require the teaching of only one course, with an added allowance if the course has a high enrollment. EPSCoR had little or no influence on the emergence of such policies at the 14 universities.
Fourth, the evaluation found that the 14 universities did not have very large centralized sponsored research offices (SROs), and only a few used a decentralized system. (At research-oriented universities, well-tooled and efficient SROs or well-staffed decentralized systems are usually present.) EPSCoR may have influenced the emergence of one of the SROs.
As would be expected given the five states’ eligibility for the EPSCoR program, the visited universities were not major performers of academic research. Only 2 of the 14 universities were classified as “Research I” universities in the 1994 Carnegie rankings (5 were Research II, 6 were Doctoral I or II, and 1 was Master’s I).7
EPSCoR-Supported Research Showed Evidence of Scientific Productivity, and Hence Competitiveness
The evaluation examined whether EPSCoR-supported researchers showed evidence of increased scientific productivity and thus, presumably, increased competitiveness for research awards over time. Establishing evidence of increased scientific competitiveness would make plausible a link between improved university research capabilities and the observed increases in shares of R&D funding in the EPSCoR states (Stigler, 1994; Feller, 1996).
The scientific competitiveness of EPSCoR-supported research, however, was impossible to assess directly without intensive peer review on a project-by-project basis. Consequently, the evaluation looked instead at the “productivity” of EPSCoR-supported research, defined as scientific investigations that produced 1) scientific publications and 2) funding from external sources following the EPSCoR award. The evaluation examined the number of scientific publications and the amount of external funding that the EPSCoR principal investigators of 86 research clusters8 (the entire universe funded between 1992 and 1996) claimed as outgrowths of their EPSCoR funding. However, data to establish normative standards for interpreting the productivity of the EPSCoR-supported clusters were not obtainable. As a result, the descriptive data presented below serve primarily to establish the plausibility of the assertion that the EPSCoR-funded research clusters were scientifically productive.
Exhibits 7 and 8 show the frequency of scientific publications and of subsequent new awards for these research clusters. Both exhibits show high numbers of publications or award dollars and, more important, an even distribution across the clusters, showing that these accomplishments were not limited to a small number of clusters. For instance, over two-thirds of the funded clusters (52 of 76) had 20 or more publications emanating from their EPSCoR-supported work, and 70 percent (49 of 70) had $1 million or more in new awards.
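The cluster-productivity percentages above amount to counting the fraction of clusters at or above a threshold. A minimal sketch of that tally, using hypothetical per-cluster publication counts (the thresholds--20 publications and $1 million--are the ones used in the text):

```python
# Illustrative tally behind the cluster-productivity percentages.
# The per-cluster publication counts below are HYPOTHETICAL; the
# threshold of 20 publications is the one used in the text.

def share_at_or_above(values, threshold):
    """Fraction of clusters whose value meets or exceeds the threshold."""
    return sum(1 for v in values if v >= threshold) / len(values)

pubs_per_cluster = [25, 8, 30, 22, 12, 41, 19, 20]    # hypothetical counts
print(share_at_or_above(pubs_per_cluster, 20))        # -> 0.625 (5 of 8 clusters)
```

Applied to the reported figures, 52 of 76 clusters gives about 68 percent, consistent with the “over two-thirds” characterization in the text.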
The publication dates of the sampled publications were consistent with research that could have been conducted with EPSCoR funds, and the research they reported pertained to the scientific topics funded by EPSCoR. As for the continuation funding, there was roughly a 2:1 ratio of external to EPSCoR dollars--again consistent with the suggestion that the EPSCoR clusters were scientifically productive. Thus, it is plausible to conclude that EPSCoR’s influence on enhancing university research capabilities led to more productive scientific research, which in turn was sufficient to compete for, and win, additional R&D funding.
6 To retain the anonymity of individuals interviewed, the identity of the state is not cited.
7 Because the Carnegie rankings are based on the volume of doctoral degrees, it is worth noting that 10 of the 14 universities had graduate enrollments of 3,000 or more, and the 2 smallest universities had enrollments of about 1,700 students each.
8 EPSCoR urged states to propose groups of related research projects (“research clusters”) to encourage capacity building and interdisciplinary research, in contrast to totally independent research projects. The clusters could be of differing sizes--e.g., covering from 3 to 8 related projects. Funding decisions would then be made for a cluster in its entirety, and the totality of the clusters as well as other related educational components would then comprise the EPSCoR award to an individual state.