Strategic research partnerships (SRPs) have grown not only in number during the 1980s and 1990s (Link, 1996; Mowery, Oxley, and Silverman, 1996; Okubo and Sjoberg, 2000) but also in importance in research and development (R&D) policy circles. They are widely perceived to be important to economic growth and to the productivity of industrial research. But has reality kept up with perception? In fact, we have so little information about SRPs that most of our assumptions about their impact on the U.S. economy are little more than anecdote or folk wisdom, or rest on research that employs diverse indicators and unclear conceptualizations (Kleinknecht and Reijnen, 1992).
Until we have a set of indicators based on valid constructs, it will remain difficult to determine the importance of strategic research partnerships for the U.S. economy and for the enterprise of science and technology in general. For example, absent a useful conceptualization we do not know whether SRPs substitute for or add to companies' own R&D (Katsoulacos and Ulph, 1997; Mowery, Oxley, and Silverman, 1996). Without knowing more about the construct itself, we have very little idea of the extent to which formal research consortia are productive compared with traditional informal collaborations (Faulkner and Senker, 1994; Smith, Dickson, and Smith, 1991; Freeman, 1991; Schrader, 1991) among peers at multiple sites who happen to be affiliated with multiple organizations.
The purpose of this paper is to identify the characteristics of a desirable policy indicator for strategic research partnerships. Rather than reviewing the literature, we consider the analytical problems posed and the context of the general needs for any R&D policy indicator. We ask the question, "What are the specific requirements of a strategic research partnership indicator?"
As is so often the case, the chief stumbling block for policy analysis is conceptual as much as empirical (Gassmann and von Zedtwitz, 1999; Katz and Martin, 1997). We can begin by asking what the minimal requirement is for a concept even to relate to the notion of a strategic research partnership. Given our interest in industrial productivity, we assume that an SRP requires at least one of the partners to be a commercial entity. Further, we assume that a single corporation acting alone does not constitute an SRP, nor does collaboration among the divisions of a single firm or among persons working within that firm. We take seriously the term "partnership" as implying that all of the entities involved in the social configuration of the research collaboration are in fact contributing in one manner or another. Similarly, we take seriously the notion of "strategic." Strategic seems to us to imply that the work undertaken is viewed as part of the firm's strategic direction, including product development, commercialization goals (Pisano, 1991), precompetitive R&D (Quintas and Guy, 1995; Faulkner and Senker, 1994), and so forth. Interestingly, one of the more difficult demarcations in the notion of SRPs is the very notion of research. As is well known in studies of industrial research, many activities lie on the borderline of what we might call research, including process engineering, technical assistance, equipment development, and the development of algorithms to be used in internal company production processes. While we are inclined to take a broad view of the notion of research, at least for present purposes, it is worth noting that there is a gray area of technical activity that one may or may not view as research.
Having considered some of the most elemental questions of strategic research partnerships, we move here to develop a tentative working definition that will suffice at least for the purposes of identifying a concept sufficiently sharp to permit an analysis of indicators appropriate to it. Our approach involves distilling concepts employed in the literature and borrowing from a variety of typologies that have already served some of the same analytical objectives.
At the broadest level, Gulati (1995) informs us that strategic interfirm alliances "encompass a variety of agreements whereby two or more firms agree to pool their resources to pursue specific market opportunities" (p. 86). More specific to research, Hagedoorn, Link, and Vonortas (2000) define SRPs as "an innovative relationship that involves, at least partly, a significant effort in research and development..." (p. 1). This perspective places the SRP definition within the broader notion of strategic alliances and recognizes that R&D may be one element (albeit an important one) embedded within a larger strategic framework; thus, it may be difficult or unwise in these cases to extricate the research component from the larger purpose.
A legal concept that grew out of the National Cooperative Research Act of 1984 is the "research joint venture." The Act defined a joint venture as any of a number of activities carried out by two or more persons for the purpose of theoretical analysis, the development of basic engineering techniques, the extension of experimental findings or theory into practical applications for experimental and demonstration purposes, and the collection and exchange of research information.
Coursey and Bozeman (1989), in studying industry-government laboratory partnerships, defined such a partnership as "any arrangement, formal or informal, whereby at least one government laboratory and one private firm jointly develop and/or obtain technical knowledge" (p. 8). And although this definition was created specifically to examine the role of government labs, it comports easily with the SRP notion, namely collaborative R&D arrangements involving at least one industrial firm and one other organization. (For empirical data using this conceptualization, see Crow and Bozeman, 1998.)
But what are the relationships and collaborative arrangements that make up these partnerships? According to Coursey and Bozeman (1989), they may include:
Mowery (1998) proposes a sector-based conceptualization including industry-led consortia, collaborations between universities and industry (not of the first type), and collaborations between industry and federal laboratories, often making use of formal cooperative research and development agreements (CRADAs).
As Hagedoorn, Link, and Vonortas (2000) point out, SRPs can be classified by their membership (e.g., public, private organizations) or by the organizational structure of their relationship. This latter category includes informal versus formal arrangements; within formal arrangements, SRPs can further be divided into equity joint ventures focusing on R&D (i.e., "research corporations") and contractual agreements they call research joint ventures. "Research corporations are created by at least two firms that combine their R&D skills and resources through equity joint ownership of a separate firm, and generally this new firm or child performs only R&D that fits within the broader context of the research agenda of the parent firms...Research joint ventures [on the other hand], such as joint R&D pacts or consortia to cover nonequity agreements, are created so that firms and other organizations can pool resources in order to undertake joint R&D activities" (Hagedoorn, Link, and Vonortas, 2000, p. 569).
Hagedoorn, Link, and Vonortas (2000) provide a useful review of the theoretical streams running through the SRP literature. Various frameworks have been applied in attempting to understand these linkages, the most prominent in the literature being the minimization of transaction costs (Gulati, 1995; Gulati and Gargiulo, 1999; Osborn, Hagedoorn, Denekamp, Duysters, and Baughn, 1998). Transaction cost theorists posit that the anticipated cost of a research transaction governs the form of the alliance. In addition to transaction costs, Hagedoorn and colleagues (forthcoming) review strategic management and industrial organization (including tournament and non-tournament) frameworks. The strategic management literature has introduced the notion of the strategic network not only as a new form of organization but as a strategy. This same literature has sought to understand interfirm alliances in terms of firms as sets of resources (i.e., depositories of useful assets) or as the dynamic capabilities they embody (i.e., firms as expert doers). Strategic management scholars therefore emphasize not just the minimization of transaction costs but the ability of firms to accomplish something altogether unlike what they would be able to accomplish otherwise.
Some examples of SRPs include joint ventures within the Department of Commerce's Advanced Technology Program (which are focused on industrial firms but may include university and government labs), the Industry-University Cooperative Research Centers and Engineering Research Centers of the National Science Foundation (which are headquartered at universities but include industrial firms), and government-initiated consortia such as Sematech and the Microelectronics Research Center. But SRPs need not be government sponsored, and, in reality, the majority are not. They need not be large, and they need not be as visible as, say, the Semiconductor Research Alliance.
An important part of understanding the importance of SRPs is understanding their function, their raison d'être. Mowery (1998) offers some potential benefits of SRPs: (1) enabling member firms to appropriate knowledge spillovers that would otherwise be lost, (2) enjoying the scale economies of research, (3) reducing duplicative R&D performed by the members, (4) speeding the commercialization of new technologies, (5) facilitating the transfer of knowledge from universities to industry, and (6) providing access to the unique capabilities of government labs. Hagedoorn and Schakenraad (1992) add the sharing of costs and uncertainty, access to complementary technologies, learning new tacit technologies, monitoring environmental changes, entering foreign markets, and expanding product range. (See also Hagedoorn, Link, and Vonortas, 2000, for a review of SRP benefits.)
March's (1991) work on organizational learning points to a potentially useful distinction for understanding SRPs: the difference between exploitation and exploration, the former focusing on making use of existing competencies and capabilities, the latter on the investigation of new alternatives. This categorization may be useful in understanding firms' decisions to "make or buy" (i.e., look internally or externally for) research. Moreover, Powell and colleagues (Powell, Koput, and Smith-Doerr, 1996) argue that when knowledge is rapidly changing, dispersed, and fragmented among different parties, the "locus of innovation" moves to interorganizational relationships.
As we can see from the literature, a variety of conceptual problems serve as barriers to the development of indicators of SRPs. Let us simply enumerate some of the leading questions implied by current uses of the term and by existing typologies. First, are SRPs entirely formal research vehicles, or do they also include informal collaborative research? This question is vital, and there are good arguments both for and against including informal collaboration. The best argument for a more expansive definition is that many informal collaborations are extremely important and productive, whereas at least some formal collaborative arrangements are symbolic and may be of very little value whatsoever. The chief argument for focusing only on formal collaborations is that they are much more easily identified, and it is possible to set boundaries, albeit artificial ones, around them. The notion of informal collaboration is so difficult to bound that one has difficulty determining how to proceed to actually identify instances of it (Powell et al., 1996). Most studies that include informal collaboration in their definitions of SRPs rely almost exclusively on interview and questionnaire data. While interview and questionnaire data obviously have a great deal of utility, their construct validity for SRPs poses potential problems.
Second, when do strategic research partnerships begin and end? This is closely related to the question of whether formal and informal strategic research partnerships are considered together. In most instances, if there is a formal document referring to the SRP, it is time bounded. In informal relationships, however, it is difficult to set an end boundary, simply because it is quite possible that the same configuration of people and resources will come together in the future for additional research purposes. Moreover, even in the case of formal relationships it may well be true that the actual research accomplished bears only a limited relationship to the documents prescribing the research.
Third, how does the fact of a strategic research partnership relate to its performance and intensity? The most common approach to measuring SRPs is simply to count them. While counting is itself not an altogether straightforward task without some workable definition of SRPs, it is still more difficult to determine the level of intensity of a partnership. For example, if one firm has ten SRPs and another has a single SRP, what are we to infer from that information? It may well be that each of the ten SRPs is relatively inconsequential and has little bearing on the course or the productivity of the firm. By contrast, it is at least possible that the firm with a single SRP has bet everything on that partnership and has committed the full capacity of its research and development abilities to it. However, as we shall see, measuring the intensity of an SRP is no easy matter. It gets worse. Even if we are able to measure the intensity of an SRP, measuring its productivity (even its productivity to the firm) is not easy. It is quite possible for a firm to be involved in an SRP and commit tremendous resources to the partnership, and not only may the research not be productive, it may be difficult even to know whether it has been productive. Here we face all of the usual difficulties of assessing R&D productivity, the most prominent of which is that there are differential time horizons for the payoff of R&D and, just as important, the firm may not be able to internalize fully the payoffs that accrue because of an inability to appropriate them.
Fourth, what is the impact of an SRP on aggregate economic growth? Even if one knows the productivity of a partnership as it accrues to the participants, one cannot easily scale up to knowledge of the social and economic impact of the partnerships. Once again, we encounter a wide variety of familiar problems common to assessments of R&D productivity, the most prominent being the difficulty of measuring spillover effects from the firm's economic activity. Fortunately, there are a number of standard approaches to making such assessments, and it is likely that the weaknesses of these approaches are no greater (and certainly no less) for the question of SRPs than for the more general question of the contribution of technological change to economic growth.
Fifth, to what extent does the construct of SRPs conform to the various measures and methods for assessing them? Let us assume that we have a viable construct for an SRP, an assumption that is patently optimistic but one which we hope will be at least a little more valid once we identify further conceptual problems. The existence of a satisfactory construct does not imply the existence of satisfactory measures and indicators for that construct. Thus the utility of the construct relates closely to the set of measurements and methodological assumptions that accompany its use. For example, many studies relying on questionnaires and interviews may be of limited utility for operationalizing satisfactory constructs pertaining to SRPs (Mowery, 1998). We will return to this question at a later point when we consider specific indicators in connection with the ways one might go about developing satisfactory measures of those indicators.
Sixth, how does the international and global nature of strategic research partnerships affect the appropriate conceptualization of them (Gassmann and von Zedtwitz, 1999)? Clearly, many of the most important SRPs involve firms from two or more nations. However, much of the interest of policymakers has geographic boundaries. That is, policymakers are interested in the productivity of SRPs not in terms of their contribution to the gross global product but to the gross domestic product. Thus, it is useful to demarcate, even at the outset, between SRPs that flow across national boundaries and those that do not.
Many of the characteristics of R&D policy indicators should be judged by much the same criteria one would employ in assessing any indicator or variable. Such features as policy relevance, simplicity, sensitivity, validity, stability, reliability, and adequacy of index properties are as important to SRP indicators as they are to R&D (or any) indicators in general (see, for example, Bozeman and Melkers, 1993). Thus, we do not dwell on the need for such traditional methodological and use standards. However, we do address methodological issues that seem particularly acute in the context of research and development policy indicators.
A good place to start is with a specific definition of strategic research partnerships. As we conceive it, an SRP is any arrangement, formal or informal, whereby at least one commercial enterprise and one other organization jointly develop and/or obtain technical knowledge. Because this definition is rather broad, descriptive indicators are the logical first step in helping us to understand the phenomenon. Essentially, a first set of indicators should answer the questions: What are SRPs? In what forms do they come? How common are they? And what do they do? Generally speaking, and putting aside conceptual and methodological uncertainties for the moment, this descriptive group might include the frequency of SRPs, their structure, formality or codification, goals, life cycle, geographic location, resource investments, and R&D process indicators (see Table 1).
Table 1

| Indicator Set | Indicator | Level (unit of analysis) | Example |
|---|---|---|---|
| Descriptive | Frequency | Industry | Counts of SRPs by industry class |
| | Goals | SRP | Purpose of SRP; reasons firms entered SRP |
| | Life Cycle | SRP | Duration or planned duration of SRP |
| | Publicness | SRP | Sectoral origin of partners |
| | Geographic Locale | SRP | Domestic, international; country makeup; domestic zip code of firms |
| Input | Resources | Organization | Financial investments in SRP |
| Process | R&D Type | Organization; SRP | R&D type (e.g., basic, applied, testing); R&D field (e.g., biotechnology) |
| Output | R&D Impacts | SRP | Publications and patents, citations; cross-firm patent citations; control of (access to) intellectual property |
| | Product Development | Firm | New/improved product(s) |
| | Technology Transfer/Diffusion | SRP | New capabilities/organizational learning; social rate of return |
| | Knowledge Value/Capacity | Any | Human capital improvements including student participation |
While indicators of these characteristics are important in understanding the SRP landscape, an important next step is formulating indicator categories that will aid in understanding the policy relevance and importance of SRPs compared with other R&D and business practices. This objective in indicator development suggests that performance indicators, such as measures of SRP outcomes, may be desirable. Again, leaving aside conceptual and methodological murkiness, this category might include measures of R&D impacts, economic performance, product development, technology transfer, and capacity building (see Table 1).
These indicator categories perhaps raise as many questions as they answer, for few of them are simple to put into practice. Some of these questions include:
In a concluding section we provide our recommendations on a few needed "first steps" in developing SRP indicators.
We have raised many questions and provided few answers. But at this point it may be premature to do much more than that. In the first place, although many of our questions seem to us quite basic, few have yet received much attention. Second, there is no need to reinvent the wheel. The NSF and many other agencies have accumulated a great wealth of experience and craft in developing indicators. We have tried to focus on issues specific to SRP indicators, but SRP indicators have much in common with most science and technology indicators. So a good question to begin with (one answered more easily by others) is "In developing the indicators that have been the backbone of Science and Engineering Indicators, what has been learned and how can we apply it?" We have not taken that important step, but it needs to be taken.
In our judgment, the first priority for developing SRP indicators is to settle on an acceptable operationalization. In all likelihood, the best approach is to focus on formal SRPs, because an approach based on informal interaction renders already difficult measurement problems virtually impossible. Nevertheless, it is important to recognize that many SRPs are more symbolic than real, and any system of indicators must be able to distinguish between the two.
After settling on an acceptable operationalization, the next step in developing indicators is to pursue simple descriptive information about the population of SRPs. Not much can be done until the SRPs have been satisfactorily identified.
In choosing among the many possible conceptualizations and indicators of SRPs, a framing question should be: "What is the public stake?" To be sure, much of science and technology policy (not to mention tax policy and economic development policy) proceeds on the assumption that private benefit translates into public benefit via economic growth. In many instances this is an entirely plausible argument, at least to a point. However, public investment in science and technology is most often rationalized in terms of market failure or public domain benefits. Arguably, the fact that SRP activity has great economic import (still an open question) does not imply an equivalent governmental or public policy importance. This seems to imply, then, that SRP indicators should focus on such factors as the composition of R&D (e.g., do SRPs shift the balance of available public domain research?), impacts on labor and human resources, tax implications (e.g., use of tax credits, foregone revenue and its impacts), and, especially, public-private partnerships (e.g., how are government-sponsored or -brokered SRPs different from others?).
Having suggested that the sphere of indicators be limited, we nonetheless emphasize the need to move forward in examining SRPs and developing valid, reliable indicators. Although we still do not have enough knowledge to assess the impacts and effects of SRPs, we do know, if only from anecdote and casual empiricism, that their activities are too important to U.S. research capability for us to rely entirely on anecdote and casual empiricism.