The Changing Research and Publication Environment in American Research Universities
Appendix B. Method
Participation in the study was voluntary, both for the institutions selected and for the individuals who chose to meet with the study team. At each institution, the team relied on assistance from a university executive responsible for research or graduate education—typically a vice president, dean, or provost—to arrange individual interviews and group meetings with faculty and administrators. The team requested interviews and meetings with people in certain broad disciplinary categories (e.g., social scientists, engineers) or administrative roles (e.g., heads of centers), but the university selected and recruited individuals in those categories. When members of the team held group meetings, they tried to bring together people with some common administrative or research experience—department chairs, for example, or biomedical researchers. Citing the time burden involved, two of the universities contacted expressed reluctance to host a visit, and the team decided to visit other universities instead.
There was probably some selection bias both in whom the universities invited to meet with team members and in who accepted the invitations. Although the team emphasized the value of talking with researchers across the range of the sciences, it is possible that more of those interviewed had some kind of connection to the National Science Foundation (NSF) or the kinds of science it supports than would have been the case had they been selected strictly at random. In addition, some informants, including several advocates of open-access publishing, may have chosen to participate because of strongly held views about certain aspects of scientific publication.
It is doubtful, however, that selection biases substantially distort the information received. The issues most central to the study are largely factual, including, for example, changes in research collaboration and interdisciplinarity, the effects of new information technologies, the balance between quality and quantity in university rewards for research, and trends in the amount and quality of research done overseas. These issues are generally not so value- or policy-laden as to motivate participation or skew perceptions. Moreover, variations in opinions (e.g., about the merits of "big science") did not appear to be closely related to differences in factual observations (e.g., about the growth of "big science").
The study informants were mainly senior researchers and administrators, and their perspectives were doubtless affected by their seniority. Furthermore, their perspectives on how research and publication functioned in the past were developed at times when, for the most part, they were in more junior positions. In discussing differences over time, a few informants appeared concerned that changes in their own roles as they assumed more senior status might distort their perceptions of changes in the research and publication environment. Most did not appear troubled by this issue, and some, when it arose, professed confidence that they could distinguish between these two different reasons for perceived changes. It is doubtful that the value of informants' observations was seriously compromised by this potential bias, in part because the study team sought to question informants in ways that probed the relationship between seniority and publication-related behavior. Commenting on a draft of this report, one reviewer noted that it appeared to stress relatively recent developments more than changes that occurred early in the period under study. It is certainly possible that study informants' recollections involve a bias toward recent changes.
In any event, any biases that affect the frequency with which the team heard different observations or perspectives are not likely to be consequential. Beyond making relatively broad statements about frequency—"some," "many," "a few"—the study team did not analyze the data quantitatively. The data are presented as the insights and observations of knowledgeable informants about the parts of the research and publication process with which they are most familiar. These insights and observations function less to prove propositions than to sensitize the science and engineering (S&E) community to the dynamics of research and publication and the contexts in which they occur, suggesting hypotheses that can be tested in more systematic research and ways to interpret relationships found in the quantitative data.
Interviews and meetings were semistructured; they followed a general pattern, but the interviewer or group leader made occasional strategic deviations from it when warranted. The study team used written topical guides to facilitate coverage of relevant topics and suggest response probes. These guides were pretested in interviews and focus groups with university-based scientists and engineers who were working at NSF on a temporary basis. The team developed different guides for people in different administrative positions, and different guides for interviews and group meetings, but the overlap among the various guides was substantial. With some informants, the team members were able to ask practically all of the questions in the guide and to pose them in the sequence in which they were listed. In these cases, the guide functioned almost like a questionnaire in a structured interview. More often, however, the guide provided only an approximate order for raising issues, a suggested question wording, and a reference to ensure that major issues were not neglected. Typically, informants discussed several related topics in response to a question, and probes led them to discuss still other issues. In addition, questions were omitted when it became apparent that informants either lacked relevant experiences (e.g., participation in international collaborations) or did not treat particular distinctions (e.g., among different kinds of collaborators) as important. Sometimes, too, time limitations forced team members to be selective in the questions raised. Appendix C provides examples of interview and meeting guides.
The study team tried to divide its time evenly between individual interviews and group meetings. The two had complementary advantages: interviews provided more depth about particular circumstances and individual perspectives, while group interaction probed, validated, and refined individual observations. The difficulty of arranging group meetings meant that somewhat more time was spent in individual sessions and that groups often contained fewer than the 6–10 participants requested.
Three researchers from NSF's Division of Science Resources Statistics, trained in either sociology or economics, conducted the interviews and meetings. All had substantial familiarity with research and publication in S&E and with how universities are administered. Two had acquired extensive interviewing experience earlier in their research careers. The third participated with an experienced colleague in most of the pretest interviews at NSF and in the first two university visits, where the two researchers conducted the interviews and meetings jointly. Each of the other visits was made by a single researcher. Interviewers took notes during the sessions they conducted and also taped almost all of them. The observations that the different members of the research team recorded and the impressions they received were substantially similar; the team has no reason to believe that which individual conducted an interview or meeting had any meaningful impact on the data that were generated.