Scientific and Engineering Research Facilities at Colleges and Universities: 1998

Appendix A
Technical Notes


This appendix discusses the study methodology as well as other technical aspects that the reader should consider when interpreting the data presented in this report. In addition to the current 1998 survey, the discussion covers the original 1988 survey and the 1990, 1992, 1994, and 1996 surveys. The following topics are covered:

Sampling Procedures and Response Rates

A. Academic Institutions

1988 Survey

The 1988 survey was designed to provide estimates for all research-performing academic institutions as defined in the National Science Foundation’s (NSF) fiscal year (FY) 1983 Survey of Scientific and Engineering Expenditures at Universities and Colleges. The universe datafile for the 1983 expenditures survey included all universities and colleges that offered a master’s or doctorate degree in science and engineering (S&E), all others that reported separately budgeted S&E research and development (R&D) expenditures of $50,000 or more, and all Historically Black Colleges and Universities (HBCUs) that reported any R&D expenditures. This datafile represented the most recent available universe survey of R&D expenditures at academic institutions. The datafile contained a total of 566 institutions.

All HBCUs in the frame were included in the sample with certainty (N=30), and a stratified probability sample of 223 institutions was selected from among the remaining institutions in the frame. These institutions were first stratified by control (public versus private) and highest degree awarded in S&E (doctorate-granting versus nondoctorate-granting). A minimum sample size of 25 was set for each of the four resulting strata, and the remaining sample was allocated to strata in proportion to the “size” of each stratum. Stratum size was defined as the square root of the aggregate R&D expenditures in S&E of the institutions in the stratum. Academically administered Federally Funded Research and Development Centers were excluded from this survey. Within strata, institutions were sampled with probability proportionate to size. Again, size was defined as the square root of the institution’s FY 1983 R&D expenditures.
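To make the allocation rule concrete, the following short sketch (in Python, using invented expenditure figures; the function name, data, and simple rounding are illustrative only, not part of the survey documentation) allocates a sample across strata in proportion to the square root of aggregate R&D expenditures, with a floor of 25 per stratum:

    import math

    def allocate_sample(strata, total_n, minimum=25):
        # Each stratum receives the minimum; the remainder is allocated in
        # proportion to stratum "size," defined as the square root of the
        # stratum's aggregate R&D expenditures. Rounding is simplified here.
        sizes = {name: math.sqrt(sum(exp)) for name, exp in strata.items()}
        remaining = total_n - minimum * len(strata)
        total_size = sum(sizes.values())
        return {name: minimum + round(remaining * s / total_size)
                for name, s in sizes.items()}

    # Hypothetical strata (control x highest degree); values are FY 1983
    # R&D expenditures in thousands of dollars.
    strata = {
        "public doctorate":     [50_000, 120_000, 8_000],
        "private doctorate":    [90_000, 30_000],
        "public nondoctorate":  [400, 900, 60],
        "private nondoctorate": [250, 75],
    }
    print(allocate_sample(strata, total_n=223))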

Following the selection of an initial sample of 253 institutions, NSF determined that several of the sampled institutions were out of scope for the survey. Out-of-scope institutions included those in outlying territories, military academies, and three highly specialized institutions considered inappropriate given the nature of their programs. Elimination of these out-of-scope cases reduced the final sample to 247 institutions, of which 29 were HBCUs and 99 had (or were) medical schools.

Institutions in the sample accounted for more than 75 percent of all academic R&D expenditures in FY 1983 and encompassed at least 70 percent of the spending in each major S&E discipline. The sample represented a weighted national total of 525 institutions. The composition of this survey universe by type of institution is shown in table A-1.

Table A-1. Number of institutions in the survey universe of research-performing colleges and universities:  weighted estimates, 1988
Table A-1 (Spreadsheet format)
1990 Survey

The institution sample for the 1990 survey was the same as for the 1988 survey, except for two changes:

These changes produced a net increase of six institutions, raising the sample size to 253 in 1990. The universe represented by the sample, however, did not change.

1992 Survey

The institution universe and sample for the 1992 survey were the same as for the 1990 survey, except for three changes:

Of the 91 sampled nondoctorate-granting institutions, nine were later determined to be out of scope, since they reported in the 1992 facilities survey that they had no S&E research space and also reported in the 1988 R&D expenditures survey (which provided the basis for the sampling frame) that they had less than $50,000 in separately budgeted R&D expenditures. The exclusion of these out-of-scope institutions reduced the sample of nondoctorate-granting institutions to 82.

1994 Survey

The institution universe and sample for the 1994 survey closely matched the 1992 survey, with the following exceptions:

Of the 314 sampled institutions, five nondoctorate-granting institutions were later determined to be out-of-scope, because they reported no S&E research space. The exclusion of these out-of-scope institutions reduced the sample to 309.

1996 Survey

The institution universe and sample for the 1996 survey were the same as the universe and sample for the 1994 survey. No institutions were added, and none were deleted.

Seven of the nondoctorate-granting institutions in the sample reported no S&E research space in their survey response and were determined to be out of scope. The exclusion of these seven institutions reduced the sample to 307.

1998 Survey

The sampling frame for the 1998 survey was increased to 675 institutions, drawn from the most recent census of institutions reported in the 1993 Academic R&D Expenditures Survey, to provide additional coverage of Hispanic-serving institutions and non-HBCU-Black institutions. Fifteen institutions in the sampling frame reported no S&E research space and were determined to be out of scope for the current survey. The exclusion of these institutions reduced the universe to 660 institutions. The universe was divided into the following nine strata to ensure representativeness:

  1. The top 100 colleges and universities in terms of the size of R&D expenditures, where size was defined as the square root of the 1993 R&D expenditures in thousands;

  2. The original panel of 29 HBCUs that has been selected into the sample with certainty since the 1988 NSF Facilities survey;

  3. The remaining 35 HBCUs in the sampling frame;

  4. Non-HBCU-Black institutions—institutions that enrolled at least 25 percent black students according to the Integrated Postsecondary Education Data System (IPEDS);

  5. Hispanic-serving institutions—institutions that enrolled at least 25 percent Hispanic students according to IPEDS;

  6. Other public doctorate-granting institutions;

  7. Other private doctorate-granting institutions;

  8. Public nondoctorate-granting institutions; and

  9. Private nondoctorate-granting institutions.

Because these strata are not mutually exclusive categories, they were defined in a hierarchical manner. Stratum 1 was formed first, so that all institutions in the top 100 were included irrespective of whether they could have been included in any other stratum. Stratum 2, the 29 HBCUs in the sample since the 1988 NSF Facilities survey, was formed second. Stratum 3, the remaining 35 HBCUs, was formed third. Stratum 4, the 13 institutions that enrolled at least 25 percent black students yet were not HBCUs, was formed fourth. The first four strata are mutually exclusive groups (i.e., no HBCU or non-HBCU-Black institution is found in the top 100). In the universe of all research-performing institutions with S&E research space, there were 13 institutions that enrolled at least 25 percent Hispanic students. Four of these institutions, however, had already been selected into other strata. Thus, Stratum 5 includes only nine institutions. Institutions in the first five strata were all selected into the sample with certainty (i.e., every institution was part of the sample).

The remaining 481 institutions in the universe formed the final four strata based on institution type (i.e., doctorate-granting versus nondoctorate-granting) and institutional control (i.e., public versus private). Within each of these four strata, institutions were sampled using a probability proportional to size (PPS) sampling scheme, so that larger institutions were selected with higher probability than smaller ones. The size of an institution was defined as the square root of its 1993 R&D expenditures in thousands. Within these strata, the minimum size measure was set at 40 for doctorate-granting institutions and for public nondoctorate-granting institutions, and at 11 for private nondoctorate-granting institutions.
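As an illustration of how the size measure and the minimum interact, the sketch below (Python; the expenditure values are hypothetical, and it assumes the stated minimums act as floors on the size measure) computes approximate within-stratum PPS inclusion probabilities:

    import math

    def selection_probs(expenditures, n, min_size):
        # size_i = max(sqrt(expenditures_i in thousands), min_size);
        # inclusion probability is n * size_i / total size, capped at 1.
        sizes = [max(math.sqrt(e), min_size) for e in expenditures]
        total = sum(sizes)
        return [min(1.0, n * s / total) for s in sizes]

    # A hypothetical public nondoctorate-granting stratum (floor of 40).
    print(selection_probs([2_500, 10_000, 900, 100], n=2, min_size=40))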

Table A-2, below, presents the number of institutions in the sampling frame, eligible population, sample, and respondents, by stratum, as previously described.

Table A-2. The number of academic institutions in the sampling frame, eligible population, sample, and the number of respondents, by stratum: 1998
Table A-2 (Spreadsheet format)

The overall response rate for the 1998 survey was 86.9 percent. The response rate varied from 100 percent of the top 100 institutions to 73.2 percent of institutions sampled from stratum nine.

Table A-3 presents the number of non-HBCU institutions by institution type in the universe in all survey periods between 1990 and 1998.

Table A-3. Number of respondent non-HBCU institutions in the 1990, 1992, 1994, 1996, and 1998 samples of research-performing colleges and universities by institution type and institutional control
Table A-3 (Spreadsheet format)

Table A-4 presents the number of institutions within each stratum by institution type and control. Seventy of the top 100, 143 of the other doctorate-granting, and 151 of the nondoctorate-granting institutions are public institutions.

Table A-4. Number of academic institutions by sampling stratum, institution type, and institutional control:  1998
Table A-4 (Spreadsheet format)

Thirty of the top 100, 134 of the other doctorate-granting, and 131 of the nondoctorate-granting institutions are private institutions.

Table A-5 presents the number of HBCU, non-HBCU-Black, and Hispanic-serving institutions within each stratum. Only Strata 2 and 3 contained HBCUs. All non-HBCU-Black institutions fell within Stratum 4. The 13 Hispanic-serving institutions were drawn from Strata 1, 4, and 5. Three minority-serving institutions had enrollments of at least 25 percent black and at least 25 percent Hispanic students. These institutions were considered non-HBCU-Black institutions in all analyses in this report.

Table A-5. Number of minority institutions by sampling stratum: 1998
Table A-5 (Spreadsheet format)

Table A-6 presents the number of HBCUs with S&E research space in the universe by institution type in each of the surveys between 1990 and 1998.

Table A-6. Number of Historically Black Colleges and Universities (HBCUs) in the 1990, 1992, 1994, 1996, and 1998 samples of research-performing colleges and universities
Table A-6 (Spreadsheet format)

B. Research Organizations and Hospitals

In preparation for the 1988 survey, the National Institutes of Health (NIH) provided listings of all hospitals and nonprofit research organizations that received extramural research funding from NIH during FY 1986. A small number of agencies and institutions that primarily conduct public information dissemination or other nonresearch activities were eliminated from the listings.

Samples of 50 hospitals and 50 research organizations were selected from the listings, with probability proportional to size, as measured by total dollar awards from NIH in FY 1986. It was determined during data collection, however, that there was some duplication in the listings. Some nonprofit research institutions were located within hospitals and shared the same facilities, and some of the research organizations were units within other sampled research organizations. In addition, some institutions were classified as out of scope based on their reports that they contained no research space (e.g., because their research grants had expired or because their current research was conducted entirely off premises). Elimination of duplicate and out-of-scope institutions reduced the 1988 sample to 47 research organizations and 42 hospitals.

In 1994, an updated list of hospitals and research organizations that received extramural research funding from NIH during FY 1992 provided the sampling frame. Fifty hospitals and 50 research organizations were initially selected. One institution was eliminated from each of these samples either because it was a duplicate or out of scope for this study. This resulted in a sample of 49 hospitals and 49 research organizations. Like the sample of academic institutions, the 1996 sample of hospitals and research organizations was the same as that used in 1994.

The sampling frame for the 1998 survey included 126 hospitals and 175 research organizations, drawn from an updated list of institutions receiving funding from NIH in FY 1997. One hospital and four research organizations were eliminated from this sampling frame because they were out of scope for this study, resulting in an eligible population of 125 hospitals and 171 research organizations. Forty-six research organizations and 49 hospitals were sampled using a PPS sampling scheme, so that larger institutions were selected with higher probability than smaller ones. The measure of size was defined as the total dollar amount of NIH research funding each institution received in FY 1997. The PPS selection was accomplished using a systematic sampling scheme. With systematic PPS sampling, each selection represents a fixed portion of the total population—in this case, a portion of the total dollars in grant awards. Institutions whose awards exceeded this portion were included in the sample with certainty. Sixteen research organizations and 29 hospitals were selected with certainty; the remaining 30 research organizations and 20 hospitals were selected probabilistically.
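The mechanics of systematic PPS selection, including the certainty rule, can be sketched as follows (Python; the award figures and institution names are invented, and the handling of certainty units is simplified relative to production sampling software):

    import random

    def systematic_pps(awards, n):
        # Peel off certainty units: any institution whose award total is at
        # least the sampling interval (remaining dollars / remaining slots)
        # is selected with certainty.
        certain, rest = [], dict(awards)
        while len(certain) < n:
            interval = sum(rest.values()) / (n - len(certain))
            big = [k for k, v in rest.items() if v >= interval]
            if not big:
                break
            certain += big
            for k in big:
                del rest[k]
        m = n - len(certain)
        if m == 0:
            return certain, []
        # Systematic selection of the rest: take m equally spaced "hits"
        # along the cumulated award dollars, starting at a random point.
        interval = sum(rest.values()) / m
        start = random.uniform(0, interval)
        hits = [start + i * interval for i in range(m)]
        chosen, cum, h = [], 0.0, 0
        for name, dollars in rest.items():
            cum += dollars
            while h < m and hits[h] < cum:
                chosen.append(name)
                h += 1
        return certain, chosen

    # Hypothetical FY 1997 NIH award totals (dollars).
    awards = {"A": 9_000_000, "B": 5_000_000, "C": 800_000,
              "D": 600_000, "E": 400_000, "F": 200_000}
    certainties, sampled = systematic_pps(awards, n=4)  # A and B are certain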

Table A-7 presents the number of institutions in the sampling frame, eligible population, sample, and respondents, by stratum, as previously described.

Table A-7. The number of research organizations and hospitals in the sampling frame, eligible population, sample, and the number of respondents, by stratum: 1998
Table A-7 (Spreadsheet format)

Eighty-three of the 95 sampled research organizations and hospitals (87.4 percent) completed the survey.

Biomedical institutions are the focus of chapter 9 of this report. There are five mutually exclusive categories of biomedical institutions:

  1. Colleges and universities with no affiliated medical school;

  2. Colleges and universities with an affiliated medical school;

  3. Independent medical schools;[3]

  4. Research hospitals; and

  5. Nonprofit research organizations.

Colleges and universities with an affiliated medical school are counted as both a college or university and as a medical school in all tables reporting the number of institutions. Their biological and medical science research space—existing, needed, constructed, deferred, and repaired/renovated—and the associated expenditures are divided between the college or university and the medical school categories depending on whether the research space or capital project was designated as inside or outside a medical school. That is, while the institution is counted twice, its research space and associated costs are not.

Two notes of caution are necessary regarding the medical school information. First, a few institutions reported no existing medical school research space yet reported actual or planned construction or repair/renovation of medical school research space. Thus, the “medical school” category does not refer to a constant group of institutions across all tables in Chapter 9. Second, the number of medical schools is based on the sum of the weights of the institutions with research space inside medical schools. Medical schools were not an explicit stratum in the sampling scheme. Thus, the number of medical schools reported may not reflect the actual number of medical schools in the universe.

Table A-8 presents the number of institutions within each stratum by institution type that reported existing research space in the biological or medical sciences, inside and outside of medical schools.

Table A-8. Number of institutions with biomedical research space by sampling stratum: 1998
Table A-8 (Spreadsheet format)

Out of the 956 institutions in the eligible population, 908 reported existing biomedical research space. The majority of the 48 academic institutions with no biomedical research space were nondoctorate granting.

Questionnaire Content

The 1998 survey questionnaire, reproduced in Appendix C [PDF], updated information collected during earlier (1988, 1990, 1992, 1994, and 1996) surveys regarding several topics:

In addition to collecting updated information on the above topics, the 1998 questionnaire added two new questions:

The response categories for one question were modified slightly in 1998 from previous years’ surveys. When classifying the current condition of research space, the 1998 survey distinguished between research space that requires major renovation to be used effectively and research space that requires replacement. In 1996, these two categories were combined.

In addition, a modification was made to the categorization of laboratory animal facilities in relation to government regulations. In 1998, the categories reflect the four levels of Animal Biological Safety, as described in Biosafety in Microbiological and Biomedical Laboratories.[4]

Finally, the 1998 questionnaire eliminated the question used in 1996 regarding the status of the institutions relative to the cap on tax-exempt bonds (applicable only to private universities and colleges).

World-Wide Web Survey

For the first time since the facilities survey began in 1988, institutions had the option in 1998 of responding either on the printed questionnaire or through an Internet-based version of the survey on the World-Wide Web. Institutions were encouraged to use the Internet version, which contained their 1996 responses. The Internet version was programmed to detect logic errors across the 1998 survey items, as well as inconsistencies with the institution’s 1996 responses. Each institution was assigned an individual login and password to access the Internet survey.

Data Collection

A. Academic Institutions

In January 1998, a letter from Neal Lane, Director of the National Science Foundation, was sent to the president or chancellor of each sampled institution asking that the institution participate in the study and that a coordinator be named for the survey. A letter of endorsement of the project signed by the heads of two higher education associations also was enclosed. A few days after the two-week deadline for returning the coordinator identification card, telephone follow-up was conducted with all sampled institutions that had not yet identified a survey coordinator. Survey materials, including printed surveys, instructions for the Internet version of the survey, and facsimiles of the 1996 responses for each institution were sent to the coordinator in mid-February by overnight mail. The questionnaire and cover letter requested return of the completed survey by March 31, 1998. At the end of March, few surveys had been returned and the deadline was extended to late April 1998. All institutions were notified of the extension. Nonresponse follow-up began in mid-March and continued through July 1998.

B. Research Organizations and Hospitals

In May 1998, a letter from Judith Vaitukaitis, Director of the National Center for Research Resources, was mailed to the president, CEO, or director of each sampled organization asking that the institution participate in the study and that a survey coordinator be named. Survey packets, including printed surveys, instructions for completing the Internet survey, and facsimiles of the 1996 responses for each institution, were sent to each coordinator on a rolling basis, beginning on June 6, 1998. Although the return deadline for the survey was June 30, 1998, few responses had been received by the end of July. The survey deadline was extended to September 25, 1998. Reminder phone calls were made and faxes were sent to nonrespondents to determine their participation status, beginning in mid-June and continuing through September.

As printed versions of the survey were returned, responses were entered into the Internet version to run the series of logic and arithmetic checks. Responses returned via the Internet version were available immediately for analysis. Telephone follow-up was conducted with institutions to resolve data inconsistencies discovered during analysis.

Item Nonresponse

After machine editing of questionnaire responses for completeness, internal consistency, and consistency with data from previous surveys, extensive telephone data retrieval was conducted to minimize the amount of missing data or otherwise problematic responses to individual questionnaire items. As a result of these persistent follow-up activities, most of the individual items had very low item nonresponse rates.

One exception was Item 1a, which requested the total amount of academic space in all disciplines outside S&E fields. As in previous surveys, this item was difficult for some institutions to answer and, though data retrieval was attempted, it had a higher nonresponse rate (20 missing responses, or 6.6 percent) than other items. Items on the amount (Item 1), adequacy or inadequacy assessment (Item 2), current condition (Item 3), completed construction and repair/renovation (Item 4), planned construction and repair/renovation (Item 6), and additional need (Item 7) of research space had fewer than 2 percent missing values in each field.

Missing values were imputed for questionnaire items that were included in the data analysis. Missing data on total academic space outside S&E fields were imputed based on the ratio of total academic space to total space in S&E fields. In Items 2 and 3, reported percentages were converted to NASF based on the amount of research space reported in Item 1. In Items 4, 6, and 8 (on completed capital projects, planned capital projects, and scheduled animal facility improvements), most missing values involved either missing costs or missing NASF, but not both. In these cases, the missing element was imputed from the reported one, using 1996 data on average cost per NASF.
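The cost/NASF step can be sketched as follows (Python; the record layout and the average-cost figure are hypothetical, for illustration only):

    def impute_cost_or_nasf(record, avg_cost_per_nasf_1996):
        # Impute whichever of cost or NASF is missing from the one that
        # was reported, using a 1996 average cost per NASF for the field.
        cost, nasf = record.get("cost"), record.get("nasf")
        if cost is None and nasf is not None:
            record["cost"] = nasf * avg_cost_per_nasf_1996
        elif nasf is None and cost is not None:
            record["nasf"] = cost / avg_cost_per_nasf_1996
        return record

    # A project reporting 5,000 NASF but no cost, at $180 per NASF.
    print(impute_cost_or_nasf({"cost": None, "nasf": 5_000}, 180))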

Missing values that could not be imputed using the above methods were imputed using a “hot deck” approach. This involved imputing the missing value from a “donor” institution that did provide the needed information and that was as closely matched as possible to the institution with the missing information in terms of control, type (doctorate-granting or nondoctorate-granting) and size of research expenditures.
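A minimal sketch of such a hot-deck step follows (Python; the matching rule shown, exact match on control and type plus closest research expenditures, is one plausible reading of “as closely matched as possible,” not the survey’s documented algorithm):

    def hot_deck_value(recipient, donors):
        # Prefer donors that match on control and type; among those, take
        # the donor with the closest research expenditures, then copy its
        # reported value to the institution with the missing item.
        def distance(d):
            return ((d["control"] != recipient["control"]) * 1e9
                    + (d["type"] != recipient["type"]) * 1e9
                    + abs(d["expenditures"] - recipient["expenditures"]))
        donor = min(donors, key=distance)
        return donor["value"]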

Weighting

After data collection, sampling weights were created for use in preparing national estimates from the data. First, within each weight class, a base weight was created for each institution in the sample. The base weight is the inverse of the probability of selecting the institution for the sample. Second, because some institutions in the sample did not respond to the survey, the base weights were adjusted in each weight class to account for this unit nonresponse. Finally, the weights were adjusted again to make the estimated number of institutions equal to the known number of institutions in various categories. For this final “poststratification” adjustment, the institutions were classified by type (top 100 in research expenditures, other doctorate-granting, or nondoctorate-granting), control, and HBCU status. The poststratified weights were used to produce the estimates shown in this report. The weighting procedures were essentially the same as those employed in the 1988, 1990, 1992, 1994, and 1996 studies.
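The three weighting steps can be sketched as follows (Python; the record keys, weight classes, and poststratum labels are hypothetical, for illustration only):

    def final_weights(sample, known_counts):
        # 1. Base weight: the inverse of the selection probability.
        for r in sample:
            r["w"] = 1.0 / r["p_select"]
        # 2. Nonresponse adjustment: within each weight class, spread the
        #    weight of nonrespondents over the respondents.
        for wc in {r["wclass"] for r in sample}:
            cls = [r for r in sample if r["wclass"] == wc]
            total = sum(r["w"] for r in cls)
            resp = sum(r["w"] for r in cls if r["responded"])
            for r in cls:
                r["w"] = r["w"] * total / resp if r["responded"] else 0.0
        # 3. Poststratification: scale weights so the estimated number of
        #    institutions matches the known count in each category.
        for ps, known in known_counts.items():
            cell = [r for r in sample
                    if r["poststratum"] == ps and r["w"] > 0]
            est = sum(r["w"] for r in cell)
            for r in cell:
                r["w"] *= known / est
        return sample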

Reliability of Survey Estimates

The findings presented in this report are based on a sample and are therefore subject to sampling variability. Sampling variability arises because not all institutions are included in the study. If a different sample of institutions had been selected, the results might have been somewhat different. The standard error of an estimate can be used to measure the extent of sampling variability for that particular estimate.

One of the ways that the standard error can be used is in the construction of confidence intervals. If all possible samples were selected and surveyed under similar conditions, then the intervals of two standard errors below the estimates to two standard errors above the estimates would include the average result of these samples in about 95 percent of the cases. Because only one sample is actually selected and surveyed, the standard error must be estimated from the sample itself. The interval constructed using the estimated standard error from the sample is called a 95-percent confidence interval. In this report, discussion is limited to group differences or changes over time that fell outside the 95-percent confidence intervals of the 1998 estimates.

Another way standard errors are used is to calculate coefficients of variation. The coefficient of variation is calculated by dividing the estimate’s standard error by the estimate. For example, if an estimate had a mean of 1000 and a standard error of 130, the estimate’s coefficient of variation would be 13 percent. In this report, discussion is limited to estimates whose coefficient of variation was less than 25 percent.
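Both quantities are simple functions of an estimate and its standard error, as the following sketch shows (Python, using the worked example from the text):

    def ci95(estimate, se):
        # 95-percent confidence interval: estimate +/- 2 standard errors.
        return estimate - 2 * se, estimate + 2 * se

    def cv(estimate, se):
        # Coefficient of variation: standard error divided by the estimate.
        return se / estimate

    print(ci95(1000, 130))  # (740, 1260)
    print(cv(1000, 130))    # 0.13, i.e., 13 percent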

In past reports, the standard errors were estimated using the jackknife repeated replication method. The jackknife replication method involves dividing the full sample into a number of replicates and estimating the standard errors based on the variability among these replicates. For the 1998 survey, the standard errors were generated using the Taylor series linearization method to approximate functions of linear statistics estimated from the sample. The statistical software package STATA was used for this variance estimation. Estimated standard errors for selected statistics are shown in table A-9.

Table A-9. Standard errors (S.E.) for selected estimates
Table A-9 (Spreadsheet format)
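For reference, the replicate-based calculation used in past reports reduces to a few lines once the replicate estimates are in hand (Python; a sketch of the standard delete-a-group jackknife formula, with the replicate estimates assumed to come from re-estimating with one replicate group removed at a time):

    def jackknife_se(replicate_estimates, full_estimate):
        # SE = sqrt((R - 1) / R * sum of squared deviations of the R
        # replicate estimates from the full-sample estimate).
        r = len(replicate_estimates)
        ss = sum((e - full_estimate) ** 2 for e in replicate_estimates)
        return (ss * (r - 1) / r) ** 0.5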

Data Considerations, Definitions, and Limitations

In addition to sampling errors, survey estimates can be adversely affected by nonsampling errors. Errors of this type include those resulting from the reporting and processing of data. In this survey, extensive follow-up with respondents was conducted to ensure that the data were as accurate as possible. This follow-up included a cross-year review that identified and resolved inconsistencies between the current and previous questionnaires.

Research Square Footage

In 1996 for the first time, and again in 1998, the survey included a definition of “net assignable square feet” (NASF). NASF was defined as the sum of all areas (in square feet) on all floors assignable to, or available to be assigned to, an occupant for specific use, such as instruction or research. It is unlikely that the inclusion of a definition had any effect on trends in this item.

Respondents were instructed to prorate the NASF and the cost of construction and repair/renovation projects to reflect the proportion of space used for science and engineering research. For example, if half the space of a new 20 thousand square foot biological sciences building costing $8 million was to be used for biological research and the other half for instruction, only the prorated net assignable square footage for research (which would be less than 10 thousand square feet, because net assignable space excludes nonassignable areas) and the prorated cost of construction for research ($4 million) were to be reported in the survey. Therefore, these figures do not reflect the total amount of space under construction or the total cost of a building or project.

Further, if multiple S&E fields shared research space, respondents were instructed to prorate the research construction and repair/renovation NASF and costs to reflect the proportion of use by each individual S&E field. If the prorated research construction or repair/renovation cost for an individual field was not over $100,000, the NASF and the costs were not to be reported in the survey.[5] However, some institutions’ responses for some fields may reflect the NASF and the cost of several projects summed together. Further, some projects at some institutions may extend across several fields and, therefore, their NASF and costs were reported for several S&E fields, if they were reported at all.

For example, if an institution committed $1 million to renovate a 100 thousand square foot biological sciences building, of which 45 thousand NASF and $450,000 were allocated equally to research facilities in the medical sciences, the biological sciences, and bioengineering, then 15 thousand NASF and $150,000 were prorated to each of these three fields, and the remaining square footage and the remaining $550,000 were not reported. If, however, the prorated costs were $350,000 for the medical sciences, $75,000 for the biological sciences, and $75,000 for bioengineering, the NASF and costs for the latter two fields (which sum to $150,000) would not be reported.
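The reporting rule in this example can be expressed directly (Python; the field names and dollar figures follow the text’s example, and the function name is illustrative):

    def reportable(prorated_costs):
        # A field's prorated research cost is reported only if it exceeds
        # $100,000; costs at or below that threshold are dropped.
        return {f: c for f, c in prorated_costs.items() if c > 100_000}

    print(reportable({"medical sciences": 350_000,
                      "biological sciences": 75_000,
                      "bioengineering": 75_000}))
    # -> {'medical sciences': 350000}; the two $75,000 fields fall below
    #    the threshold and are not reported.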

Finally, institutions’ facility record keeping systems vary considerably. In general, most of the larger institutions have central computerized facility inventory systems, often based on space surveys conducted specifically for OMB Circular A-21. Many institutions with smaller research programs are not required to calculate square footage for OMB Circular A-21, and do not maintain databases that can provide such information. These institutions had to calculate or estimate square footage information specifically for this study.

Condition and Adequacy of Research Facilities

Questions eliciting assessments of the condition of S&E research space or its adequacy are by their very nature subjective. Two persons may make different assessments of the same facility or have different opinions of what is required in order for a facility to be suitable for a particular type of research. Despite the subjectivity involved, these items do provide an overall picture of the current status of facilities.

In 1996, the wording and response choices for the questions assessing both the condition of the institution’s S&E research space and its adequacy were altered slightly from those used in previous years. Respondents were given only three possible choices for evaluating the adequacy of the amount of S&E research space: adequate, inadequate, or not applicable. In 1998, respondents were given four categories for assessing the condition of research space. In 1996, two of these categories, “C—requires major renovation to be used effectively” and “D—requires replacement,” were combined; in 1998, they are separate categories again. Thus, changes over time in these two items must be interpreted with some caution.

Capital Projects

Few institutions maintain information on construction and repair/renovation projects specific to research facilities. Many capital projects involve both research and nonresearch space. When a project was not dedicated exclusively to research, institutions had to estimate the proportion of the project that was related to research.

For projects taking more than one year to complete, institutions were asked to allocate the project costs to the fiscal year in which actual construction activity began or was scheduled to begin.

Because institutions use different dollar values to identify “major projects,” this survey established a guideline to ensure consistency of reporting. As in previous cycles of the survey, projects with costs over $100,000 associated with research facilities were included. In 1992, 1994, and 1996, the surveys also had a separate question about repair/renovation projects costing between $5,000 and $100,000.

In 1998, a new question was added. It asked institutions to list any nonfixed equipment costing $1 million or more that was included in their Item 4 costs for new construction or repair/renovation during FYs 1996 and 1997.

Dollar Amounts: Current Versus Constant Dollars

Since 1994, the facilities report has used both constant and current dollars. Tables in the body of this report are presented in 1997 constant dollars; tables in Appendix E, “Detailed Statistical Tables,” are in current dollars. Dollar amounts were adjusted using the Bureau of the Census’ Composite Fixed-Weighted Price Index for Construction. Unlike a more general index, this construction index closely tracks inflation within the construction industry. The index reflects only changes in prices and is unaffected by changes in the mix of construction projects in any given year. The index values for 1986–97 are presented below in table A-10.

Table A-10. Composite Fixed-Weighted Price Index for Construction inflation adjustments
Table A-10 (Spreadsheet format)
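The adjustment itself is a simple rescaling by the ratio of index values, as sketched below (Python; the index values shown are placeholders for illustration, not the table A-10 figures):

    def to_constant_1997(amount, year, index):
        # Convert current dollars to 1997 constant dollars using the
        # construction price index values for the two years.
        return amount * index[1997] / index[year]

    index = {1994: 96.0, 1997: 104.0}                # hypothetical values
    print(to_constant_1997(1_000_000, 1994, index))  # about $1.08 million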

Cost Per Square Foot Data

The study did not collect unit cost data for individual construction or repair/renovation projects. It collected only the aggregate research-related costs and the aggregate research space involved in all projects begun during specified periods. These aggregates can be combined into indices of average cost per square foot, which are useful in tracking broad cost trends over time. However, they are of little practical value as guidelines for project planning. By all accounts, unit costs for both construction and repair/renovation projects are highly variable, depending on the specific requirements of the particular project and on many other factors as well (e.g., geographic region of the country). Such differences, which are of crucial importance in project planning, are obscured in the kinds of multiproject averages that can be constructed from this study’s data.

Deferred Capital Needs

In both 1996 and 1998, institutions reported separately the construction and repair/renovation costs for projects included in institutional plans, as well as for projects not included in such plans. In addition, institutions were asked to report their estimated central campus infrastructure needs separately for construction and repair/renovation, both for projects in plans and for those not in plans. This provided a more complete estimate of deferred capital projects.

In addition to this estimate of research facility needs based on institutions’ reports of the S&E research construction and repair/renovation projects that had been deferred, the 1996 and 1998 surveys made additional efforts to measure this need. If institutions indicated that they had an inadequate amount of S&E research space in any given field (Item 2), they were asked to indicate the additional space needed. Institutions also were asked to report either the amount or percent of that space that was funded and scheduled to undergo major renovation or replacement (Item 3). It was thus possible to derive estimates of the amount of additional space needed and the amount of repair/renovation needed and not scheduled.

Both of these approaches, which are based on different assumptions, are believed to provide conservative estimates of the research facility needs of research-performing institutions.

A new item was added in 1998 asking the respondent to identify the amount of indirect costs recovered from Federal grants and/or contracts that was included in “institutional funds,” if institutional funds were a source of funding in Item 5a for any repair/renovation or new construction in fiscal years 1996 and 1997. Finally, one last item, the categorization of laboratory animal facilities in relation to government regulations, was modified in 1998. The categories used are the four levels of Animal Biological Safety as described in Biosafety in Microbiological and Biomedical Laboratories.[6]


Footnotes

[1] This is the Federal Interagency Commission on Education (FICE) number assigned by the Department of Education. Numbers beginning with 66 are for accredited institutions that have not yet received a FICE number. These are identification numbers for the record file only.

[2] One of the 29 HBCUs selected with certainty in 1990 was excluded because it had no currently funded R&D at the time the sample was taken.

[3] An independent medical school is a medical school with its own FICE code. An independent medical school may or may not be affiliated with a college or university.

[4] U.S. Government Printing Office (1993). Biosafety in Microbiological and Biomedical Laboratories (3rd Edition). Washington, DC: U.S. Government Printing Office.

[5] Note that the survey collected data on total repair/renovation projects costing between $5,000 and $100,000 for institutions’ S&E research facilities. These costs were collected for the institution as a whole and were not broken out by field.

[6] U.S. Government Printing Office (1993). Biosafety in Microbiological and Biomedical Laboratories (3rd Edition). Washington, DC: U.S. Government Printing Office.

