Historically, one purpose of the Facilities survey has been to allow analysis of changes over time. This continues to be a survey goal, but during the redesign process a greater priority was placed on data quality than on consistency with previous years. As a result, the redesigned survey may sometimes yield unreliable measures of change when 2003 data are compared with earlier years. Even when the requested data have not changed, the instructions and wording often have been revised to improve question clarity. These changes could lead some respondents to give different answers than they would have given under the original instructions. Over time, improvements in clarity should increase the reliability of responses by reducing error. In the short term, however, they may produce data that are not directly comparable with past responses. For example, if a type of research space previously had been underreported, then a change that results in more complete reporting would produce more accurate data but would also create the misleading impression that the space grew much more than it actually did.
Despite the risks, the redesign study proceeded with altering the wording for the following reasons: (1) the redesigned questionnaire should collect more consistent data across institutions, and thus provide a more useful database; (2) the redesign effort is not expected to be repeated for many years, so consistent time trends will become available in later years; and (3) improving the clarity of the questionnaire not only increases consistency across all institutions (because they should be more likely to interpret each question in the same way) but also can help to increase consistency over time within individual institutions (e.g., because a change in institution coordinator is less likely to result in a new interpretation of the question). With respect to this last point, data for previous years already contained many inconsistencies, so NSF is sacrificing less consistency over time than might at first be assumed.
A full analysis of the effects of the changes in wording will be possible only after the 2003 data are collected. Even then, it may not be possible to distinguish real changes from those caused by changed interpretations of the questions without contacting the institutions and verifying why their reports changed. The discussion in this section offers some preliminary expectations as to how or why the data might change, based on what institutions have said they provided in the past.
In theory, the total research space by field provided in response to question 2 should be equivalent to the totals provided in the past, even though the question has changed by adding new columns for laboratories, laboratory support space, offices, and other research space and subtracting columns for instructional space and leased space. However, there are several ways in which the revised questionnaire may help to remind institutions to include some categories of space that might previously have been omitted:
The questionnaire also gives greater prominence to handling shared space than did the earlier questionnaire. If space is shared between research and nonresearch activities, this change might lead to more space being prorated between the two categories rather than placed entirely in one or the other. The result could go in either direction, increasing or decreasing the amount of reported research space. If space is shared between two different science fields, this change should not affect the overall amount of space, but it might affect the distribution of space among the various S&E fields; again, for any particular field, the result could be either an increase or a decrease. It is therefore possible that although reported amounts of space might change on an institution-by-institution basis, the changes might balance out across multiple institutions.
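The proration rule described above can be sketched in a few lines. This is a minimal illustration, not the survey's actual procedure; the room size, field names, and use percentages are invented for the example.

```python
def prorate(nasf, use_shares):
    """Split a room's net assignable square feet (NASF) across categories
    in proportion to each category's estimated share of use."""
    total = sum(use_shares.values())
    return {cat: nasf * share / total for cat, share in use_shares.items()}

# A 1,000 NASF lab used 60% for research and 40% for instruction is
# prorated rather than assigned wholly to one category:
print(prorate(1000, {"research": 60, "instruction": 40}))
# {'research': 600.0, 'instruction': 400.0}

# Space shared between two S&E fields: the research total is unchanged,
# but its distribution across fields shifts:
print(prorate(1000, {"biology": 70, "chemistry": 30}))
# {'biology': 700.0, 'chemistry': 300.0}
```

As the second call shows, field-to-field sharing redistributes space without changing the total, which is why such changes might balance out across institutions.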
The net effect is likely to be an increase in the amount of reported space: the changes with respect to shared space might balance out, whereas improved completeness in reporting should uniformly increase the amount of space reported.
Although the question on facility condition is structured in a very similar manner to the question used in 1999, the timeframe is different. The revised questionnaire defines condition in terms of the next 2 years rather than the current year only. Strictly speaking, this makes the 2003 version a different question from the one used in earlier versions; thus, the answers from previous years are not directly comparable. Most likely, however, the responses would be highly correlated because institutions would not expect conditions to change radically over 2 years. Still, because research space tends to deteriorate and become outdated over time, there will probably be a tendency to say that some space currently in superior or satisfactory condition will not remain so within 2 years. Thus, there will likely be a negative effect, with space appearing to be in worse condition than it did in past surveys.
The change in question wording is intended in part to reduce institutions' likelihood of giving answers that they perceive to be beneficial to their own interests. Because either overestimating or underestimating the condition could be self-serving, depending on an institution's perspective, it is not clear whether such errors would balance out or would bias the results negatively or positively.
Because the effect of a reduction in self-serving answers is not clear, one cannot be sure of the net effect when both factors are combined. However, because the change in time frame itself might be predicted to have a negative effect, and because two of the three alternatives with regard to self-serving answers would either increase the negative effect (if institutions earlier tended to overestimate their condition) or render it unchanged (if the conflicting effects balanced out), the net effect might be more likely to produce lower condition ratings than previously.
The largest change in the questions on repairs and renovations was an increase in the threshold for determining which projects should be reported, from the previous $100,000 to $250,000. After taking inflation into account, this is not a tremendous increase from when the survey was first conducted. However, it is a substantial change compared with 1999. It is likely that fewer projects will meet the threshold; thus, the total amounts reported for repairs and renovations should decrease compared with 1999.
The 2003 questionnaire discusses more extensively how to handle shared space and explains that a project must meet the threshold within an individual field. Past respondents may sometimes have included projects whose total costs met the threshold but that did not meet it in any one field. This change may also lead respondents to the 2003 questionnaire to report fewer projects than in previous years, reinforcing the likelihood that reported renovation costs will decrease compared with 1999.
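The per-field threshold rule can be illustrated with a short sketch. The $250,000 threshold is from the survey; the function, field names, and dollar amounts are hypothetical.

```python
THRESHOLD = 250_000  # 2003 per-field reporting threshold, raised from $100,000

def reportable_fields(costs_by_field, threshold=THRESHOLD):
    """A project is reported only for fields whose individual cost meets
    the threshold; a project whose combined total crosses the threshold
    but whose per-field costs do not is excluded entirely."""
    return {field: cost for field, cost in costs_by_field.items()
            if cost >= threshold}

# Total cost is $300,000, but no single field reaches $250,000,
# so under the 2003 rule nothing is reported for this project:
project = {"chemistry": 180_000, "physics": 120_000}
print(reportable_fields(project))  # {}
```

Projects like this one, which past respondents may have reported on the basis of their combined total, would be excluded under the clarified 2003 instructions.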
Questions on new construction share the same two changes mentioned in the previous section on repairs and renovations: the threshold was increased to $250,000, and changes to the instructions on shared space may lead to the exclusion of some projects for which only the combined total was more than $250,000. Both changes would tend to reduce the funding amounts compared with 1999. However, changing the threshold may not be as important for new construction as for repairs and renovation because any new construction may be more likely to exceed the minimum threshold.
Another change to the questionnaire might work in the opposite direction. The questionnaire now asks for a separate project sheet for each project rather than a single summary sheet combining all projects. Because institutions must declare the number of projects and then provide information on each one, this process might help prevent some projects from being accidentally excluded. It may also catch the opposite error of listing projects that individually fail to meet the threshold: some institutions may have given total construction costs without weeding out projects that did not qualify (though $100,000 is a very low limit for new construction). If such errors are made on the project sheets, they can be identified from the additional detail that will be available on a project-by-project basis.
Most of these changes would lead to a reduction in the amount of reported construction costs and NASF reported for new construction, so the net effect might be a reduction overall. However, the lesser importance of the threshold for new construction (as compared with repairs and renovation), combined with the possibility that some large projects will be reported that might accidentally have been omitted, makes the net effect harder to estimate.
These sections are quite similar to the section on repairs and renovations in the past 2 years, so one might anticipate similar results. The increased threshold, combined with changes in the instructions on shared space, could be predicted to lead to reduced estimates compared with 1999.
These sections have a similar structure to the section on planned repairs and renovation, and one might therefore expect reduced estimates compared with 1999. However, the threshold may be less important for new construction, so there may be less change.
A respondent could think it best to overestimate the condition as a way of showing the institution's research capabilities. On the other hand, he or she might underestimate the condition as a perceived way of seeking increased Federal aid.