Table of Reports
Executive Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Report from the Technical Review Panel . . . . . . . . . . . . . . . . . . ix
Final Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi

Evaluation of the Division of Undergraduate Education’s
Course and Curriculum Development Program

Executive Summary

for
Division of Research, Evaluation and Communication
Directorate for Education and Human Resources
National Science Foundation
Arlington, VA 22230

by
Jeffrey W. Eiseman
James S. Fairweather
Sheila Rosenblum
Edward Britton

Prepared by The NETWORK, with SRI International, pursuant to NSF
Contract No. RED-9255379, Conrad Katzenmeyer, Project Officer

The views expressed in this report do not necessarily reflect the position or
policy of the National Science Foundation, and no official endorsement by the
National Science Foundation should be inferred.

Executive Summary
The conclusion of this independent evaluation is:
“The Course and Curriculum Development program serves as a
major force towards reforming undergraduate education in
mathematics, science, and engineering. The evidence presented
and discussed demonstrates that this program is achieving its
ultimate goal of increasing students’ understanding of, and
attitudes toward, these disciplines. There is also evidence that
funds invested to develop projects at a relatively small number
of institutions have had noticeable impacts on other institutions
throughout the country.”

The Course and Curriculum Development (CCD) program is administered by the Division of
Undergraduate Education (DUE) at the National Science Foundation (NSF). The CCD program awards
grants for developing undergraduate courses and sequences of courses in mathematics, science, and
engineering. Its main objectives are:

to improve the content, conduct, and quality of undergraduate teaching;
to increase student understanding of, and improve student attitudes toward, mathematics, science, and
engineering; and

to contribute to a shift in academic culture so that colleges and universities place greater value on undergraduate
teaching, and on scholarship related to undergraduate education.

From 1988 through June 1996, the CCD program awarded $102 million through nearly 800 CCD grants at
360 institutions of higher education, including research universities, comprehensive universities, liberal
arts colleges, and community colleges. Small and large grants were awarded for innovations varying in
size, structure, and scope. The innovations focused on changes in course structure, content, and pedagogy.
Three kinds of grants were awarded: grants to develop and implement course or curriculum changes,
grants to adopt or adapt courses and materials developed by CCD projects at other institutions, and grants to
encourage other institutions to adopt or adapt CCD-developed courses and materials.

Accomplishments
The CCD program has been largely successful in accomplishing its major goals related to participating
faculty and students, as elaborated below. The program affected host institutions as well, but in
most departments, this impact was limited. The program was also successful in affecting undergraduate
education in many institutions that did not receive CCD funds.
The program’s accomplishments in all these areas are particularly impressive since many projects
encountered institutional cultures that assigned lower priority to undergraduate education,
especially the education of non-science majors, than to other institutional functions. In these environments,
there were relatively few incentives for faculty to engage in educational innovation except the intrinsic
desire to improve education for students.
Some of the most important study findings are:

Impact on students. Faculty and students reported that students deepened their understanding of the scientific
approach to problems; expanded their competence in applying concepts, principles, and theories; enhanced
their competence in using methods and equipment; and furthered their interest in and comfort with science,
mathematics, computer skills, and laboratory equipment.

Impact on faculty. When principal investigators were asked how those faculty who were most affected by the
project had changed:

• 75 percent reported that they were now spending more time on teaching undergraduates;
• 83 percent said that they now collaborated more with peers about teaching;
• 75 percent said they were now using non-traditional methods to assess student attitudes and learning; and
• 96 percent said that they had changed the way they thought about teaching and learning.

The case studies provided further evidence that CCD projects were often unique, powerful, and transforming
experiences for participating faculty. For example, one faculty member commented that the project led to
the “most energetic, intellectual conversations” he had had in 20 years.
Impact on departments. More than two-thirds of principal investigators reported that their departments had
made a formal commitment to use CCD project activities and materials on a long-term basis. Over a third
reported that their department’s commitment to undergraduate education had increased, at least partly because
of their CCD project. On the other hand, while some departments that had implemented CCD calculus projects
had extended project principles and procedures to other mathematics courses, few departments in other
disciplines had extended the approaches implemented in CCD projects beyond project courses. Fewer still had
changed their reward structures to reflect a more equal emphasis on research and teaching.

Impact on other institutions. Ninety-six percent of survey respondents reported that they developed materials
or other products that can be used by faculty at other institutions. A similar percentage reported that they
conducted some activities to communicate their experiences and insights to colleagues elsewhere. Efforts to
assess the extent of use by faculty who did not receive grants but who requested information from, purchased
materials developed by, or attended workshops conducted by CCD grant recipients, support the premise that
funds invested to develop projects at a relatively small number of institutions have had noticeable impact on
other institutions throughout the country. In addition, the training and experience of teaching assistants, postdoctoral
fellows, and visiting faculty contributed to CCD’s longer term national impact — these individuals
transferred their knowledge of education innovation to other institutions.

The outside content specialists concluded that all the innovations visited are scientifically and
mathematically sound. Also, in their judgment, most incorporate pedagogical principles advocated by
education reformers.
According to the survey results, NSF funds played a critical role in the development and
implementation of most of the innovations studied. Eighty-one percent of the PIs estimated that without
NSF funds, they would not have been able to implement more than half of their project agendas.

Factors Affecting Success
Project success was assessed with respect to: how properly and skillfully faculty were implementing the
innovation, how favorable the outcomes were, and how likely it was that the innovation would continue at
the site. A number of factors were associated with one or more of these measures of success. Awareness of
these factors may provide lessons for the design of successful projects. They include:

the “fit” with the local context (including the local receptivity to, and support for, the change, and the extent of
the match with the institutional mission);

the positions of, and departmental respect for, key project personnel;
a focus on how content is taught;
faculty mastery of a project’s instructional methods;
training for faculty and TAs;
ongoing monitoring (including building in feedback and revision cycles);
regular communication with faculty and relevant administrators; and
attention to management issues.

Suggested Modifications
To increase the CCD program’s national impact, the study team suggests the following modifications:
Increase the proportion of grants dedicated to outreach — encouraging other institutions to adopt successful projects.

Award grants for conducting summative evaluations of promising debugged and stabilized innovations in
order to help NSF program officials decide which proposals for dissemination are most deserving of support.

Award grants to provide technical assistance to CCD grant recipients, especially first-time PIs, so that NSF funds
will not be wasted in trial and error cycles addressing problems that have already been successfully solved by
others. Several PIs would benefit from technical assistance related to project functions such as: curriculum
development, student assessment, formative and summative evaluation, faculty training, organizational change
issues, “scaling-up” pilot projects beyond the faculty who participated voluntarily, and dissemination.

Develop guidelines for proposals and proposal reviewers that reflect the findings of this evaluation regarding
which project features are most consistently associated with successful implementation, favorable outcomes,
and good prospects for continuation.

Methods
The NETWORK, Inc., as a subcontractor to SRI International, conducted the evaluation of CCD from
1993 to 1996. The evaluation consisted of telephone interviews with 43 national leaders in science,
mathematics, and engineering education; a survey of principal investigators who received awards between
1988 and 1993 (of whom 345, or 80 percent, returned the 12-page questionnaires); a separate survey of
unfunded applicants; and case studies based on visits to 33 institutions representing 25 projects. The case
study sample included mathematics (both calculus and non-calculus), engineering, computer science, life
science, physical science (chemistry, geology, and physics), and multidisciplinary projects. It also included
projects at community colleges, liberal arts colleges, comprehensive universities, research universities, a
women’s college, and an Historically Black College or University (HBCU). Small and large projects,
projects conducted at single institutions, and others that were part of consortia were visited. Case studies
were conducted by teams consisting of one member of the evaluation team’s core staff and one or two
outside content specialists. Most visits lasted for two days, and included observations of classes and
interviews with principal investigators, deans, department heads, and faculty.

Conclusion
The CCD program serves as a major force towards reforming undergraduate education in mathematics,
science, and engineering. The evidence presented and discussed demonstrates that this program is
achieving its ultimate goal of increasing students’ understanding of, and attitudes toward, these disciplines.
There is also evidence that funds invested to develop projects at a relatively small number of institutions
have had noticeable impacts on other institutions throughout the country.

Evaluation of the Division of Undergraduate Education’s
Course and Curriculum Development Program

Report from the Technical Review Panel
for
Division of Research, Evaluation and Communication
Directorate for Education and Human Resources
National Science Foundation
Arlington, VA 22230

by
Alan Tucker, Chair
Sally Chapman
Charlene D’Avanzo
Edward W. Ernst
Frank B. W. Hawkinshire, V
Judith F. Tavel
Kenneth L. Verosub

Prepared by The NETWORK, with SRI International, pursuant to NSF
Contract No. RED-9255379, Conrad Katzenmeyer, Project Officer

The views expressed in this report do not necessarily reflect the position or
policy of the National Science Foundation, and no official endorsement by the
National Science Foundation should be inferred.

Overview
The Course and Curriculum Development (CCD) program in the NSF Division of Undergraduate Education
is doing an excellent job in advancing its goals of promoting quality learning and teaching in science,
mathematics, and engineering. It is serving as an effective agent for change, leveraging modest funds to
stimulate innovation in instructional practices. The most visible success of the CCD program has been in
calculus. Calculus instruction has been estimated to be a half-billion-dollar-a-year “industry.” By investing
about $3 million per year for six years, the CCD program has helped to change the content and pedagogy of
calculus instruction that hundreds of thousands of students receive annually.
While initially focused on curriculum, CCD funding is now helping to promote and support a
generally heightened interest in pedagogy, as well as in new modes of student learning, and in assessing
student performance in undergraduate science, mathematics, and engineering education.
In the future, the CCD program should work to increase further the attention given to aspects of
pedagogy and assessment in proposals. Also, a larger proportion of CCD funds should be devoted to
dissemination and to faculty training in order to promote broader adoption of the quality CCD materials
already produced. Specially targeted projects may be needed to determine how to change the academic
culture so that it places more value on teaching.

Background
The need for reform in undergraduate science, engineering, and mathematics curricula was raised in numerous
disciplinary and federal reports. The leadership of the National Science Foundation has been responsive to these
long-standing and varied concerns, as reflected in the National Science Board’s Neal Report, Undergraduate
Science, Mathematics and Engineering Education (1986), and the recent document developed by the NSF Advisory
Committee for Education and Human Resources, Shaping the Future: New Expectations for Undergraduate
Education in Science, Mathematics, Engineering and Technology (1996).
The CCD program was established in 1988 — inspired in part by the Neal Report’s recommendations
— at a time when studies were revealing disturbing trends in undergraduate science education. For example,
UCLA’s Cooperative Institutional Research Program surveyed over 300,000 college students in the late
1980s and found that interest in science as a major had dropped dramatically since 1970. The National
Commission on Excellence in Education’s A Nation at Risk (1983) had previously documented growing
scientific illiteracy among Americans. The CCD program was a timely response to the need for sweeping
reform of undergraduate teaching for all students.
When the program was established, there were too few innovative courses that were proactively
responding to this need with models of student-centered and student-active instruction. Therefore, the first
step by CCD was to fund initiatives for such course development. Now, partly due to the success of the
program, there are excellent examples of courses and curricula in which students learn to think critically
and experience science as a process.
At this point, eight years after the inception of the CCD program, the needs of science,
mathematics, and engineering faculty have expanded. For a variety of reasons, increasing numbers of
faculty want and need to learn how to be more effective teachers. This ready audience will greatly benefit
from learning about innovations already in place. But college and university faculty are especially
resistant to changing how they teach, and programs of instructional assistance for these teachers must go
far beyond the mechanics of course and curriculum design. For example, three of the 25 visited
programs relied on traditional teaching methods (but changed what they taught). That these three
programs were developed by faculty at the forefront of instructional reform demonstrates the difficulty
for some in changing the “I taught, so they learned” approach to teaching.

Impact
The CCD program has had impact on students, faculty, departments, institutions, and disciplines.

Impact on Students
The focus in undergraduate education reform has been shifting from adding new content to promoting
student learning. This observation is consistent with results from data obtained through telephone
interviews conducted at the beginning of the evaluation with 43 leaders in mathematics, engineering, and
science education. The emphasis the CCD program has placed on student engagement and on what students
are learning has greatly aided the broad acceptance of this perspective.
On the basis of responses by 345 PIs to survey questions, the impact on students may be
summarized as follows (see Table 11 on page 31 of the CCD Evaluation Report for further information):

increased understanding of the scientific approach to problems;
increased competence in applying concepts, principles or theories, in using methods or equipment, and in
working in teams with other students;

increased interest in, or comfort with, the science taught, the mathematics involved, the computer skills
needed, and the laboratory or field equipment used.

These are higher level thinking and behaving skills. They extend beyond the mere “transmission
of information” that appears to be the basis of teaching in science, mathematics, and engineering courses.
From these survey findings, and from interviews with faculty and students during case study visits to 25
projects, panel members concluded that most CCD projects are successful in helping students think more
deeply, ask more thoughtful questions, and, in general, think more critically.
The use of student-centered and active-learning instructional modes (e.g., cooperative learning)
was most effective in stimulating students and improving their learning. Students interviewed during the
case study visits confirmed that most of them enjoyed the student-centered learning environment and
believed they learned better. They felt more involved in their education. Some had the opportunity to plan
and carry out their own experiments. This was an experience they had not been afforded previously.
Case study visitors noted several actions project faculty had taken to enhance student learning.
Many of these efforts should improve the motivation and capability of students for lifelong learning, a
capability greatly needed for a successful career in mathematics, engineering, or science.
In most of the projects visited, there were significant changes in the ways that faculty assessed
student learning. Rather than simply asking students about the facts that may have been learned, faculty
seemed to be placing more emphasis on asking about the understanding that lay beyond the facts. We hope
that this kind of assessment will become the norm in the future (see “Recommendations” below).
Many sectors of the nation’s economy have placed greater emphasis on the quality of their
products and services. The cornerstone of these efforts at improved quality is an emphasis on knowing and
meeting the needs of one’s customers. Since students are the most important customers for undergraduate
education services, faculty need more input from students.

Impact on Faculty
The impact on the faculty was the most apparent of all of the CCD program outcomes. Faculty
involved in the projects reported that they were “energized” by the involvement. As shown through PI
survey ratings and in interviews conducted during case study visits, faculty were more engaged in all
aspects of their teaching. They were especially concerned about the following:

What students were learning. “Some instructors found themselves thinking throughout the day about the
kinds of errors their students made. Evidence of misconceptions prevented instructors from maintaining
the assumption that had shaped much of their past instructional practice: If their lectures were clear and
well organized, then their students would learn” (Evaluation Report, p. 30).

How they thought about teaching. “Ninety-six percent of the principal investigators reported that the faculty
who were most affected by the project changed their conceptions of teaching and learning, and 84 percent
reported that some additional departmental colleagues also changed their conceptions. Moreover, from other
questions in the survey, the nature of this conceptual change is in line with planning and implementing more
effective active and collaborative learning strategies” (Evaluation Report, p. 30).

In short, these faculty were shifting their emphasis from the narrow concern of content alone to the
larger issues of facilitating student learning. As a result: “75 percent of the grant recipients reported that
the faculty who were most affected by the project now spend more time on teaching undergraduates, and
44 percent reported that some additional departmental colleagues also do so” (Evaluation Report, p. 29).
More important, they were more actively engaged with their students, and they found that the attention
they were paying to their teaching made teaching more rewarding and enjoyable. The other side of
the coin is that the students responded favorably to the changes and to increased faculty attention paid to
them. This points to the importance of faculty-student interaction in making teaching more effective
and promoting active learning.
There were only a few instances where changes in instruction did not lead to such benefits. It
appears that students did not receive the hoped-for benefits because: (a) faculty did not clearly understand
why they were making the instructional changes, (b) faculty had not achieved a sufficient level of
mastery of the instructional skills involved, and/or (c) student workloads were unrealistic. These instances
are not incompatible with the conclusion that follows from the large majority of study data, nor with what
other studies over many years have suggested: that faculty need to be thoughtful and attentive teachers
who focus on both what and how their students learn. The benefits that accrue from this kind of
attention are important for both teachers and students.
The net impact on faculty of CCD projects can be summarized as a paradigm shift — a shift away
from simple transmission of facts to focus on how students learn. It is important to note that this change
was not easy for the faculty, and they often felt uncomfortable with the demands that this change brings.
Nevertheless, change is happening, and faculty are recognizing the benefits of this shift in emphasis.
Further, when faculty members receive a CCD grant, their self-esteem is enhanced and they feel supported
in their commitment to teaching.
Finally, as indicated in the CCD evaluation report, CCD projects often have a delayed effect on
faculty, an impact that can easily be missed: After receiving information about new instructional materials,
faculty in many cases thought about the new ideas privately for a while and did not use them in their teaching
until one or two years later. Or they tried modest experiments and after achieving success, they were ready
to adopt innovations that involved more fundamental reform.

Impact on Departments
One of the two primary goals of the CCD program is: “to contribute to a shift in academic
culture so that colleges and universities place greater value on undergraduate teaching and on scholarship
related to undergraduate education.” Changes in the academic culture can take place at departmental,
institutional, or disciplinary levels. There is clear evidence that some CCD projects were effective in
changing the academic culture at the departmental level. In some cases, change at that level was the
objective of the project. In such cases, the CCD grants supported changes in one course or a group of
courses to serve as a catalyst for reforms on a broader scale. For example, some calculus projects led to the
revitalization of entire departments.
Faculty involved in CCD projects often became change agents within their departments and
institutions, and found themselves interacting in a more meaningful way with their colleagues. As stated in
the evaluation report, “83 percent of the principal investigators reported that the faculty who were most
affected by the project now collaborate more with their peers regarding teaching, and 65 percent reported
that some additional departmental colleagues also do so” (p. 29). Thus, teaching became an important
topic of conversation and affected the climate outside the particular CCD project.
The success of a new or innovative approach to teaching was clearly an important factor in causing
non-involved faculty members to examine their own teaching methods and to be more receptive to
reforms. Also, changes in one or more courses often created a need to find new approaches to teaching
successor courses. However, there were successful projects that did not lead to change on a larger scale
and the causes for these variations need to be examined in more detail.

Impact on Institutions
At the institutional level, the CCD projects contributed in several ways to the ongoing shift in
academic culture. First, the existence of the CCD program is tangible evidence that NSF supports efforts to
place greater value on undergraduate teaching. Since CCD grants carry NSF’s imprimatur, they serve as an
effective vehicle for communicating this value. In part, they signal NSF’s interest in persuading institutions
to devote greater attention to teaching by increasing the visibility and enhancing the stature of the principal
investigator. This phenomenon was most obvious at institutions where extramural funding was uncommon
among the faculty.
Second, CCD projects have demonstrated that alternative methods of teaching mathematics,
science, and engineering can be developed and can be successfully taught. Such demonstrations can have
an impact at the institutional level. For example, one engineering project has begun to change the culture
of undergraduate education at the whole institution; for the first time, an engineering course is being
included in the general education requirement. More generally, the “tracer study” component of this
evaluation supported the premise that “funds invested to develop projects at a relatively small number of
institutions have noticeable impact on other institutions throughout the country” (Evaluation Report, p. 42).

Impact on Disciplines
In the past ten years, there has also been growing pressure for reform of undergraduate education
from within disciplines and from professional societies. It is now quite common to see many sessions and
workshops on education at professional meetings. CCD projects are featured in these sessions or serve as
the basis for these workshops.

However, many courses for both majors and non-majors are still primarily content-driven rather
than concept-driven. As the body of knowledge in each field increases, so does the amount of information
that students are being asked to learn. For example, a typical text for an introductory college chemistry
course now has as many as 2,000 bold-faced terms for students to learn. Expecting faculty to teach
and students to learn such large volumes of material creates serious problems at both the practical and the
pedagogical level. The CCD program can and should take a major role in helping disciplines address the
questions of what objectives their curricula should establish and how those objectives should be achieved.

Recommendations
The panel made recommendations in eight areas.

Promote factors for successful reform. This study revealed how multiple factors contributed to
successful curricular reforms. In the evaluation report, these factors were grouped under the headings of
(i) the local context, (ii) the process, (iii) management and logistics, and (iv) the characteristics of the
innovation. Both CCD applicants and review panels should be aware of the importance of these factors
when writing and reviewing proposals. These and other findings in the evaluation report should be of great
value to current and prospective CCD PIs. This review panel strongly encourages broad distribution of the
findings of the CCD evaluation report. Summaries should be placed in key publications, such as Science
and professional society newsletters.
Particular attention should be given to the characteristics of proposed reforms. The study team
identified five innovation characteristics associated with curricula that were rated by case study visitors as
“most effective”:

Course content had a high level of coherence; the parts had a logical sequence from beginning to end.
Teaching emphasis was placed on broad concepts and key principles, not small details.
Students developed progressively deeper comprehension of key concepts, principles, theories, and/or data sets
by revisiting the same concepts throughout the curriculum via a wide range of contexts and examples; the
contexts and examples that came later tended to be less familiar, and more abstract or complex.

Successful projects had faculty who had mastered the pedagogy called for in the project design.
Whether or not projects reformed what was taught, changes rated by visitors as most likely to be continued
almost always involved reform of how content was taught.

Promote faculty training and continued support. The case study visits gathered convincing evidence
that high-quality faculty (and TA) training and continued support following training were critical for
success. Thus, we recommend a greater emphasis on workshops to train faculty, and on follow-up sessions to
support them, so that they become proficient in CCD project activities. If faculty
and TAs use project materials without workshops and follow-up support activities, the hoped-for results are
unlikely to be obtained.
Local workshops were often run by department faculty who had themselves attended workshops
about the new materials and instructional strategies. However, three models for helping faculty develop
the requisite knowledge and competencies were consistently successful: (1) having outside experts
conduct field-tested workshops, (2) having local faculty oversee workshops that feature national experts,
and (3) an apprentice model in which local faculty members who had previously taught project courses teamed
with partners who had not yet done so.
When reviewers judge the merits of proposed faculty training, they should employ the
nine elements in Figure 4 (page 47) of the evaluation report that the study team indicated as necessary,
although not sufficient, for a model program. CCD staff can use these same elements to determine if time
and funds budgeted for faculty training are realistic and will encourage teaching mastery.

Promote training in assessment. Assessment is another area where faculty need training and
technical assistance to help them adopt, adapt, and/or create strategies and techniques to gather data on
student and course outcomes. Proper selection, modification, and creation of instruments will permit
discovery of the levels of knowledge and skills obtained by students. However, newly created instruments
must match the mode of instruction. If students acquire knowledge and skills in the laboratory, then
assessment of the levels of knowledge and skills learned should be conducted through appropriate hands-on
tasks. Knowledge acquired through the manipulation of symbols should be assessed in the symbolic mode.
If multiple instructional modes are employed, then active, symbolic, and iconic (pictures, maps, graphs,
and models) modes should be employed in a mixed assessment strategy. Whatever assessment devices
are selected, they must be workable in terms of the size and format of the course. Faculty should also
learn how to gather and interpret data from multiple sources such as journals, team member ratings, and
videotaped performances of students completing representative tasks.

Promote dissemination. There should be more workshops to disseminate information about
recently developed CCD instructional materials. While new development efforts continue to be needed,
dissemination should play a co-equal role. For example, the highly successful calculus reform initiative
made heavy use of workshops and short courses at national and regional professional meetings and
free-standing events. When a department was ready to try reform materials, there were typically two or
three faculty who collectively had attended half a dozen workshops about calculus reform.

Promote multidisciplinary efforts. The CCD program should expand efforts to promote
multidisciplinary curriculum development. It is also important within disciplines to provide
cross-disciplinary perspectives. Too often, faculty seek to prepare their majors to become discipline-focused
clones of themselves. Yet in the future, scientific enterprises and the business world will need workers
who can draw on perspectives and modes of reasoning from several disciplines. The CCD program can be
an agent of change to help better align faculty teaching goals with students’ needs.

Address the needs of underrepresented groups. A major NSF educational goal was to broaden the
population of students pursuing careers in science, mathematics, and engineering to include traditionally
underrepresented groups. It was disappointing that only 31 percent of the PIs surveyed indicated that they
took any specific steps to address this goal. Program guidelines and staff may need to publicize this goal
more aggressively.

Provide assistance for PIs. To enhance the effectiveness of CCD grants, the review panel
recommends that NSF organize regional meetings of CCD PIs to promote an interchange of
information and concerns about assessment, new modes of instruction, research into student learning,
strategies for changing the academic culture, and related issues that cut across all disciplines. Another
approach is to encourage new PIs to visit a nearby mature CCD project for “start-up” help.
Most projects would also benefit from technical assistance on administrative issues, such as
planning and monitoring implementation, scaling-up a successful project staffed by volunteer faculty to a
larger group of perhaps more skeptical colleagues, and helping faculty on other campuses adopt a complex
project. We encourage DUE to consider offering some kind of technical assistance.

Conduct research in undergraduate education. The panel believes that three kinds of research
are needed:

Assessment. More research is required to identify various ways of evaluating student performance. This need
is especially relevant now that curriculum reformers are focusing on obtaining evidence of higher levels
of student cognitive functioning. These newer techniques must deal with data collected longitudinally and
at the end of courses. Additional research is also required to identify ways to minimize the labor-intensiveness
of some assessment methods. These techniques should permit rapid gathering and analysis of
data, and the transformation of the results into performance profiles for each student and the entire class.

Changing the academic culture. The study team’s report stresses the difficulty in changing the academic
culture. It noted that multiple forces determine the way academia functions. While much research has been
done on creating organizational change, most of it has been done outside of academia. Studies are needed to
understand better the psychological barriers and institutional impediments to changing individual faculty
behavior.

Dissemination. Dissemination of curricular reforms is another area requiring further research. Future CCD
project directors would benefit from more detailed, and empirically based, information about what makes for
successful acceptance of curricular innovations and what types of barriers hinder adoption of innovations.
Funding of these research questions would seem to call for a collaborative effort by CCD with other DUE and
EHR units.

Quality of the CCD Program Evaluation
The worth of any program evaluation depends on the methods employed. There were four methodological
features that strengthened this study. First, the evaluators collected data from multiple sources using
multiple methods. They conducted interviews with leaders of reform efforts in their disciplines. They
then used these findings to design survey questionnaires for CCD grant applicants who did, and did not,
receive funding. They also formed case study teams to visit selected colleges and universities, where
visitors reviewed documents, observed classes, and conducted individual and group interviews
and discussions to form the basis of their ratings.
Second, they defined success in terms of multiple dimensions. Third, they included members of
the review panel on visiting case study teams. Consequently, panel members were able to gain direct
knowledge about several projects: they observed classes to see how they were taught, and they questioned
students, faculty, and administrators about their experiences with reforms. This permitted them to form
independent judgments about the balance of the materials presented in the written report. Fourth, the evaluators
systematically cross-validated data across the multiple methods and sources employed. The combination
of these factors, along with the fact that the findings from various methods and sources were mutually
consistent, increases our confidence in the findings, and in the recommendations based on them.
Collectively, they contribute to making the evaluation report a document of substantial value to NSF and to
current and prospective CCD grantees.

Summary
The CCD program can claim credit both for contributing to specific instructional reforms and for promoting
a greater general concern about better undergraduate science, mathematics, and engineering education.
Now, the CCD program should use this study to refine its guidelines for existing types of project funding
while also moving its emphasis towards greater dissemination, more faculty training, and better student
assessment.

Evaluation of the Division of Undergraduate Education’s
Course and Curriculum Development Program

Final Report

for
Division of Research, Evaluation and Communication
Directorate for Education and Human Resources
National Science Foundation
Arlington, VA 22230

by
Jeffrey W. Eiseman
James S. Fairweather
Sheila Rosenblum
Edward Britton

Prepared by The NETWORK, with SRI International, pursuant to NSF
Contract No. RED-9255379, Conrad Katzenmeyer, Project Officer

The views expressed in this report do not necessarily reflect the position or
policy of the National Science Foundation, and no official endorsement by the
National Science Foundation should be inferred.

Table of Contents
Chapter One: Introduction . . . . . . . . . . . . . . . . . . . . . . . . 1
Chapter Two: The Nature of the Innovations . . . . . . . . . . . . . . 6
Size, Structure, and Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Complexity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Key Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Changes in Teaching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
New Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
New Assessment Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Chapter Three: Change Strategies, Training, Evaluation, and Transfer 14
Change Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Formative Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Summative Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Promoting Adoption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Division-Level Support of Communication and Transfer . . . . . . . . . . . . . . . . . 20
Communication Activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

Chapter Four: Success . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Soundness of the Innovation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Completeness and Proficiency of Implementation . . . . . . . . . . . . . . . . . . . . . 28

Impact on the Institutions Receiving Awards . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Impact on Faculty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Impact on Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Impact on Departments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

Extent of Impact Beyond Funded Institutions . . . . . . . . . . . . . . . . . . . . . . . . . 34
Awareness and Use of DUE Products and Materials . . . . . . . . . . . . . . . . . . . . 36
Results from a Tracer Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Results from Visits to Six Campuses That Did Not Receive Funds . . . . . . . . . . . . 38
Results from an Evaluation of an Electronic Dissemination Effort . . . . . . . . . . . . 39
Commentary on the Five Kinds of Evidence . . . . . . . . . . . . . . . . . . . . . . . . 40

Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

Chapter Five: Site Dynamics and Factors Associated with Success . . . 43
Factors Associated with Implementation and Continuation . . . . . . . . . . . . . . . . . . 43
Context Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Process Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Management and Logistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Innovation Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Summary of Factors Associated with Implementation and Continuation . . . . . . . . . 50

Factors Associated with Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Impact on Faculty . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Impact on Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Impact on Departments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

Chapter Six: Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 59
How Effectively are CCD’s Objectives Being Achieved? . . . . . . . . . . . . . . . . . . . 59
Student Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Institutional Outcomes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

What was Learned about Factors that Affect Project Effectiveness . . . . . . . . . . . . . . 62
The Nature of Innovations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Faculty Mastery of Instructional Features . . . . . . . . . . . . . . . . . . . . . . . . . 63
Support for Faculty Mastery of the Innovation . . . . . . . . . . . . . . . . . . . . . . 63
Involvement of Respected Colleagues . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

What Modifications Might Make CCD More Effective . . . . . . . . . . . . . . . . . . . . . 64
The Mix of Grants Awarded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Guidelines for Scale-Up and Adoption Proposals . . . . . . . . . . . . . . . . . . . . . 66
Orienting Proposal Reviewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

Summary and Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

List of Figures
Figure 1: A Model for Assessing the Effectiveness of the CCD Program . . . . . . . . . . . . . . 3
Figure 2: Sources for Research Relevant to Promoting Undergraduate Learning . . . . . . . . . 10
Figure 3: Steps Taken to Serve Underrepresented Populations More Effectively . . . . . . . . . 11
Figure 4: Elements Provided in Effective Training Programs . . . . . . . . . . . . . . . . . . . 47
Figure 5: Factors Associated with Implementation and Continuation . . . . . . . . . . . . . . . 51
Figure 6: Outcomes in the Student Gains Scale for Engineering and Science Projects . . . . . . 53
Figure 7: Changes in Teaching Associated with Student Gains, by Discipline . . . . . . . . . . 54
Figure 8: Factors Associated with Departmental Impact . . . . . . . . . . . . . . . . . . . . . . 56
Figure 9: Actions Consistently Associated with Successful Scale-Up and Transfer . . . . . . . . 57
Figure 10: An Integrated View of Project and Site Dynamics . . . . . . . . . . . . . . . . . . . 60

List of Tables
Table 1: Distribution of Visits by Duration and Institutional Type . . . . . . . . . . . . . . . . . 2
Table 2: Ranges for Half the Projects, Along with CCD Program Statistics . . . . . . . . . . . . 7
Table 3: Proportion of Projects Reporting Increases in Teaching Activities, by Discipline . . . . 10
Table 4: Percent of Projects That Developed Various Materials and Products . . . . . . . . . . . 12
Table 5: Reported Primary Project Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Table 6: Percent of Projects Reporting Departmental Discussion of the Projects . . . . . . . . . 22
Table 7: Percent of Projects Reporting Use of Various Communication Methods . . . . . . . . . 22
Table 8: Percent Reporting Increases in the Amount of Time Faculty Spend on Teaching . . . . 29
Table 9: Percent Reporting Increases in Amount of Collaboration Related to Teaching . . . . . 29
Table 10: Percent Reporting Changes in Faculty Conceptions of Teaching and Learning . . . . . 30
Table 11: Percent Reporting that More Students Achieved Valued Outcomes . . . . . . . . . . . 31
Table 12: Reported Familiarity with DUE Products, by Award Outcome . . . . . . . . . . . . . 36
Table 13: Percent of Unfunded Adopters Who Reported Success, by Benefit . . . . . . . . . . . 38

List of Sidebars
Calculus Reform: A Major CCD Initiative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Some Factors Affecting Diffusion and Adoption of Instructional Innovations in Higher Education . . 58

Chapter One: Introduction
The National Science Foundation’s Course and Curriculum Development (CCD) program awards
grants for developing courses and sequences of courses in mathematics, science, and engineering. Its
purposes are:

to increase student understanding of, interest in, and comfort with mathematics, science, and engineering,
especially for students from underrepresented populations; and

to contribute to a shift in academic culture so that colleges and universities place greater value on undergraduate
teaching, and on scholarship related to undergraduate education.

The first CCD grants were awarded in 1988. For the first three years, awards were made for
projects focusing on either calculus (including pre-calculus) or engineering. From 1991 on, awards were
also made for projects focusing on science, and for mathematics projects unrelated to calculus.
Beginning in 1993, The NETWORK, Inc., a subcontractor to SRI International, conducted an
evaluation of CCD. The evaluation was overseen by a technical review panel consisting of seven faculty
members who have been at the forefront of undergraduate educational innovation in their respective fields.
The panel provided feedback to the evaluation team regarding its goals, sampling plan, data collection
plans, instruments, and draft reports. The evaluation consisted of four components:

Telephone interviews with leaders. During 1993-1994, 43 nationally known leaders in mathematics, science, and
engineering education were interviewed. Among other things, they were asked to describe the outcomes that they
valued (for undergraduate students and institutions) and which changes in teaching they thought would help to
enable a greater number of students to achieve those objectives. The findings from these interviews were described in
Eiseman’s Interviews with Leaders in Education, Science, Mathematics, and Engineering (Andover: The NETWORK,
1994), and were used in constructing the questionnaires described immediately below.

Surveys. In 1995, 429 questionnaires were sent to principal investigators (PIs) who received awards from
1988 through 1993. In addition, 11 questionnaires were sent to co-PIs located at other institutions. Responses
were received from 345 individuals, representing 335 projects. For fourteen of the questionnaires, a response
could not reasonably be expected: six PIs had died, retired, or left their institutions and could not otherwise be
traced; three had not yet made enough progress on their projects for the questionnaire to be meaningful; and five
had projects that fell outside the study’s scope — e.g., two served secondary students rather than undergraduates.
The response rate was therefore between 78 and 81 percent, depending on whether these 14 cases were included
in or excluded from the total (a brief check of this arithmetic follows this list of evaluation components).

At the same time, 350 questionnaires were sent to a sample of individuals who, between 1988 and 1993,
submitted proposals that were not funded. In 23 instances, potential respondents to the survey of unsuccessful
applicants had left their institutions. These individuals were randomly replaced with other unsuccessful
applicants from the same kind of institution. Ultimately, responses to this survey were received from 240
individuals, representing a return rate of 69 percent.

The two surveys were the subject of a separate technical report by Eiseman and Fairweather: Surveys of Grant
Recipients and Unsuccessful Applicants
(Andover: The NETWORK, 1996). This final report presents and
discusses the most important survey findings.

Case studies. Case study visits were conducted at 33 institutions representing 25 projects. The sample
included mathematics (both calculus and non-calculus), engineering and computer science, life sciences,
physical sciences (chemistry, geology, and physics), and multidisciplinary projects. The sample also included
projects at community colleges, liberal arts colleges, comprehensive universities, research universities, a
women’s college, and an Historically Black University. Visits were made to small and large projects. They
were also made to single-institution projects, to institutions that were part of a consortium, and to institutions
that did not receive NSF funds but either adapted CCD project activities or materials or were represented at
workshops conducted by grant recipients.

Most visits lasted for two days. Table 1 shows the distribution of visits by duration and institutional type.

Table 1: Distribution of Visits by Duration and Institutional Type

Duration of Visits   Two-Year Colleges   Liberal Arts Colleges   Comprehensive Universities   Research Universities   Total
1-Day                        4                    1                        2                           1                8
2 or 3 Days                  3                    2                        5                          15               25
Total                        7                    3                        7                          16               33

The case studies were conducted by teams consisting of one member of the evaluation study staff and one or
two nationally-recognized content specialists. Each of the members of the review panel served as a content
specialist during at least one case study visit.

Prior to the visits, the team members read the proposal, comments by those who had reviewed the proposal,
and annual reports if any had yet been filed. The visits were arranged so that team members could meet with
the principal investigator at the beginning and the end of the visit, and could observe classes, talk to
students, project faculty, non-project faculty, the department head, an appropriate dean, and when possible, a
project evaluator. The reports that resulted were then used by the evaluation team for cross-case analysis.

Telephone tracer study. On the basis of survey information, twenty projects were selected to assess the extent
to which the CCD project had an impact beyond the institutions receiving NSF funds. Principal investigators
and publishers were asked for lists of names of individuals who had attended conferences or workshops or
requested or purchased project materials. Lists of names were obtained from 14 projects, and 184 individuals
on these lists were interviewed by telephone about their knowledge, use, and experience with project ideas,
software, or materials.
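
As a brief check of the response rates reported above (a sketch only, assuming the denominators are the 440
questionnaires sent to PIs and co-PIs, the 426 remaining after the 14 cases noted earlier, and the 350 sent to
unsuccessful applicants):

\[
\frac{345}{429 + 11} = \frac{345}{440} \approx 78\%, \qquad
\frac{345}{440 - 14} = \frac{345}{426} \approx 81\%, \qquad
\frac{240}{350} \approx 69\%.
\]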

To understand how well CCD is working, the evaluation team constructed a model of the program,
which is presented in Figure 1. The model displays relationships among four types of elements:

Program input. Various types of CCD awards were grouped together under two headings: development grants
(to develop innovations — including activities, courses, curricula, and various kinds of supporting materials
and products) and transfer grants (to promote the adoption of innovations at institutions other than where they
were developed).

[Figure 1, “A Model for Assessing the Effectiveness of the CCD Program,” is not reproduced here. The diagram
links the two program inputs (CCD development grants and CCD transfer grants) to three core tasks (increase
the availability of high-quality units, courses, and materials; increase the project faculty’s knowledge regarding
the nature of teaching and learning; increase the project faculty’s competence in using effective methods of
promoting learning), to three institutional outcomes (increased value placed on undergraduate education, larger
numbers of faculty working to improve undergraduate education, and an enhanced likelihood that implemented
innovations will continue), and to the ultimate outcome: an increased proportion of students achieving valued
outcomes.]

Core tasks. Recipients of development grants were expected to develop courses and materials in a manner that
was sound — not only mathematically and scientifically, but also pedagogically. The “Increased Availability”
box is shown as a core task for transfer (as well as development) grants because faculty who were trying to
implement existing innovations often discovered the need to create additional activities or supporting materials
to suit their own contexts and objectives.

The model asserts that in order to achieve desired outcomes, two additional core tasks must be accomplished in
both development and transfer grants. Project faculty must understand that building student understanding,
competence, and positive attitudes toward the discipline involves much more than transmitting information
skillfully. They must also achieve an adequate level of proficiency with the rather difficult-to-master strategies
and techniques that promote deeper understanding, thoughtful problem solving, and increased interest.

Institutional outcomes. Staff from NSF’s Division of Undergraduate Education (DUE) recognized that any
success at increasing the proportion of students achieving valued outcomes will be transitory if the core faculty
involved in a project changes but the department as a whole does not. As a result, they hoped to achieve three
specific institutional outcomes (but recognized that success on institutional dimensions takes a long time
and is extraordinarily difficult to achieve):

• a change in the academic culture so that a higher premium is placed on undergraduate education by both
administrators and faculty — and institutionally through faculty reward systems;

• the involvement of faculty beyond those working on the project in efforts to improve undergraduate
education, not only in project courses, but in prerequisites and more advanced courses; and

• a strong chance that whatever is implemented will be refined, extended, and institutionalized so that
three to five years from now, it will still be in place.

Ultimate goal. The ultimate goal of the CCD program has been to increase the proportion of students who are
achieving a range of valued outcomes.
The leaders in mathematics, science, and engineering education interviewed
by telephone recognized that for many students, proficiency at using formulas, manipulating symbols,
or carrying out laboratory procedures was often achieved with little understanding of the underlying
mathematical and scientific concepts, principles, and theories. Accordingly, one valued outcome was to
increase student conceptual understanding.

These leaders asserted that many governmental and corporate policy decisions that involve or can be informed
by mathematical, engineering, or scientific knowledge — or can affect the nature and extent of future activity
by mathematicians, engineers, and scientists — are made by individuals with little exposure to, and often negative
attitudes toward, these areas of knowledge. Accordingly, two additional valued outcomes were to help
non-majors develop: (a) understanding of, and respect for, the scientific process, and (b) interest in, comfort
with, and positive attitudes toward mathematics, science, and engineering.

Finally, DUE staff encouraged grant recipients to serve more effectively “underrepresented populations” —
namely, women, minority students, and students with disabilities.

-4-


The model shows causality flowing in only one direction, despite the fact that system dynamics
involve not only other conditions and factors but also many single and higher order feedback loops. The
model was simplified in this manner because its sole purpose is to draw attention to key elements. The
model guided the study team’s evaluation activity, and also shaped the organization of this final report.
The report is divided into six chapters. The discussion in Chapter Two focuses primarily on the
program input boxes. It provides a basis for understanding the nature of the projects, both their similarities
and their differences. Chapter Three focuses on the core task section of the model. It describes three kinds
of activity that either support or extend the project — namely, efforts to train faculty or graduate teaching
assistants to use the innovation properly, efforts by project personnel or their outside contractors to
evaluate project implementation or outcomes, and efforts related to transferring the innovation either to
other faculty within the host institution or to faculty in other institutions.
Whereas Chapters Two and Three are primarily descriptive, Chapter Four is evaluative. Its scope
includes assessments of how well the core tasks were carried out, and of how fully the overarching goal
and the desired institutional outcomes were achieved. However, with only two exceptions, all of the
survey and case study findings are based on grants that were awarded before or during 1993. This means
that the data in Chapter Four are useful in commenting on what DUE staff did during CCD’s first six years,
but less helpful in commenting on any decisions since 1993 to change the profile of proposals being funded
or to launch new initiatives. Chapters Five and Six constitute a partial remedy for the time-boundedness of
the data. Chapter Five examines the factors associated with the various categories of success discussed in
Chapter Four. Using the model as a guide, Chapter Six provides a more holistic assessment of the extent to
which the CCD program is working as intended. It also suggests modifications for achieving DUE’s
emerging priorities.

-5-


Chapter Two: The Nature of the Innovations
The survey and case study evidence confirmed that many significant reforms, large and small,
were designed, implemented, disseminated, and adopted. Across many campuses, a critical mass of
courses or curricula was changing. This chapter describes the nature of innovations supported by CCD.
There were substantial similarities across projects. Most projects tried to address such problems
as: a lack of cohesiveness among topics in courses, weak connections between lectures and laboratories,
not enough emphasis on application of the course materials and information, and not enough emphasis on
teaching for understanding. As described in more detail below, both the survey and case studies revealed
that a substantial majority of the projects included emphases on applications and critical thinking, active or
project-based learning, inquiry or discovery labs rather than verification labs, and working in groups.
These changes were consistent with the emerging research on teaching and learning that suggests that
active student engagement with mathematics and science materials is essential for developing deep
conceptual understanding.
Differences among projects include variations in size, structure, and scope; in complexity; and in key
features.

Size, Structure, and Scope
No single measure captures project size. For example, two projects doing the same work may cost very
different amounts of money if the indirect costs and salaries at one institution are substantially higher than
those at the other. The number of full time equivalent (FTE) professional positions supported by CCD is a
proxy for the level of activity that corrects for salary and indirect cost disparities; it also takes the duration
of the project into account. Yet in some institutions, the only people who engaged in project activities were
those who were supported by project funds, whereas at others, project funds only supported a fraction of
those participating. And some institutions donated all the professional personnel time and used grant funds
for other purposes.
Because no single measure captures project size, Table 2 contains program statistics along four
dimensions. Between 1988 and 1993, CCD awarded grants to 468 different principal investigators. The
first two rows of the table are based on NSF data regarding projects awarded to these 468 individuals. For
purposes of this evaluation, a project that was renewed was still considered a single grant; if an individual
received more than one grant, only the largest was counted. The second two rows are based on survey
responses. For this table, the term “professional” includes not only faculty, administrators, and
professional staff, but also graduate teaching assistants (TAs).

-6-


Table 2: Ranges for Half the Projects, Along with CCD Program Statistics
                              A Range Encompassing                        Program Statistics
                              50% of the Projects          Lowest       Highest       Median      Average
Duration of Award             1.5 to 2.5 years             1/2 year     7.5 years     2.5 years   2.7 years
Amount of Award               $50,000 to $140,000          $1,500       $2,106,809    $100,000    $156,272
Professionals Supported       1 to 5 professionals         0 people     130 people    5 people    8.7 people
FTE Positions Supported       1/4 to 2 professional FTEs   0 FTE        42.1 FTE      1.65 FTE    2.6 FTE

Table 2 reveals that while the range for each of the dimensions of project size is substantial, the
size of the typical project is modest. For example, the amount of the average award is only 7.4 percent of
that of the largest award; the comparable figures for number of professionals supported and number of FTE
professionals supported are even lower (6.7 percent and 3.9 percent respectively). Furthermore, Table 2
demonstrates that the averages provide a misleading indication of the size of typical projects. For example,
the average number of professionals supported (8.7) is considerably above the rather narrow range that
encompasses at least half of the projects (from one to five professionals). For the other three dimensions of
project size, the average is also higher than the top of the range.
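As a quick arithmetic check (these figures simply restate the program statistics in Table 2, not new data), the
first two of the ratios cited above are:

\[
\frac{\$156{,}272}{\$2{,}106{,}809} \approx 7.4\%, \qquad
\frac{8.7\ \text{professionals}}{130\ \text{professionals}} \approx 6.7\%
\]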
The awards for grant recipients in the case study sample ranged from $57,333 to $1,204,585, with
the average just over $300,000. Although the case study sample included a disproportionate number of
medium and larger projects, it also included some adopter sites that did not receive any funds from NSF.
The latter were included to examine the outcomes of selected CCD dissemination efforts.

The scope of the projects ranged from the work of one professor on part of one course, to that of
several faculty attempting to revise the entire core curriculum of a department, to that of multi-institution
consortia attempting to disseminate or adopt innovations. Here are some examples of large and small
scope projects:

A large project in a single institution. An award to a computer scientist in a state university supported the
development and implementation of a new four-course pragmatically-oriented core curriculum. The
innovation involved both curricular and pedagogical changes. The major thrust of the new curriculum was a
set of laboratory materials and exercises. Reversing tradition, lecture materials and content were designed to
support the laboratories. Other key elements were an emphasis on software engineering, a high degree of
mathematical rigor especially in discrete mathematics, a strong prerequisite structure, the use of a contemporary
programming language not typically used in higher education institutions, hands-on experience, working in
teams, real world artifacts in assignments, and projects requiring the construction of extensive lines of code.
The project involved a high level of student interaction with faculty and graduate students. Both graduate and
undergraduate TAs worked extensively in the laboratories, but faculty, not graduate students, did all
the teaching.

-7-


A small project in a single organization. A single investigator in an institution with a strong liberal arts focus
received a small award to work on two science courses. The project tackled the issue of how to improve
student learning in large lecture classes by using active learning techniques, group activities, and writing.

A consortium. A large university formed a consortium among several of its campuses to revitalize
introductory curricula and make the mathematics, science, and engineering departments more hospitable to
women and minorities. One key feature of this project was to provide Distinguished Visiting Professors to
campuses to work with faculty fellows and others.

Complexity
One way of characterizing innovations is by their complexity. Relevant features include the magnitude of
the change attempted, the degree of difficulty of the change, and its newness. Contrary to conventional
wisdom, small and easy changes do not necessarily result in greater implementation success than bigger
and more difficult ones. Intervening factors include whether the innovation is considered beneficial and
perceived as sufficiently different from present practice to justify a serious commitment of time and effort.
For the most part, funded projects attempted to develop or implement new activities. However, in
a few cases, new activities with the potential for inducing major change were simply incorporated into a traditional
course with little change in approach. In one institution visited, for example, a case study team
member wrote:

The mathematics that is being taught is essentially the calculus course of twenty or more years ago. The
textbook is a mainstream traditional one. One faculty member has modified his approach by
making extensive use of a computer algebra system, but most of the faculty added an hour or two a
week of student time in the computer classroom, tacked on to traditional classroom time, giving
little attention to understanding and promoting the potential benefits of allowing students to focus on the central
idea of calculus.

In the above example, most faculty treated the innovation as a minor change, although it had the potential
to be the cornerstone of a fundamental reform, as one faculty member demonstrated. For most faculty in this
project, implementing the change posed little difficulty, because neither the principal investigator nor the
faculty member who had figured out how to use the software package effectively was able to convince their
colleagues to go beyond trivial uses.
Some projects that emphasized both content and pedagogy received strong support from
non-project faculty and administrators, partly because the newness was obvious and the effort seemed
beneficial. The following examples illustrate this point.

-8-


A large community college received a grant to develop and implement two multidisciplinary courses
for non-science majors. Both curricular and instructional changes were major. One course used a
macroscopic approach to examine the evolution of the earth and the universe; the other used a
microscopic approach, integrating the physical and life sciences to understand the organization
of the living cell and its interaction with its environment. Faculty from four different science departments
participated. Both courses emphasized active learning, hands-on laboratory activities, extensive writing,
and critical thinking, which was expressed extensively through “asking questions.”

Another example of a project with a high magnitude of change involved integrating existing
computer tools into a physics course:

The developers created a hypertext multimedia environment user interface in order to make the
leading computer learning tools available to physics education. Then they incorporated existing
computer learning tools, developed additional applications for these learning tools, and created
instructional lessons for using the system. The complexity of the innovation was increased still more by
setting this multimedia environment in studio classrooms (rather than the standard lecture/recitations and
laboratories). In these studio classrooms — which contain computer workstations for groups of two
to three students who collaborate as partners — a team consisting of a faculty member and graduate
and undergraduate TAs delivers all these kinds of instruction to groups of 40 to 60 students. Instructors
facilitate students’ work on problems, exercises, and laboratories, spending less time lecturing or
demonstrating solutions.

Key Features
Whether large or small, many of the projects contained similar key features. Not all of the features were
evident in every project. Some of the key features that appeared frequently include: changes in instruction
that affected both faculty teaching and student learning activity, the development of new materials, and the
development of new assessment methods.

Changes in Teaching
The recommendations of the 43 leaders interviewed by telephone regarding the changes needed in
undergraduate education in their disciplines were consistent with the literature. In the survey, principal
investigators were asked to rate the centrality or importance of aspects of teaching mentioned by these leaders,
both before applying for the grant and now. Table 3 shows the net proportion of projects reporting
increases — i.e., the proportion reporting increases minus the proportion reporting decreases — on ten of
these aspects. The net proportions are presented by discipline because, as will be illustrated in Chapter
Five, changes in teaching that are associated with student gains differ by discipline.
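To make the construction of Table 3 explicit, the net percent for each teaching activity can be written as
follows (this is only a restatement of the definition above; the symbols are illustrative, not taken from the
survey instrument):

\[
\text{Net percent} \;=\; \frac{n_{\text{increase}} - n_{\text{decrease}}}{N} \times 100
\]

where $n_{\text{increase}}$ and $n_{\text{decrease}}$ are the numbers of projects reporting an increase or a
decrease in that aspect of teaching, and $N$ is the number of responding projects in the discipline; projects
reporting no change contribute only to the denominator.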

-9-


Table 3: Proportion of Projects Reporting Increases in Teaching Activities, by Discipline
(Net percent of projects)

                                                       Mathematics   Engineering   Science       All
                                                         (N = 84)      (N = 73)    (N = 148)   (N = 306)
Having students work in teams                                83            71           81          79
Having students use software                                 89            70           62          74
Having students frame questions/devise procedures            65            53           76          68
Having students serve in research apprenticeships            26            25           19          23
Using non-traditional assessment methods                     86            63           74          75
Achieving high integration among course components           67            73           70          70
Teaching concepts or methods from other disciplines          73            60           69          68
Teaching recent findings, theories, or methods               74            59           67          66
Eliciting and addressing student misconceptions              56            74           62          63
Lecturing                                                   -58           -21          -53         -46

The nine increases and the one decrease (lecturing) are all consistent with expert opinion regarding
best practices (see Figure 2 for a list of research references). For eight of the ten aspects, the net
increase in usage was greater than 60 percent.

Figure 2: Sources for Research Relevant to Promoting Undergraduate Learning
Bonwell, C.C., & Eison, J.A. (1991). Active learning: Creating excitement in the classroom. Washington, DC:
School of Education and Human Development, George Washington University.

Bruffee, K.A. (1993). Collaborative learning: Higher education, interdependence, and the authority of knowledge.
Baltimore: Johns Hopkins University Press.

Feldman, K.A., & Paulsen, M.B. (Eds.) (1994). Teaching and learning in the college classroom. Needham Heights,
MA: Ginn Press.

Goodsell, A., Maher, M., & Tinto, V. (1992). Collaborative learning: A sourcebook for higher education. University
Park, PA: National Center for Postsecondary Teaching, Learning and Assessment, Pennsylvania State University.

Halpern, D.F., & Associates (1994). Changing college classrooms: New teaching and learning strategies for an
increasingly complex world. San Francisco: Jossey-Bass.

Johnson, D.W., & Johnson, R.T. (1994). Learning together and alone: Cooperative, competitive, and individualistic
learning. Boston: Allyn & Bacon.

Johnson, D.W., Johnson, R.T., & Smith, K.A. (1991). Active learning: Cooperation in the college classroom.
Edina, MN: Interaction Book Company.

Kadel, S., & Keehner, J.A. (1994). Collaborative learning: A sourcebook for higher education. University Park, PA:
National Center for Postsecondary Teaching, Learning and Assessment, Pennsylvania State University.

Menges, R.J., Weimer, M., & Associates (1996). Teaching on solid ground: Using scholarship to improve practice.
San Francisco: Jossey-Bass.

Meyers, C., & Jones, T.B. (1993). Promoting active learning: Strategies for the college classroom. San Francisco:
Jossey-Bass.

Schön, D.A. (1987). Educating the reflective practitioner. San Francisco: Jossey-Bass.

-10-


The three most frequent changes involved engaging students more actively in the learning process.
Almost 80 percent reported an increase in having students work in teams. Changes in use of technology
were also evident with 74 percent of all respondents — 89 percent in mathematics projects — reporting an
increase in having students use software. Sixty-eight percent of all respondents — 76 percent in science —
reported an increase in having students frame researchable questions or devise investigative procedures.
The case studies provided corroborative evidence that a large number of projects added or modified their
courses to include more hands-on investigative activities. Here is an example:

A principal investigator at a large university developed a biology course for non-majors. It consisted of a
series of activities designed to help students develop the concepts of hypothesis testing, metric assessment,
and descriptive statistics. The intent was for students to formulate and test their own hypotheses related to
biological concepts about which, according to the literature, lay people often held misconceptions.

Although “natural” for laboratory-based science courses, hands-on investigative activities were designed
and implemented in mathematics, computer science, engineering, and multidisciplinary courses where the
frequency of hands-on experiences is much lower.
According to survey respondents, 31 percent of the projects took special steps on behalf of
underrepresented populations. The most common steps are listed in Figure 3.

Figure 3: Steps Taken to Serve Underrepresented Populations More Effectively
Varied the instructional modes or otherwise accommodated diverse learning styles
Provided additional advising or tutoring, or set up clubs for women or minority students
Actively recruited women or minority students
Modified or selected materials to make them gender/ethnic neutral
Modified or selected materials to make them of special interest to women or minority students
Brought in women/minority speakers or focused on contributions made by women/minorities
Increased proportion of time spent on skills or problem types that traditionally pose difficulties for
women/minorities

New Materials
Development of course materials or other products was a feature of almost all of the projects. In
many cases, materials development accompanied the development of hands-on activities, and included
such items as newly designed labs and resource manuals for students. Table 4 shows the three kinds of

-11-


products most frequently developed, along with the proportion of projects that developed them. Virtually
all grant recipients (96 percent) reported that they developed at least one product.

Table 4: Percent of Projects Reporting Having Developed Various Materials and Products
Texts, workbooks, lab manuals, etc. . . . . . . . . . . . . . . . . . . . . 75
Syllabi, lesson plans, instructor’s manuals . . . . . . . . . . . . . . . . . . 74
Software for students . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Here is a description of a calculus-based physics project in a large university that placed heavy
emphasis on materials development:

Staff developed two sets of materials for producing more active student learning and deeper conceptual
development to augment the more typical quantitative-only understanding previously promoted in the
course. The materials are used throughout the lecture, recitation, and laboratory components of the
course taken by 1,000 engineering and physics majors each quarter. The two products are: 1) “Active
Learning Problem Sheets” for lecture and recitation, and 2) a one-year set of lab activities comprised of
“concept construction experiments” and “experiment problems.”

Some projects had ambitious plans to develop and disseminate materials, but not all were able to
complete them. The responses to one survey item, which was not specific to developing materials or products,
indicated that 29 percent of the projects were unable to fully develop or implement at least one major
aspect of their project before the end of the grant due to insufficient time or money.

New Assessment Methods
As was shown in Table 3, 75 percent of the grant recipients asserted that their projects involved
using assessment methods beyond traditional exams, quizzes, and problem sets. One example is
“gateway” tests that were part of several calculus reform projects. Gateways are based on the concept of
mastery learning, where students are required to achieve a given proficiency level in order to pass the
course. Gateway tests have the advantage of reassuring reform critics — who complain that skill
development is being sacrificed — that a certain level of proficiency has been achieved by all students,
while at the same time allowing the instructors to focus their major effort on promoting understanding.
One project developed and implemented an ambitious reform that addressed both the engineering
courses for majors and those designated as meeting general education requirements. Experts in

-12-


assessment from the College of Education provided technical assistance during the development stage of
each new course.
Some projects employing new pedagogical techniques introduced ways for students to document
and reflect on their learning experiences. These activities included keeping journals, making presentations,
and putting on exhibitions. However, even in some of the most innovative contexts, faculty tended to rely
on traditional assessment strategies, such as standardized tests.

Chapter Summary
A wide range of innovations in a wide range of institutional settings and consortia were supported by
the CCD program. These innovations varied in size, structure, scope, and complexity, and in their key
features. According to data from the survey of PIs, eight separate changes in teaching each showed a net
increase in over 60 percent of the projects, and half of these in at least 70 percent. These and other
changes in teaching made by CCD projects — including several steps to help underrepresented
populations increase their interest and achievement in mathematics, science, and engineering —
were in the direction recommended by specialists in these fields.
Finally, according to the survey data, 96 percent of the PIs reported that they developed materials
or other products that can be used by faculty at other institutions.
The next chapter examines change strategies, training, evaluation, and promoting adoption.

-13-


Chapter Three: Change Strategies, Training, Evaluation, and Transfer
The focus in this chapter is on change strategies and on three categories of activity that play a role
in CCD project implementation success, both within and beyond developers’ sites: training, evaluation, and
activities to promote adoption.

Change Strategies
The strategies used to improve undergraduate education varied considerably across projects. Whether by
design or default, projects acted in accordance with change strategies, which sometimes evolved over the
course of the project. When these strategies are analyzed by component, they consist of a cluster
of decisions. Six choices faced by most projects are described below.
Pilot effort versus across-the-board. Those projects that focused on courses that are taught by
several faculty members had to decide whether to have a few volunteers work on developing the
innovation or to have all faculty members participate in the development from the beginning. On the basis
of the case studies, most projects chose to start with volunteers, but there were institutions that, when
implementing innovations developed elsewhere, chose to adopt the innovation across the board.
Incremental versus all-at-once. When an innovation could be divided into somewhat independent
parts — e.g., having students work in teams and having students write descriptions of their understanding
of key concepts — PIs could decide to introduce one element at a time or to wait until all elements had
been worked out (and supporting materials developed) and then implement them all at the same time.
There are two variations of this decision. One involved projects that set out to develop a sequence of
courses: the project team had to decide whether to develop and implement the first course before the
second course was ready or whether to hold the first course until the second course was ready to be
implemented. The other variation involved whether to attempt the entire change that the PI hoped to
eventually bring about during the project term or to seek funding for a modest “phase one” proposal,
with the intention of subsequently finding a way to launch a “phase two.” Since the large majority of
PIs sought funding for their full plans, the reasoning of one PI who was an exception is of interest. He wrote
that his regional consortium was composed of faculty . . .

who are seeking to introduce new approaches to calculus instruction. Many are dealing with institutional
constraints and professional conservatism, and are not yet ready for radical curriculum revision. Our

-14-


objective is to bring some of the fruits of the calculus reform effort to a wider [regional] audience in order
to stimulate evolutionary change in calculus instruction.

We are adapting ideas and materials from successful pilot projects in calculus curriculum reform to create a
series of self-contained instructional modules. . . . These modules, on specific topics, . . . will be piloted,
revised, and evaluated by consortium instructors. . . . [A]n instructor may choose to use one or several.

Two years after the project just described ended, the faculty at the PI’s institution voted to adopt a
nationally known, CCD-developed project that made far more substantial changes in the way calculus was
taught. This was an instance in which the slow-and-steady strategy worked.

Prescriptive versus presenting options. Most PIs proceeded on the premise that there was a key
set of ideas underlying their project and that their departmental colleagues should adopt those ideas. This
strategy was also used by some projects that were attempting to persuade faculty in other institutions to
adopt their approach to educational reform. The project described in the previous paragraph selected the
alternative of presenting options, as did some other projects that were trying to foster educational reform in
other institutions. For example, one project conducted workshops for institutions in its region that enabled
them to examine two approaches to calculus reform in depth. If participants chose to adopt one of the
approaches, project staff provided follow-up training in that particular approach.

Focusing on “what” and “how” versus focusing on “why.” Whenever project leaders had the
attention of their colleagues, they had to decide how to allocate the limited time. Some tended to focus
heavily on the nature of the innovation — its key ideas and components — and on the skills needed to
carry it out. While study team members found no projects that neglected either the nature of the innovation
or the skills needed to carry it out, a few principal investigators placed the highest premium on having
faculty examine why change was needed, and why certain kinds of changes might address perceived
problems better than others. As a PI who implemented this latter approach put it, “It is easier to move a
cart by having good strong horses pull it than by trying to push it yourself.”

Whether to make an explicit effort to recruit high status colleagues. In academia, as elsewhere,
status is a rather subjective concept. Two objective criteria are tenure and rank. According to the survey of
PIs, 79 percent of the grant recipients already had tenure when they received their grants, and 53 percent
were full professors. However, these figures are in line with the composition of faculty nationally, and
clearly other factors are involved in determining who, on a particular campus, is generally viewed as
having high status and who does not.

-15-


Probably most PIs formed a team with faculty of no more than average status by default — i.e.,
without actively considering whether the inclusion of one or more high status faculty might strengthen
their project’s prospects for long-term survival or departmental acceptance. In still other projects, where
the PI was an administrator or a high status faculty member, status may not have been an active
consideration. However, evaluation team members visited at least two kinds of projects in which
participant status had been explicitly taken into account.
In one variation, a department head had deliberately appointed a high status faculty member to be
in charge of undergraduate education (or undergraduate education reform); as part of carrying out his or
her function, that individual applied for CCD funding. In the other variation, high status faculty were
deliberately recruited to try out the innovation. In some of these projects, the high status faculty were
invited to participate in the initial development phase. In others, the initial development phase proceeded
without attention to faculty status, but after a working version existed, high status faculty were recruited as
part of a long-term plan to promote department-wide adoption.

What kind of support to provide for faculty who did not participate in developing the innovation.
Because there were differences among projects regarding the number of innovation components, the
magnitude of the departure from what previously existed, the commitment of participating faculty to
undergraduate education, and the difficulty in mastering the instructional and organizational skills
involved, projects differed with respect to the amounts and kinds of support faculty needed. Yet even when
a subset of projects were similar with respect to the amounts and kinds of support needed, study team
members found major differences among them with respect to the amounts and kinds of support actually
provided. Sometimes the constraint was clearly budgetary, such as in projects where the PIs had requested
funds for project management or summer workshops but that portion of their requests had not been funded.
In other instances, PIs had not recognized how difficult it would be, either to develop the innovation to the
point where it could be reliably counted on to produce the desirable results, or to bring colleagues who had
not been involved in working through the innovation’s development to the point where they knew what it
was for, why key design decisions had been made, how it looked when it was being implemented properly,
and how to implement each of its features competently.

On the other hand, study team members found projects that had thought carefully about their
faculty’s support needs. Sometimes they had anticipated them from the beginning, and other times they

-16-


had discovered them from experience. As discussed in Chapter Five, five kinds of support were especially
important: instructional materials for students, handbooks or videos for faculty, pre-implementation
training, ongoing opportunities for faculty to discuss problems and share successes, and formative
evaluation mechanisms. The first two have already been addressed in Chapter Two. Immediately below
are sections devoted respectively to training and evaluation; later in the chapter is a section devoted to
communication with faculty about the innovation.

Training
The need to train faculty and TAs (whether or not it was done sufficiently) was widely recognized across
the projects. Forty percent of the projects in the case study sample had implemented exemplary training
programs. Several projects provided opportunities for faculty to team teach an innovative course with the
developer or another faculty member who had taught it previously, or to observe most or all of the course
before attempting it themselves. Although costly, this was seen as an effective learning opportunity for faculty.
Other faculty training models were also used, such as the one mentioned earlier involving the
presence of a distinguished visiting professor, who served as a mentor in new pedagogical techniques.
This individual worked extensively with three faculty fellows who had been chosen as future leaders and
who received release time as part of the CCD grant. She also gave workshops that were available to all
interested faculty. The faculty fellows became a team that then delivered workshops at the host campus
and elsewhere.
In several projects, faculty training consisted of a week-long pre-implementation orientation and
training session. In one site, orientation was required of TAs and encouraged of tenured faculty who had
not yet taught the course. One large engineering and general education project had particularly successful
faculty workshops, described as “active training” rather than “passive dissemination.” These workshops
attempted to build skill (a) in defining educational objectives, or (b) in using either selected
instructional approaches or particular student performance assessment techniques. Over half the
participants said they changed their instructional approaches after attending the workshops.
In some sites where faculty training took place, faculty other than the developer undertook
dissemination activities outside their own institution. Thus, faculty training can enhance not only the
quality of local implementation, but also the amount of external dissemination.

-17-


Evaluation
Two kinds of evaluation are relevant for CCD projects:
“formative evaluation,” or ongoing monitoring, to help PIs discover when it is necessary to modify
approaches, activities, and materials; and

“summative evaluation,” to indicate whether the project effectively achieved its objectives.

Although NSF requires evaluation in CCD projects, it does not distinguish between these two types.
For both kinds of evaluation, principal investigators sometimes hired outside evaluators. For some
projects, these evaluators came from the campus school of education. In other institutions, bureaucratic
problems made it difficult or impossible to pay grant funds to university employees, so the principal
investigator went outside the university. Other projects went outside their campuses either because their
institution did not have a school of education, or because the principal investigator wanted to involve
evaluators with specialized content knowledge. In general, case study teams found PIs struggling with
evaluating the success of their efforts.

Formative Evaluation
Formative evaluation activity can be further divided into project-wide activity — to improve the
project as a whole — and product testing and revision activity — to improve materials and other products.
Projects that engaged in formative evaluation activities used a variety of methods, including observations
of classes by outsiders, surveys, interviews with students, focus groups, analysis of
comparative enrollment and attendance data, and analysis of records of student interaction with software.
According to survey responses, most principal investigators who developed products gathered feedback
from students, faculty and/or publishers. The proportion collecting such data varied according to the type
of product developed, ranging from a low of 62 percent of the projects that developed video tapes for
faculty to 95 percent of those that developed software for students.

Summative Evaluation
Many grant recipients were not sure how to determine whether their projects had achieved their
objectives. A special problem existed when new objectives had been added to a course: no data existed on
previous or comparable students with respect to the objectives in question. Most projects collected
“norm-referenced” data — that is, data that compares students within a class to provide a basis for

-18-


assigning letter grades. Few collected “criterion-referenced” data — i.e., data that could be compared with
predetermined standards to assess the extent to which particular objectives had been achieved.
Even though CCD proposal guidelines required that projects engage in some evaluation activity,
there was no specific NSF requirement for summative evaluation activity. Furthermore, some projects used
all the money they received for development, so they did not have an adequate budget for high quality
evaluation activities. Finally, many projects were not funded long enough to allow the innovation to reach
the stage of maturity at which a summative evaluation is appropriate — i.e., the point at which the
innovation has been adequately debugged and stabilized.
Nevertheless, a few projects placed a high priority on evaluating the effects of their changes in
teaching, and set about to do a thorough job. Usually they received outside help from specialists familiar
with measurement and experimental design issues. Evidence on this last point comes from case studies
and two other sources: (1) in a few project abstracts, the evaluation plan was described in detail, and
(2) a few PIs were sufficiently proud of their project’s evaluation activities and results that they sent them,
unsolicited, along with their completed questionnaires.

Promoting Adoption
There is strong consensus among policy makers, funding agencies, and innovation developers that it is
important for materials development grants to have an impact beyond their host institutions, as well as
within them. Faculty involved in undergraduate curriculum reform share this concern, evidenced by the
Report of the National Science Foundation Workshop on the Dissemination and Transfer of Innovation in
Science, Mathematics, and Engineering Education (National Science Foundation, 1990, p.1):

[R]esults of these [innovations] are, for the most part, not being disseminated throughout the nation’s
higher education community. We need to multiply the benefits of educational innovation activity at
one location by providing for the dissemination, transfer and adaptation of quality innovations to other
institutions and additional learning environments.

This section discusses “dissemination” activities, but the terms “transfer” — taken from the title of the
1990 NSF report quoted above — and “promoting adoptions” are used rather than “dissemination” because
the latter term often merely refers to the communication of information. The section is divided into two
parts: the first describes division-level strategies to promote adoptions, and the second describes
activities undertaken by projects to communicate their insights and experience to colleagues within and beyond
their institutions.

-19-


Division-Level Support of Communication and Transfer
DUE has implemented at least three strategies for promoting innovation transfer within and beyond
institutions receiving grants: CCD has awarded development grants to consortia of unlike institutions,
CCD has awarded some grants whose primary purpose is promoting external transfer, and, in 1994, DUE
conducted a conference for grant recipients to discuss ideas and issues related to communication and transfer.

Development grants to unlike-institution consortia. When a single institution develops an
innovation or product, it often inadvertently does so without recognizing some of the ways in which local
competencies and conditions have shaped decisions regarding organization, format, issues, examples, and
so on. However, when the developers are part of a team that includes different kinds of institutions or
campuses from different sections of the country, then decisions that might impede transportability of the
innovation may surface earlier, permitting the developers to make appropriate modifications.
One common transportability problem is that the number of terms in an academic year differs from
institution to institution. For example, if a decision is made to write a book for institutions with a semester
system, the book becomes more transportable if a section is added to the instructor’s manual explicitly for
institutions that have three terms in an academic year, describing options for adapting the book. Similarly,
developers who were writing a calculus textbook developed a sequence for the topics covered to meet the
needs of one of the institution’s engineering departments. Being part of a consortium might not change the
sequencing decision, but it might lead to the inclusion in the instructor’s manual of a rationale for the
sequence selected, plus suggestions, furnished by other consortium members, of how other sequences
could be used, along with the advantages and disadvantages of each.
The case study sample included two variations of this consortium-of-unlike-institutions
strategy, both of which led to increased innovation transportability, as the following examples show.

A grant was awarded to faculty in a single institution, which developed three courses and supporting
materials on their own campus, but then sent the materials to faculty on other campuses for field testing.
The plan to field test the materials before finalizing them was not included in the grant recipient’s
original proposal, but was suggested by an advisory committee that DUE staff had urged them to form. In
fact, the field tests were carried out by members of the advisory committee so that a mechanism was in
place to ensure that the feedback from the field tests was addressed.

A basic prototype for a statistics course for non-majors was developed at the institution receiving the
grant, but active collaboration with faculty at other institutions resulted in interesting variations. The
extent of cross-fertilization was increased by faculty from one institution visiting another for a semester
and team teaching the course. In this instance, the result was a rich repertoire of models from which
potential adopters could select.

-20-


Grants to promote adoptions. Two different kinds of grants promote adoption: those that send
innovations out and those that bring them in. Table 5 shows the distribution of responses from grant
recipients when they were asked to characterize the primary purpose of their projects.

Table 5: Reported Primary Project Purpose
Develop activities, materials, software, etc. . . . . . . . . . . . . . . . 84%
Implement innovations developed elsewhere . . . . . . . . . . . . . . . . . 7%
Disseminate innovations developed here . . . . . . . . . . . . . . . . . . 8%
Disseminate innovations developed elsewhere . . . . . . . . . . . . . . . . 1%

Even for the 84 percent who characterized their primary purpose as development, proposal
guidelines called for plans to communicate their insights and results beyond their institutions. Many of the
projects in this category were explicitly funded to produce materials or other products that could facilitate
adoption elsewhere. The remaining 16 percent were essentially transfer grants: nine percent (7.8% + 1.2%) to
encourage and help other institutions adopt already developed innovations and seven percent to implement
in their own institutions innovations that had been developed elsewhere. Many federal programs fail to
foster transfer in such an explicit way.

Dissemination conference. In 1994, DUE organized and funded a dissemination conference for
principal investigators under CCD and some of its other programs. The conference included sessions on
electronic publishing and on how to approach a traditional print publisher; it included an exhibition hall
where PIs were able to display project materials. During the case study visits, several faculty gave
unsolicited compliments about this conference, describing it as a productive use of their time. They
appreciated learning about other CCD projects as well as dissemination ideas. Whenever the study team
asked faculty about the conference, they offered similar sentiments.

Communication Activities
Although communication is only one step toward promoting adoptions, it is the necessary first
step. Communication is important whether the intent is to extend the implementation of the innovation to
other sections or courses within the developer’s institution, to institutionalize the innovation therein, or to
transfer it beyond.

-21-


Communication within institutions. Principal investigators were asked to rate the proportion
of faculty teaching undergraduates who, at various points in the project, took part in discussions about the
project and its potential implications for the department. As shown in Table 6, PIs report that as their
projects matured, the proportion of faculty engaged in such discussions increased. Yet even after Year 1,
only 40 percent of the PIs reported that more than 40 percent of their colleagues were discussing project
activities.

Table 6: Percent of Projects Reporting Departmental Discussion of the Projects
                              Fraction of Faculty Who Teach Undergraduates Involved
                              0-20%    21-40%    41-60%    61-80%    81-100%
While writing the proposal      66        21         5         4          4
During Year One                 40        39         9         5          7
Since Year One                  26        34        17         9         13

Communication beyond institutions. According to survey respondents, 96 percent of the projects
carried out one or more activities to disseminate information about their projects beyond their institutions.
Table 7 lists the five most common methods, along with the proportion using each. Research-oriented
faculty are already familiar with the top three dissemination activities: presentations or workshops,
conference posters, and journal articles. Sabbaticals or faculty exchange programs can be a very effective
transfer promotion mechanism since participants can interact and experiment with the innovation’s
specifics in a new setting. However, because each sabbatical only affects one or a very few institutions, it
seems unlikely that they can be used to reach large numbers of potential adopters.

Table 7: Percent of Projects Reporting Use of Various Communication Methods
Presentations or workshops . . . . . . . . . . . . . . . . . . . . . . . . . 92
Journal articles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Posters or booths at conferences . . . . . . . . . . . . . . . . . . . . . . 63
Electronic media — e.g., bulletin boards and World Wide Web sites . . . . 48
Sabbatical or faculty exchange programs . . . . . . . . . . . . . . . . . 15

-22-


It was rare to find strategies that would target the most likely users in institutions and reach them in
more direct ways, such as direct mail or telephone calls. Such activities are quite labor intensive, requiring
someone to research the programs and faculty at other institutions, to identify potential users or their
administrators, and to tailor the communications to those audiences. Developers can work with publishers
to develop and implement this strategy. The 1994 dissemination conference helped principal investigators
and publishers establish new connections and learn more about each other’s perspective.
One additional strategy is intended to affect other institutions more indirectly. Several of the
projects studied involved TAs, and a smaller number involved postdoctoral fellows or visiting faculty.
Many of the PIs perceived training these individuals and providing them with experience as a commitment to
large-scale educational reform, because they would eventually teach elsewhere.

Chapter Summary
This chapter contained a brief description of differences among projects with respect to the strategies that
PIs used to try to involve their colleagues. The discussion focused on the following strategic choices:

whether to mount a pilot effort or to involve faculty across-the-board from the outset,
whether to make changes incrementally or all at once,
whether to be prescriptive or to present options,
whether to focus exclusively on “what” and “how” or to devote time to focusing on “why,”
whether to make an explicit effort to recruit high status colleagues, and
what kind of support to provide for faculty who did not participate in developing the innovation.

Many projects conducted some form of training for faculty and TAs, and 40 percent of the projects
in the case study sample had implemented exemplary training programs. Three effective models were a
team-teaching apprentice model (where a faculty member new to the innovation teamed with an experienced
colleague), a distinguished visiting professor model, and a week-long pre-implementation model.
A few projects had exemplary multimethod management information (formative evaluation)
systems in place that drew on various sources of help, both within and beyond their campuses. Between 60
and 90 percent of the projects collected formative evaluation data related to the effectiveness of the
materials or products they developed; the figure varies with the kind of product developed. In some of
the larger projects, DUE staff had encouraged the grant recipients to develop an explicit formative
evaluation mechanism in the form of a project evaluation advisory committee.

-23-


The case studies included a few projects that had conducted exemplary summative evaluations.
However, many projects did not have adequate funds to carry out a solid summative evaluation. Other
projects had not matured to the point where the innovation could be evaluated.
Recent NSF strategies to promote adoptions include awarding grants to consortia of unlike
institutions, funding grants to promote adoptions, and holding a dissemination conference for principal
investigators. According to the principal investigators surveyed, 96 percent of the projects conducted some
activities to communicate their experience and insights to colleagues elsewhere.
The next chapter discusses the success of CCD projects.

-24-


Chapter Four: Success
The effectiveness of CCD is ultimately a function of the success of its funded projects. One aspect
of success has to do with the development process — i.e., moving from the initial set of ideas described in
the proposal to the point where there are:

practices to implement,
materials or other products to field test, and
management, evaluation, or dissemination plans and systems to follow.

This implies a linear process, but for most projects development continues during and beyond
initial field testing and implementation. Given that CCD is a grant-based rather than a contract-based
program, it is clearly an indication of success that grant recipients were able to make the changes and create the
products already described in Chapter Two and the support systems already described in Chapter Three.
A second aspect of success concerns the implementation and use of what was developed.
Evaluating this aspect involves assessing the extent to which the initial ideas were fully implemented as
planned, the interconnectedness and coherence of the innovation’s parts, and the skillfulness with which it was
implemented.
A third aspect of success concerns the nature and extent of the project’s impact within the
institutions funded. Here several foci are relevant:

changes in faculty,
outcomes for students, and
impact on the program or the department, including institutionalization of activities and products.

One more aspect of success is the extent of transfer beyond the institution.
A project can achieve considerable success on one dimension, but little on another. Consider the
following excerpt from a case study report:

By the end of the first full year of the project, all three core courses were up and running. The project was
less successful in obtaining the support of departments (chairs and faculty) to use the new courses as
substitutes for the old ones.

This project had successful project implementation, but unsuccessful institutionalization. Alternatively, a
project might result in improved faculty commitment to teaching (successful effect on faculty) with only
modest improvement in student learning, and no obvious change in the value of teaching in the departmental
reward system. Here is yet another mix:

-25-


The laboratory innovation was completely implemented as planned. The PI’s successor implemented
changes that made the program more efficient and effective (for example, some labs were too difficult
or took too long to complete, and were therefore sub-optimal as vehicles for teaching). Student
outcomes vary according to instructor proficiency and student interest. Unfortunately, the majority of
students find the course “too much work for the number of credits” and the TAs verified that the
workload is, indeed, substantially greater than that of most lab options available to non-science majors.
The few students who had taken regular labs said that they preferred the fact-based, follow-the-recipe
type experiments of the regular labs over the innovative labs.

Yet projects that fail to achieve all of their goals may still achieve important benefits. For example:
The implementation of the laboratory course can be characterized as complete; the course has been
taught by a wide range of faculty and visitors and is ready for export. Yet although reviewers for NSF had
cited the mathematics minor as one of the strengths of the original proposal, we did not find evidence that
many students taking the courses had declared a minor in mathematics. Nevertheless, the project seminars do
offer non-quantitative students an alternative entry point to either majoring or minoring in mathematics.
The one-size-fits-all feature of these courses is quite amazing: they are enjoyed by majors interested in
learning what else there is to mathematics besides calculus, by non-majors who liked mathematics in
high school, and by non-majors who enter the courses rather anxious about mathematics.

Finally, consider the small group of faculty at a major research university who attempted to get
their colleagues to adopt an alternative set of core courses developed in a CCD project at another institution.
Although partially successful in achieving some student learning objectives, the effort divided the
department into two antagonistic camps, one strongly committed to the new program and the other equally
opposed to it. On one dimension — departmental impact — the effort has yet to achieve its objective, and may
have made it more difficult to try educational innovations in the future. Yet the fact that the CCD-funded
curriculum core was sufficiently appealing for faculty in another institution to try it is an indication of
substantial positive impact.
The data used to assess the extent of implementation success and within-institution impact comes
from the survey of PIs, judgments made by substantive experts during case study visits about the
soundness of project content, and case study reports. The data to assess the extent of beyond-institution
impact comes from the survey of PIs, the case study reports, and telephone interviews with faculty users
from non-project institutions. In this chapter, vignettes from case study reports are used to illustrate trends
identified in the survey and to provide examples of complex phenomena.

Implementation
This section discusses three aspects of project implementation: soundness of the innovation as judged by
experts in the discipline, the completeness and proficiency of the implementation, and the products
produced by project participants.

Soundness of the Innovation
Three dimensions of soundness were examined: were the content and materials scientifically and
mathematically accurate? were curricular revisions consistent with developments in the field? and was the
pedagogy sound? Evidence used to make judgments about these questions comes exclusively from the
case studies.

Mathematical and scientific content. In every institution visited, content experts found the
mathematical or scientific content to be sound and defensible. Below is a typical excerpt from a
description of a science project at a large research university:

The laboratories and problem-solving materials have been thoughtfully prepared, contain sound physics,
and demonstrate innovative pedagogy. The curriculum augments the typical quantitative treatment with
qualitative concept development. All of the materials incorporate current research on student learning.
They encourage students to think about what they are doing and construct their own method of inquiry.
Each set of lab experiences builds from simple to more complex phenomena and calls upon the student to
use knowledge gained in other labs and in lecture.

Curricular changes. When projects in the case study sample had changed the curriculum, the
content specialists virtually always endorsed the changes made. At the same time, they noted that not all
reform objectives were universally accepted. For example, some mathematics faculty interviewed judged the
reform goal of developing student intuition about calculus concepts to be less important than developing
proficiency with a wide range of differentiation and integration procedures. A similar example involves a
project to reform the engineering science core by emphasizing scientific principles, such as conservation of
energy. Critics at the site asserted that increased student learning of larger concepts and phenomena did
not justify the decreased preparation of students to handle specific problems required in their field.
When content specialists had a concern about the curriculum, it was that grant recipients had not
made any changes, or that those made had not gone far enough. This was mainly a concern of specialists in
calculus and pre-calculus reform, and was usually directed at projects that incorporated technology —
graphing calculators and/or computers — into classrooms. According to one specialist:

The technology was used more as a low level tool. Although not inconsistent with its recommended uses,
this use did not promote the higher order skills that project leaders described as important. Instructors
and students consistently told us that its best use was simply to check answers or store formulae. Given
that the technology exists, there should be a change in emphasis in course objectives. There is now far
less need to insist on high levels of proficiency in symbolic manipulation, and correspondingly greater
need to develop high levels of understanding of fundamental concepts. More important, they are missing
the opportunity to teach students more powerful uses of this technology.

Pedagogy. As described in Chapter Two, project changes in teaching emphasized increased use
of active and collaborative approaches to teaching and learning. Reviews of the pedagogy by content
specialists were generally positive. Typical was this review of a science course:

The approach [using writing examples, group work, innovative forms of student assessment] is consistent
with research that shows students learn science more effectively and retain information better when
actively engaged in the acquisition and processing of information; i.e., when they are active instead
of passive learners. Moreover, the use of real world examples made it possible for students to relate
[the science] to their lives, an instructional technique often lost in traditional [science] courses.

Content specialists found that pedagogical practices were not uniformly sound across all projects.
In about 25 percent of the cases, content specialists found either no change in the use of traditional
pedagogy (e.g., lecture-only classes), or that the innovation was not well informed by the literature on
pedagogy, or that the project failed to address pedagogy at all, even when it was central to the intended
effort. As with their comments regarding curricular changes, the criticism concerned not what was
accomplished, but what more could have been attempted.

Completeness and Proficiency of Implementation
CCD projects varied substantially in the completeness and proficiency of implementation. At one
end of the scale, three case study sites had little success in implementing planned innovations, although
two were still in progress and may eventually achieve greater success. At the other end, one project greatly
exceeded planned activities, so much so that project participants obtained a second CCD grant to disseminate
results to other institutions. Overall, about one-half of the case study sample fully implemented planned
activities. These projects covered the full range of disciplines. An example of a fully implemented adoption of
an innovative mathematics curriculum had the following components in place, some of which went beyond
the reform version being adopted: orientation and training for TAs and faculty new to the courses, a
laboratory for students serving as a drop-in tutorial facility, delivery of new courses, gateway tests to assess
student mastery of material, and active, multi-pronged monitoring of project activities.
Equally prevalent (about one-half overall) were the projects that had implemented some part of
proposed activities but not others. Typical of these “mixed” implementation successes was a science
project designed to integrate computers with instruction:

Over one-half of the 1,000 introductory [science] students per semester are using the technology as a
problem-solving tool, a replacement for traditional laboratory experiences, or demonstration or simulation of
[scientific] phenomena. There is broad, irreversible support for the innovation among students and
faculty. Student attendance is dramatically higher, and students are learning collaboratively in small groups.

Project leaders made sure that faculty were “trained” by enabling them to co-teach during their first
teaching assignment in the new course. However, faculty received only limited technical assistance for such key
pedagogical and technological roles as: facilitating collaborative work by teams of two or three students,
using the technology, and diagnosing and addressing student misconceptions. As a consequence, faculty
are devoting less time than project leaders hoped to using the innovation and instead are spending more
class time on mini-lectures. Often technology use is confined to the limited prompts of the prototype
instructional materials.

Impact on the Institutions Receiving Awards
The CCD program and the projects that it funds are supposed to affect faculty, students, and departments.
For each of these groups, data from the survey of PIs are combined with detailed examples from the case
studies to assess and illustrate project impact.

Impact on Faculty
Survey data indicated a substantial, beneficial effect of CCD projects on both participating faculty
and on their colleagues who taught undergraduate students. In Chapter Two, Table 3 presented evidence
that many faculty across disciplines have changed their teaching behaviors in specific ways that are
consistent with expert recommendations regarding best practices. In addition, as Table 8 shows, 75 percent
of the grant recipients reported that the faculty who were most affected by the project now spend more time
on teaching undergraduates, and 44 percent reported that some additional departmental colleagues also do
so.

Table 8: Percent Reporting Increases in the Amount of Time Faculty Spend on Teaching
                                              Decrease  No Change  Small Increase  Moderate Increase  Large Increase
Faculty most affected by the project . . . .      1         24           31               29                15
Other departmental colleagues . . . . . . .       5         51           31               12                 1
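
The headline figures quoted above can be read directly from Table 8: each is the sum of the three
increase columns in a row. Here is a minimal sketch of that arithmetic (illustrative only; the variable
names are ours, and the cell values are copied from the table):

    # Each figure quoted in the text (75 percent, 44 percent) is the sum of the
    # three "increase" cells in the corresponding row of Table 8.
    table_8 = {
        "Faculty most affected by the project": [1, 24, 31, 29, 15],
        "Other departmental colleagues":        [5, 51, 31, 12, 1],
    }

    for row, (decrease, no_change, *increases) in table_8.items():
        print(f"{row}: {sum(increases)} percent reported some increase")

    # Prints 75 for the faculty most affected by the project and 44 for other
    # departmental colleagues, matching the percentages cited in the text.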

As Table 9 shows, 83 percent of the principal investigators reported that the faculty who were most
affected by the project now collaborate more with their peers regarding teaching, and 65 percent reported
that some additional departmental colleagues also do so.

Table 9: Percent Reporting Increases in Amount of Collaboration Related to Teaching
                                              Decrease  No Change  Small Increase  Moderate Increase  Large Increase
Faculty most affected by the project . . . .      1         16           28               32                23
Other departmental colleagues . . . . . . .       4         31           41               19                 5

More dramatic and more important are the results shown in Table 10: 96 percent of the PIs
reported that the faculty who were most affected by the project changed their conceptions of teaching
and learning, and 84 percent reported that some additional departmental colleagues also changed their
conceptions. Moreover, responses to other survey questions indicate that these conceptual changes were in
line with planning and implementing more effective active and collaborative learning strategies.

Table 10: Percent Reporting Changes in Faculty Conceptions of Teaching and Learning
                                              No Change  Small Change  Moderate Change  Large Change
Faculty most affected by the project . . . .      4            10             41              46
Other departmental colleagues . . . . . . .      16            39             35               9

Case study data elaborate how these changes are reflected in the actual working lives of faculty:
Some instructors found themselves thinking throughout the day about the kinds of errors their students
made. Evidence of misconceptions prevented instructors from maintaining the assumption that had
shaped much of their past instructional practice: If their lectures were clear and well organized, then their
students would learn.

Case study data supported the survey findings: case study teams found substantial effects on faculty
in many projects. Here is one example:

A faculty member said “this [CCD project] was one of the best experiences of my career.” In this project,
faculty learned new techniques in conducting guided inquiry labs and discovery-based labs, and other active
learning techniques. They gained confidence, developed leadership skills, and became trainers of others.
According to the former dean, the CCD project changed how faculty viewed their jobs — i.e., integrating
design, focusing on student learning, and using cooperative/active learning techniques rather than just
disseminating information to passive recipients. Some of the most traditional faculty, who previously
used only lectures, now use group methods.

Project visits sometimes revealed differences among faculty. In some projects, for example,
faculty who were using project materials improved their teaching while others did not. Typically, in such
projects, faculty were divided about project objectives, the need to change, and the relative importance of
different aspects of teaching and learning. For example:

Faculty perceptions of project outcomes were mixed, from very positive to sharply negative. The
negative reaction came not only from non-participating faculty, but also from one current instructor who
was loath to make time in lecture for problem solving, which he viewed as of unproven effectiveness,
given the number of topics to be covered. However, this same faculty member appreciated that
the innovation offered him the first opportunity in his career to speak comfortably with colleagues
about his teaching. Overall, the innovation seems to have increased the magnitude, and broadened the
nature, of the changes being discussed by everyone and attempted by some.

Impact on Students
The 43 leaders interviewed by telephone proposed several important student outcomes of
educational reform efforts. These outcomes can be grouped under three headings: understanding,
competencies, and attitudes. In the survey, PIs compared the proportion of students participating in project
activities who achieved these outcomes with the proportion of students who had achieved them prior to the
project. As Table 11 shows, for every outcome listed, more than two-thirds of PIs reported that, when
compared to an equivalent group of students, more CCD project students achieved the given outcome.

Table 11: Percent Reporting that More Students Achieved Valued Outcomes

                                                               Fewer       No       More     Many More
                                                              Achieved   Change   Achieved   Achieved

Gaining understanding of or familiarity with:
Recent concepts, findings, and/or theories . . . . . . . . .               24        50         26
The scientific approach to problems . . . . . . . . . . . .                15        46         39
Limits of science, mathematics, or technology . . . . . . .                24        44         32
The role in society of science, mathematics, or technology .               30        41         29
Gaining competence in:
Applying concepts, principles, or theories . . . . . . . . .               10        46         44
Using methods and/or equipment . . . . . . . . . . . . . . .      1        15        41         43
Framing researchable questions . . . . . . . . . . . . . . .               32        43         25
Devising methods, equipment, or procedures . . . . . . . . .               29        45         26
Working as a team member . . . . . . . . . . . . . . . . . .               15        32         53
Developing greater interest in, or comfort with:
The science taught . . . . . . . . . . . . . . . . . . . . .               12        52         35
The mathematics involved . . . . . . . . . . . . . . . . . .      1        29        43         27
The computer skills needed . . . . . . . . . . . . . . . . .               18        38         44
The laboratory or field equipment involved . . . . . . . . .      1        25        36         38

Case study data confirm the survey finding that, in general, CCD projects led to student learning
gains. In only three out of 25 sampled projects could case study teams find no evidence of at least some

-31-


positive outcomes for students, although results were sometimes mixed (i.e., some students improved,
others did not). According to interviews with students, the most typical outcome was increased student
engagement in their own learning:

Most students said that the most valuable aspects of the group activities were the interactions with other
students and the sense of being responsible for their own learning.

The more successful projects aimed beyond learning; they tried to encourage students to consider
majoring in mathematics or science. According to one substantive expert reviewing a CCD science project:

Most non-science majors signed up for the course to satisfy general education requirements. Often they
came into the class with either a distrust or dislike of science. They finished the course with a new
appreciation of the relevance of science to their lives. Taking one or both of the CCD courses stimulated a
number of students to consider majoring in science.

Yet direct assessment of student learning outcomes in CCD projects was rare. Instead, surveys of
student attitudes about the class experience, enrollment trends, and the like served as indirect measures of
effectiveness. The following example is typical of what principal investigators are able to say about the
impact of their projects on students:

Project staff are just beginning to consider outcomes for students systematically (e.g., comparing students
in laboratories using the old and new laboratory manuals). However, from reviewing student journals
and interviewing students, it appears that students were more engaged in their learning, were learning
more, were better able to articulate what they were learning, and for some, were more comfortable with
science.

On the other hand, given that one of the objectives of CCD is to improve student attitudes toward
science, mathematics, and engineering, sometimes sophisticated assessment methods are not needed.
Consider two examples, where the pre-project vehicle for serving very large numbers of students was the
lecture/recitation/laboratory format. The first project retained this format, but developed active learning
activities for students to carry out during the lecture period. The second developed alternatives to lectures.
Each quote, from faculty not directly involved in the projects, cited dramatically increased student
attendance as the most important outcome:

I don’t know too much about [the innovations], but I support them because almost all the students are
attending most of the time. If they attend, they will learn something, no matter what you’re doing. But
it’s better than that. When I’ve walked by the room, I see students paying attention and doing things.

The large lecture, as it is, is failing because students won’t even come. By the end of the semester, as
little as a third of them show up. Students who are there drink coffee and read the student newspaper as
much as they pay attention to the lecture. They just are not engaged. Lectures where they do pay some
attention are taught by veteran, “star” lecturers who have an arsenal of demonstrations they can do.
There aren’t too many of them, and they’re retiring. Certainly not enough to cover every lecture, every
term. The [innovations] permit a much larger number of faculty to be more effective instructors.

Impact on Departments
The CCD program seeks to influence academic departments directly through the institutionalization of
project activities, and indirectly through promoting educational reform nationally and through encouraging
changes in the academic culture (e.g., faculty rewards for teaching). In general, CCD projects have had
substantial influence on faculty attitudes toward teaching, some influence on curriculum reform, and
limited effect on academic cultures. This result is not surprising because many CCD projects are modest in
scope and do not purport to reform, say, entire course sequences.
More than two-thirds of responding PIs claimed that their departments made a formal commitment
to use project materials and activities, although the extent of their use was unclear. Survey data indicated
that when the department did change, 10 percent of PIs attributed the change directly to CCD projects; 50
percent claimed the project was at least partly responsible for the innovation.
Large size and scope were no guarantee of substantial departmental effects. In two very large
projects, project participants added new courses and achieved some reform in texts and pedagogy, but had
very little success in gaining acceptance by faculty who had not participated in the project. In contrast, in
another project of equal size and scope, the project led to reform of the entire computer science curriculum.
Case study data indicate that a few projects led to dramatic improvement in student learning
through new courses and instructional approaches, yet it was rare for PIs to be able to get faculty beyond those
named in the grant to try the new materials and instructional methods. Other projects resulted in implementing
new courses but not in changing the nature of student/faculty relationships. Yet another project succeeded in
getting most faculty in the department to use technology in instruction, but did not improve their teaching.
According to one content specialist who visited this project:

Although it is likely that the use of [the technology] will be required in more courses, I saw no evidence
that this would move beyond the low level tasks at any time in the near future.

At this institution and at some others in the case study sample, it was clearly more difficult to reform
departments and departmental curricula than it was for a small number of committed faculty involved in a
single course to experiment with new teaching practices. Yet CCD can point to a handful of
projects that dramatically influenced entire departments. Consider this description of departmental change
attributable to a CCD project:

According to the immediate past dean, the CCD project was “probably the single greatest advance in
science and engineering education [at the institution] in the past decade.” Consider the variety and
importance of changes in the participating departments and in the college as a whole:

• Two of six departments require the new lower division courses; two others have made them optional.
• One participating department has revamped both its entire curriculum and its pedagogy.
• The college finished second in a recent national competition to reward excellence in teaching.
• The college obtained an additional CCD grant, recognized by participants as being a result of the success
of the 1989-1992 effort. The new project involves two universities and a community college.

Workshops are the mechanism used to train faculty to teach design and to facilitate student learning,
especially preparing and using student groups. Post-workshop surveys show that more than half the
participants claimed to be using more active and collaborative learning styles.

• The most recent accreditation visit gave the college kudos, a dramatic improvement over the previous
visit prior to the two CCD projects.

Extent of Impact Beyond Funded Institutions
The focus of the surveys and the majority of the case studies was what happened to faculty, students, and
departments within the institutions that received CCD grants. But an important premise underlying
programs like CCD is that funds invested to develop projects at a relatively small number of institutions —
during its first eight years, grants were awarded to 328 different colleges and universities as well as to 32
other institutions — will have noticeable impact on other institutions throughout the country.
Five sets of data bear on the extent of impact of the CCD program beyond institutions that received
grant funds. One comes from a study of the calculus reform effort, which has been CCD’s most focused
effort, conducted under the auspices of the Mathematical Association of America (MAA). The data
discussed in the MAA study were not collected as part of this evaluation; they are presented in the sidebar
below, “Calculus Reform: A Major CCD Initiative.”
A second data set combines responses to the survey of PIs with responses to the survey of
applicants for CCD grants whose proposals were declined. Each respondent was asked to assess his or her
familiarity with materials and products that had been developed under previous DUE grants. The third data
set comes from a tracer study: telephone interviews were conducted with people who did not receive CCD
grants, but who received or requested information from, purchased materials developed by, or attended
workshops that were conducted by CCD grant recipients. The fourth set involves visits to institutions that,
although they did not receive project funds, contained faculty who attended project-sponsored workshops.
The fifth set consists of responses to an electronically-administered evaluation of a project that both
distributes a regular electronic newsletter and maintains and continually updates a database on the World
Wide Web. The sections below describe the findings from the second, third, fourth, and fifth data sets.

Calculus Reform: A Major CCD Initiative
Since 1988, NSF’s Division of Undergraduate Education has invested well over $20 million in
projects aimed at reforming undergraduate calculus. To what extent has this investment paid off?
A 1995 report from the Mathematical Association of America (Assessing Calculus Reform Efforts:
A Report to the Community)
provides data about the nature and extent of undergraduate calculus reform in the
United States. Although this study was not an official review of NSF’s calculus program initiatives, it
provides useful information for assessing the impact of NSF’s investment. Information came from two
large national surveys conducted in 1992 and 1994, and from other sources. The report’s major findings were
as follows:

Calculus reform is taking place at all levels of post-secondary institutions. In a spring 1994 survey,
68 percent of 1,048 responding mathematics departments indicated that either modest or major calculus
reform efforts were currently under way. Of the departments responding, 22 percent were undertaking
major reform efforts and 46 percent described their efforts as modest. We estimate that at least 150,000
students, or 32 percent of all calculus enrollments, in spring 1994 were in reform courses. [Tucker and
Leitzel, 1995]

According to the report, NSF support has been critical to the reform movement. CCD funds have
been used to develop a number of different reform texts, which in 1994 were being used in some manner
by 40 percent of institutions where reform was under way. CCD calculus reform grants have also been used
for many other purposes, such as:

• conferences,
• a newsletter on undergraduate mathematics education,
• institutional planning for reform,
• cross-disciplinary reform efforts,
• use of technology in calculus, and
• dissemination of successful reforms to other institutions.
Calculus reform involves changes in the modes of instruction, often including the use of technology,
along with an increased focus on conceptual understanding and decreased attention on symbol manipulation.
Reform may involve cooperative learning, open-ended projects, regular writing assignments, or increased
emphasis on modeling and applications. The study found that “large numbers of reform instructors report
that the new instructional methods are having positive effects on students’ conceptual understanding,
mathematical reasoning, and problem-solving abilities.” Furthermore, the calculus reform movement
appears to be spreading to other undergraduate mathematics courses, as well as to secondary calculus and
pre-calculus courses.

Awareness and Use of DUE Products and Materials
In the two surveys that were conducted — one of grant recipients and the other of a sample of
unsuccessful applicants — respondents were asked whether they had ever used products developed under
an NSF undergraduate education grant. Note that some of these products may have been developed under
DUE programs other than CCD (although the question could have asked solely about CCD products, many
respondents probably would not have been able to identify the actual program involved; as a result,
the results would have been more precise but less accurate). The data in Table 12 — which displays the
number of successful and unsuccessful applicants who reported varying degrees of familiarity with DUE
products — indicate that existing dissemination activities are reaching undergraduate faculty, at least those
who are interested enough in curriculum development to apply for grants. Sixty-three percent of CCD grant
recipients have made use of at least one DUE activity or product. A significantly higher percentage of
unsuccessful applicants than of grant recipients (38 percent versus 16 percent) were not aware of any DUE products.

Table 12: Reported Familiarity with DUE Products, by Award Outcome
              Not Aware of Any   Aware, but Have Not Used   Have Used
Declines          86 (38%)               45 (20%)            98 (43%)
Awards            52 (16%)               71 (21%)           210 (63%)

χ² = 37; p < .001.
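
The reported statistic can be reproduced from the cell counts in Table 12. The following is a minimal
sketch of that computation (the use of Python and scipy is our illustrative choice, not part of the original
analysis):

    # Chi-square test of independence for Table 12, using its observed counts
    # (rows: declines, awards; columns: not aware, aware but not used, have used).
    from scipy.stats import chi2_contingency

    observed = [
        [86, 45, 98],
        [52, 71, 210],
    ]

    chi2, p, dof, _expected = chi2_contingency(observed)
    print(f"chi-square = {chi2:.1f}, df = {dof}, p = {p:.1e}")

    # Yields a chi-square of about 37 with df = 2 and p far below .001,
    # consistent with the statistic reported above.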

While many information awareness activities are occurring and are having some effect, the data in
Table 12 refer only to faculty who were already sufficiently engaged in educational innovation to apply for
a CCD grant. When case study team members discussed innovations with faculty who were not involved in
the NSF grant, the latter frequently reported ignorance of that innovation’s specifics. Faculty presumably
would be even less familiar with the many CCD-sponsored efforts beyond their own institutions,
as illustrated by the following comment by one faculty member:

This is the first time I’ve ever paid attention to how I teach and it’s been the first opportunity I’ve ever
had to actually discuss teaching ideas with a colleague. But, how do you go about finding out what
people are doing out there to teach differently? I wouldn’t know where to start. It would be great if NSF
produced a CD-ROM of all the curriculum development proposals and final reports that would let you
search through them by subject, type of institution, type of innovation, type of course, or whatever.
Better yet if the disc contained the final products.

NSF has established a database on the World Wide Web which such faculty could find useful. As is
true for efforts by grant recipients to put information on the internet, making the data available is easier
than succeeding in making potentially interested faculty aware of it.

Results from a Tracer Study
One of the survey questions asked grant recipients to estimate the number of institutions (other
than those receiving funds under the grant) that contained faculty who were using project ideas or
materials. Because these estimates were likely to vary widely in accuracy, an effort was made to assess
their validity. Specifically, the study team selected 20 projects that had reported that faculty in at least 300
institutions were using project ideas or materials. Requests were made by mail and/or telephone to the PIs
of these 20 projects for names of faculty (beyond project institutions) who had requested information,
purchased materials, or attended workshops. Fourteen lists of names were received (from PIs, publishers,
and software distributors). Individuals on these lists were called in random order, and 184 were reached.
Of these, 168 (91 percent) remembered having attended a workshop or conference, or requesting
or purchasing materials. These 168 individuals were interviewed by telephone about their use, experience,
and future plans regarding project ideas and materials.
Of the 168 who remembered the project, 138 (82 percent) had already tried project ideas, software,
or instructional materials. Of the 30 who had not yet tried anything,

ten described plans to do so during the next year that the interviewer rated “concrete enough to be credible”;
and

five indicated that their thoughts, attitudes, or behavior had been noticeably affected by their slight connection
with the CCD project.

Of the 138 who had already tried some aspect of the project:
96 percent said they used it more than once and found it helpful;
93 percent said they expect to use it on a long-term basis;
92 percent indicated that they intend to encourage (or already have encouraged) colleagues within or beyond
their institution to learn more about it; and

58 percent reported that, as a result of their efforts, their colleagues were now using the software or materials,
or in some way had changed their teaching because of project ideas.

Table 13 presents, in descending order, how often respondents cited each listed benefit.

Table 13: Percent of Unfunded Adopters Who Reported Success, by Benefit
Benefit Proportion
Increased student understanding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Increased student interest in, or comfort with, the material . . . . . . . . . . . . . . . . . 92
Helped students learn from procedures or gain competence . . . . . . . . . . . . . . . . . 91
Changed the respondent’s conceptions of teaching and learning . . . . . . . . . . . . . . . 60
Increased the amount that the respondent talked with colleagues about teaching . . . . . . 54
Changed the respondent’s approach to assessing learning . . . . . . . . . . . . . . . . . . 48
Increased the amount of time the respondent spends on teaching . . . . . . . . . . . . . . 42

Results from Visits to Six Campuses That Did Not Receive Funds
Six out of the 33 institutions visited neither received CCD funds as individual institutions, nor were
part of CCD-funded consortia. These six were visited to learn about the results of dissemination efforts by
three other funded projects in the case study sample. The six visits were not selected at random out of all
the institutions that may have benefited from these three grants, but neither were they the first choices of
the PIs involved. Rather, they were selected from nominations made by the PIs in response to geographical,
scheduling, and cultural constraints. The geographical and scheduling constraints were imposed in order to
accommodate the evaluation team’s already existing field visit schedule, and to do so within the team’s
established budget parameters. The cultural constraints explicitly excluded institutions that were known,
prior to the CCD project involved, to place an unusually high premium not only on excellence in teaching
but also on being at the forefront of educational reform; in other words, the evaluation team wanted to
exclude institutions that were more predisposed than their peers to take advantage of NSF-funded
opportunities, so that those actually visited would be more representative of U.S. colleges.
Faculty in five of the six institutions visited had successfully and skillfully implemented project
ideas in two or more classrooms. In the remaining institution, the two faculty who had attended the
CCD workshop had enthusiastically urged their colleagues to try out project ideas and materials but were
outvoted. However, at the time of the visit, two years after the workshop, this institution’s new
president had just announced a financially induced, radical restructuring that, at the end of the academic
year, would result in faculty layoffs in the department. The department head felt that, because the personnel
changes that had been finalized would remove the opposition to the project, he would be able to convince
his remaining colleagues to implement project ideas and materials.
This restructured college illustrates an important finding: both psychological and contextual
readiness sometimes take several years to develop.
The telephone tracer
study yielded additional data on this point: individuals in ten institutions who had been exposed to project
ideas and materials more than a year earlier had since developed concrete plans to implement something
based on them during 1996-97. Thus evaluators need to refrain from prematurely concluding that
non-implementation represents a “failure”; instead it may merely indicate an incubation process that may
ultimately result in full readiness to change.

Results from an Evaluation of an Electronic Dissemination Effort
The dissemination efforts that have been reported above have used the conventional
strategies of conducting workshops and/or preparing software or other instructional materials. However,
the use of electronic dissemination strategies has recently increased, and one of the projects
included in the case study sample that used an electronic dissemination strategy had conducted its
own tracer study; the study team therefore obtained a copy of those data. The data are presented here not as
representative of projects that disseminate information electronically, but rather as an indication of the
potential of such strategies.
The project has developed a course that uses contemporary news stories involving statistics to help
students become more sophisticated consumers of the quantitative data that they encounter in everyday
life. In addition to implementing the course, this project electronically disseminates a biweekly newsletter
that includes (1) abstracts of and comments on recent newspaper stories and journal articles that contain
research study data, (2) complete citations for each news story, and (3) discussion questions for classroom
use. It also maintains a database on the World Wide Web that, among other things, contains (1) key word
indexes linked to articles in past issues, (2) syllabi for previous versions of the course, (3) descriptions of
teaching aids, and (4) links to the full text of articles when it is available, as well as to resources at other
web sites.
The PI of the project sent an electronic evaluation form to users, requesting that they return it
completed. He received 323 responses; 257 of the respondents teach in colleges or universities. Out of the
total number responding, 92 percent said that they had used the newsletter (delivered over the internet) and
42 percent said that they had used the more recent technology (the World Wide Web database) in their
teaching or other professional activities. Out of the 257 who teach in colleges or universities,
the newsletter was used by:

91 percent as a source for examples of how statistics are used in the real world;
74 percent as a source for topics for class discussion; and
21 percent as a source for student reading assignments.

Out of this same population, the database was used by:
37 percent as a source for ideas on teaching particular statistical topics;
33 percent to search for articles on particular topics;
28 percent to obtain the full text of articles abstracted; and
16 percent to access teaching resources such as evaluation instruments, course project guidelines, and video
summaries.

Finally, out of these 257 respondents:
15 percent indicated that they had already taught a course similar to the CCD project course, and
19 percent more indicated that they plan to teach one in the near future.

Commentary on the Five Kinds of Evidence
Ten of the faculty in the tracer study and 19 percent of those who responded to the electronically-administered
survey did little or nothing during the first two years after exposure to reform ideas, but then
began planning more major reform efforts. Among other things, these experiences highlight the difficulty
of measuring the full impact of programs like CCD that work best by inducing faculty both to pay more
attention to what their students are experiencing and to reconstruct their conceptions of teaching and
learning. It suggests that whenever the goal of a program involves changing long-standing attitudes
and behavior, the full extent of both the local and national impact of many of its best projects will not be
manifested until several years after their funding has stopped. A corollary is that any assessment of CCD’s
first six years that is based on changes that have already occurred will underestimate its long-term, national
impact.
Yet even without taking into account the reform efforts that are only now in progress, the five
kinds of evidence build a consistent picture: The CCD program has prompted faculty in many institutions
that did not receive any CCD grant money to try out project ideas and materials. Furthermore, most of the
faculty who conducted these experiments in reform considered them sufficiently successful to justify
repeating and extending them.

Chapter Summary
A meaningful indication of the success of the CCD program must be multidimensional. Some of the
dimensions that are part of success have to do with the creation, development, and support processes, and
were already discussed in Chapters Two and Three. Dimensions discussed within this chapter include:
soundness, completeness and proficiency of implementation, nature and extent of impact, and transfer to
other institutions. Individual grants typically achieved success in some aspects and had shortfalls in others;
in the aggregate, the CCD projects had successes in each aspect.
Content experts judged all 25 visited innovations to be scientifically and mathematically sound.
However, they did not find them universally sound in their pedagogical aspects. Overall, about one-half of
visited projects fully implemented planned aspects while the other half of the projects implemented some
proposed activities but not others.
The CCD grants generally produced meaningful effects for students and faculty. Over two-thirds of
the PIs reported that more students achieved each of 13 outcomes advocated by national experts. Over
five-sixths of the PIs reported that more students achieved each of the following six outcomes:

increased understanding of the scientific approach to problems;
increased competence in applying concepts, principles, or theories;
increased competence in using methods or equipment;
increased competence in working as a team member;
increased interest in, or comfort with, the science taught; and
increased interest in, or comfort with, the computer skills needed.

The most pronounced impact of CCD projects is their impact on faculty. Compared to before the
project, 75 percent of the PIs reported that the faculty who were most affected by the project now spend
more time on teaching undergraduates, 83 percent reported that such faculty now collaborate more with
peers about teaching, and 96 percent reported that such faculty changed the way they think about teaching
and learning. A smaller but still substantial proportion of grant recipients — 44 percent, 65 percent, and 84
percent, respectively — reported that the project affected some additional colleagues in these same ways.
Impacts on departments and institutions were less common or pronounced. A majority of PIs
reported that their department had made a formal commitment to use project activities or materials on a
long-term basis, a proportion that was mirrored in the case study results, although the decision to continue
offering project courses on a long-term basis was not typically coupled with a commitment to re-examine
other department courses.
Over a third of the grant recipients reported that their department’s commitment to undergraduate
education had increased, at least partly because of their CCD project. Yet the program has had less impact
on faculty reward systems, and unless the CCD project dealt with a large-enrollment course required of
majors, faculty in the department often knew little about it.

Five kinds of evidence were described bearing on the impact of the CCD program beyond the
institutions that received CCD grants:

data from a Mathematical Association of America-sponsored study of the calculus reform effort;
survey data, from both grant recipients and unsuccessful applicants, regarding their familiarity with and use of
materials and products that had been developed under previous DUE grants;

data from telephone interviews with faculty who did not receive grants, but who received or requested
information from, purchased materials developed by, or attended workshops conducted by CCD grant
recipients;

data from interviews and observations at institutions that did not receive project funds but that contained
faculty who attended project-sponsored workshops; and

responses to an electronically-administered evaluation of a project that both distributes a biweekly newsletter
electronically and maintains a database on the World Wide Web.

The totality of evidence supports the premise that funds invested to develop projects at a relatively
small number of institutions have noticeable impact on other institutions throughout the country. The
impact spreads from institution to institution not only through direct personal contact at conferences and
workshops, but also through products (e.g., textbooks, instructors’ manuals, and software), including
products distributed through the internet, especially the World Wide Web.
These data also confirm that there is sometimes a substantial delay between a first exposure of
individual faculty members or departments to reform ideas or materials and major reform activity.
Sometimes the delay consists of a relatively silent incubation period where the only activity related to
reform is reflection and discussion. Other times this period also includes several modest experiments with
reform ideas. An important implication is that whenever the goal of a program involves changing
long-standing attitudes and behavior, the full extent of both the local and national impact of many of its
best projects will not be manifested until several years after funding has stopped. A corollary is that any
assessment of CCD’s first six years that is based on changes that have already occurred will
underestimate its long-term, national impact.
The five kinds of evidence build a consistent picture: The CCD program has prompted faculty in
many institutions that did not receive any CCD grant money to try out project ideas and materials.
Furthermore, most of the faculty who conducted these experiments in reform intend to repeat and extend them.

Chapter Five: Site Dynamics and Factors Associated with Success
The previous chapters defined and described the success of the CCD projects. The evaluation team searched for
factors that facilitated or hindered success to assist NSF staff and CCD participants in promoting more
effective projects in the future. This chapter focuses on factors that were found to affect success.
According to the case study data, successful implementation is the best predictor of various forms
of positive, within-institution outcomes. Because the factors that affect implementation success overlap
considerably with those that affect project continuation, these two types of success are addressed together. The
second section of the chapter uses evidence based on the survey data to focus on factors associated with other
positive outcomes: changes in faculty beliefs and behavior, student learning gains, and impact on the department.

Factors Associated with Implementation Success and Continuation
Factors affecting success can be grouped under the following headings: contextual factors, process factors,
management and logistics, and innovation characteristics.

Contextual Factors
Several contextual factors affected the success of CCD projects:

Host-innovation fit. A particularly important contextual factor is the “fit” of the innovation with
the academic culture of the host environment. This was evident in many of the case study sites. At one
campus, the project meshed well with already established educational reform goals and, as faculty and
administrators said, the project was awarded “just at the right time.” On another campus, the project fit
very well into the series of seminars that were required of all entering freshmen. On a third campus, the
project was responsive to external pressures from an accreditation committee to make changes.
On the other hand, lack of fit into the local culture served as a potential barrier for some projects.
On one campus the project was primarily a pedagogical innovation focused on meeting the needs of learners
with a variety of learning styles. Although many faculty supported the concepts and new practices, the project
was connected to a consortium that was funded to address gender equity, which was not an issue or concern
at the case study site. The gender equity label kept several faculty members from buying into the project.
The importance of undergraduate education to an institution’s mission usually is reflected by the
faculty reward system, which in turn increases or decreases faculty motivation to devote attention to teaching.
As noted in Chapter Two, CCD projects are primarily innovations in teaching and are the product of faculty
involvement in experimentation or research on teaching. However, even when such research leads to the
publication of articles in refereed journals, it is not the form of scholarship that is typically rewarded in
mathematics, science, and engineering departments, particularly for tenure recommendations.

In many of the case study sites, untenured faculty reported that they were cautioned by senior
faculty that they were spending too much time on areas related to teaching and not enough time on their
academic research. In contrast, on a few campuses, there were performance credits or merit pay, not just
for high quality teaching, but for innovation in teaching. DUE staff and educational reformers are well
aware of the prevailing academic culture, and have explicitly identified “changing academic culture so that
it is more supportive of undergraduate teaching” as one of CCD’s goals. But until it changes, a campus’s
academic culture is usually a substantial barrier to implementing or sustaining curricular and instructional
innovations.
At a research university in the case study sample, an administrator tried to change the academic
culture by taking a step that other administrators might find effective. When recruiting a new project
director, this dean provided the new hire with written assurance that for personnel decisions affecting him,
scholarship related to teaching would be treated as equivalent to scholarship in his academic field. The
memo to the faculty member’s personnel file stated:

This position is, of course, tenure track, and you should know something about the expectations for
promotion and tenure within our departments and college. While we expect high quality classroom teaching and
service from all of our faculty, and especially for this position, scholarship is always a major component
in tenure decisions. Scholarship means laboratory research for many of our faculty; however, we
recognize that research into educational materials, approaches and strategies is equally suitable for
individuals whose talents lie in this area.
In any event, regardless of the particular focus of scholarship,
the characteristics are conceptually the same including publications in respected, refereed journals,
conference activity and successful grant applications. There is also a considerable overlap between the
administrative responsibilities and the scholarship with this particular position. [Emphasis added]

Status of innovators and support from senior administrators. The position of key CCD project
staff in the university context — in terms of their visibility, power, and reputation — played an important
role in successful implementation as well as continuation. In a majority of the case study sites, these staff
were deans, department chairs, or well-respected faculty members; or they were in departments that
enjoyed high status in the institution. Cross-case analysis of case study data reveals that higher status of
key actors or advocates is associated not only with broader faculty support, but also with a higher level of
support from senior administrators. The case studies are replete with examples of a dean facilitating the
implementation process both tangibly — e.g., allocating resources — and intangibly — for example,
lending the project his or her imprimatur, thereby moderating the extent of overt faculty resistance.

At a research university where the success of the reform effort is nationally recognized, the
department head used three strategies to support reform efforts. First, he used the department’s poor
reputation for its freshman introductory course as leverage for convincing the dean to invest substantial
financial resources in an effort to improve teaching. Second, he worked with the principal investigator to
convince two high status, senior faculty to try teaching a reform section (and ensured that they had the
opportunity to receive the training and support to succeed). Here is how he explained his third strategy:

Whenever faculty are enthusiastic about experimenting with an innovative approach to teaching, I encourage
them to go ahead and make sure that what they are doing doesn’t come to the rest of the faculty for a vote
prematurely, because that would kill it. Improving teaching takes time: if the basic ideas are sound,
then conscientious participants will make it work, refine it, and gather the evidence to make their case.
Then the idea has a fighting chance.

Administrative support is also crucial to continuation. Building ownership and commitment to a
new project is difficult and time consuming, especially for multidisciplinary projects that cut across several
departments. While the project is receiving funds, the PI’s host department usually claims some ownership
for the project. However, once the funding ends, ownership diminishes and unless a dean or senior faculty
member becomes an active champion of the program, the chances for continuation decline.
Even in sites where the project was successfully implemented, low status of the department or of
key participants is associated with low chances for continuation, as illustrated in this case study site:

Core courses in engineering, focusing on design issues, were adopted from another university and
implemented in a small low status department in a large school of engineering. The innovative courses
conflicted with the traditional structure of the curriculum, and the project staff were unable to convince
department chairs and their faculty of the benefits of the new approach.

Prestige of NSF. Administrators and faculty in projects of all sizes and in all kinds of institutions
asserted that the prestige associated with receiving an NSF grant contributed to the chances for implementation
success. In research universities, it enhanced the image of research on teaching, even when it did not fit
with the prevailing academic culture. In institutions, such as community colleges, that do not typically
receive NSF grants, PIs and faculty explicitly mentioned NSF support as a source of status for the project
and faculty.

Process Factors
Cross-case analysis of CCD projects reveals that certain processes or actions facilitate or hinder
successful implementation and outcomes.

Vision and direction. Having a clear vision and providing clear direction are associated with
success. When project leaders and participating faculty had a clear vision of what they intended — which
was true for most projects — the implementation was usually successful. Correspondingly, when
these elements were missing, the implementation tended to be flawed. Consider a project that involved
developing and implementing at two campuses a multidisciplinary science course that would meet general
education requirements:

Faculty from both campuses met frequently and struggled with the course content. By the end of
the summer, faculty had not fleshed out a detailed course outline that made connections among the fields or
that incorporated societal implications of the topics. As a result, instructors developed their own sections
of the course, mostly in isolation. In effect, this made the course a sequence of “mini-courses” that lacked
connections, both conceptually and operationally.

Consensus regarding the need for the innovation. Vision that is limited to a few participants is
not enough. It must be coupled with a high level of consensus about (1) the innovation and its vision, and
(2) the need for making a change. Although starting with volunteers maximized the chances that initial
implementation would proceed smoothly, projects that were successful in convincing additional faculty to
experiment with the innovation engaged their colleagues in discussions about the need for the change, and
about how the project’s goals and design attempted to address that need. In projects that were less successful,
there were obvious gaps in vision and a low level of consensus about that vision. One example is a project
in which there were clearly stated philosophical differences on whether the purported benefits of the
project justified abandoning certain content objectives. This conflict was not resolved.

Training and ongoing opportunities for discussion. The cross-case analysis points to faculty
training as the most significant factor influencing success and proficiency of implementation. This was
especially true for innovations that involved changes in pedagogy — e.g., active learning or new
approaches to labs — and changes in technology. Almost without exception, the projects that experienced
major implementation difficulties were those in which faculty received little or no opportunity to become
fully comfortable with the changes. Here is an example:

During the summer, three members of the faculty received the training to implement [a version of
calculus reform], and received a commitment — sight unseen — from their colleagues to adopt the
corresponding text starting that fall. Implementation differed vastly between those who had attended the
summer workshop and those who had not.

In courses where TAs are heavily involved in instruction, TA training is probably the most
important factor in determining the extent of project success. In a discovery science program taught by
TAs, first time TAs were overwhelmed by the need to respond to student design ideas (rather than being
able to follow a predictable plan), and tended to get bogged down in the mechanics. Securing high quality
instructional services from a TA for such courses probably requires that the TA go through the course
once or twice, with substantial support from a faculty member.
For both faculty and TAs, the most effective training programs in the case study sites went beyond
merely providing technical knowledge and skill. The best provided most or all of the elements listed
in Figure 4.

Figure 4: Elements Provided in Effective Training Programs
Opportunities to hear and respond to discussions of the conceptual underpinnings
Experience participating in the kind of activities that the innovation provides for students
Knowledge and skill related to the uses and mechanics of the innovation
Early warning of potential pitfalls, along with strategies for avoiding them
A sense of being part of something larger
Awareness that others also occasionally feel awkward, unsure, and frustrated
Strategies for helping students adjust to new expectations
Opportunities to plan sample activities and receive feedback
Follow-up opportunities to share successes, receive advice, and engage in collaborative problem solving

Evaluation. Both kinds of evaluation described in Chapter Three contribute to implementation and continuation success. Unless CCD materials are accompanied by more and better summative evaluation than is typical, faculty who have not been involved with the innovation will generally find it difficult to appraise its value for their purposes and, if they are at another institution, for their setting. Mere descriptions of innovations seldom persuade potential adopters. Yet even when solid evaluation data are available, many faculty do not understand or respect the legitimate differences between educational evaluation data and physical science data, as one faculty member interviewed illustrated:

I don’t have any idea if it [the educational innovation] is any good or not. I want hard data, numbers, not
mushy stuff. Like my experiments; look here... (faculty member shows interviewer a histogram). That’s
hard information, cut and dry. You know whether it worked.



There probably are many other scientists and mathematicians who are not familiar with the
complexities of educational research and the kinds of evaluation it entails, although they may not have as
much resistance to educational evaluation data as the above respondent. Even scientists and mathematicians
receptive to learning about educational evaluation can be daunted by the experience, as illustrated by the
principal investigator of a project that was evaluated by an outside consultant:

We had a seminar yesterday and most of the department showed up to hear (the consultant) do a
comprehensive presentation of the evaluation data. I think a lot of faculty members had no idea what it
meant and, I have to admit, even I was overwhelmed by the volume of information. They looked puzzled
by the jargon like “pre-test,” “post-test,” and “ANOVA” (Analysis of Variance).

Finally, we talked to one mathematics faculty member who had attended a workshop on calculus
reform, conducted by a CCD grant recipient. This individual was well connected in the calculus reform
community, and was already using many reform methods. When asked what he had learned from the
conference, he said:

To me the important part of the workshop was hearing a speaker talk about assessment — both how to
assess student comprehension and learning, and how to assess programs. I learned that it was much more
complicated than I thought, that you have to anticipate alternative explanations for positive results and
gather data that clarifies which one applies to your situation.

The need for increasing and improving formative evaluation is probably more critical than the need for summative evaluation. Although faculty incorporate cycles of testing and revision as standard practice in their laboratory experiments, they sometimes fail to apply these principles to their educational experiments. Instead, they sometimes treat development as a linear, one-shot process rather than subjecting each component to systematic revision based on data gathered from carefully designed instruments and procedures.
One of the most effectively implemented large scale projects in the case study sample used at least
three different formative evaluation strategies:

During the first two years, they hired a staff member from the university’s Center for Teaching and
Learning to visit each section around the middle of the term. The evaluator would observe the first half
of the class; then the instructor would leave and the evaluator would talk with the students about what
was going well, and what was problematic. Specific feedback was given only to the instructor, but
themes were reported to project leaders. Second, a School of Education graduate student was hired to
administer questionnaires to students. Third, this graduate student also conducted focus groups with
students and with faculty.

Barriers and constraints. Barriers and constraints are to be expected, and PIs were usually able to
overcome them. The survey of PIs asked respondents to identify barriers that affected the grant to the point



where major aspects of the project: (a) would not be fully developed or implemented by the end of the
grant, or (b) were no longer in use at the institution. Predictably, the two most frequently mentioned
constraints had to do with time and money. Of the PIs who responded, 29 percent cited insufficient time or
money to complete development, and 19 percent cited insufficient time or money for routine implementation.
Nineteen percent also cited administrative indifference or opposition as a barrier to completing project
implementation and continuation.

Management and Logistics
The attention given to project management affected implementation success and prospects for
continuation. Some of the larger projects requested funds for project managers or coordinators. Without
these positions, many projects would not have achieved the success that they did. Project coordinators
assumed responsibilities for tasks that enabled the PI to concentrate directly on substantive and pedagogical
issues, and on working with faculty and students. Some of the work that was facilitated by project
coordinators included: getting materials ready for experiments, publishing and distributing newsletters,
coordinating meetings of TAs, and managing the recruitment of students into the new courses. Each of
these is multifaceted; for example, successful recruitment of students depends on linking with student
advisors, getting listed in course catalogs (or at least in the preregistration and registration schedules sent to
students), and gaining support of relevant faculty, who sometimes are in ancillary departments. When
funds for management or coordination were eliminated from the budget and PIs were not able to find funds
to support these functions, they found, to their frustration, that their management and coordination tasks
were preventing them from spending enough time on the substantive project work.

Innovation Characteristics
Two dimensions or characteristics of the innovation affected its success. One was its connections
to the rest of the curriculum. The influence of this dimension is complicated: as discussed below, tight
linkages had both advantages and disadvantages. The power of the intervention was enhanced when its
underlying principles were applied across the curriculum, or truncated when the project represented a
cultural island — meaning that there were few links to other courses in the department. Many faculty
members asserted that it was difficult to revise one or two courses without revising the curriculum. As one
said, “Faculty in subsequent courses must take into account what students have had in the new courses.
Mostly, that is not true now.”

Yet tight linkage to the rest of the curriculum, or to courses in other departments, made the innovation more difficult to implement, since communication and coordination with other faculty became mandatory.



However, when these processes were carried out effectively and the faculty in the other courses saw tangible
evidence of the benefits (in terms of project students coming to them more motivated or better prepared),
they became allies for project continuation. Tight linkages seemed important in innovations
designed for majors or for service courses. Such linkages did not seem important for courses serving
non-majors that did not serve as prerequisites to other courses. Faculty who taught such free-standing
courses usually had considerable latitude to innovate, but had no collegial constituency that would try to
continue the course when the instructor left or changed focus.
One innovation characteristic affected the chances for continuation in a consistent way: the extent of emphasis on instruction. Compared to innovations that focused solely on changing the curriculum, those that successfully changed instruction were more likely to be judged by the case study team as likely to still be in place three years later.

Summary of Factors Associated with Implementation and Continuation
The cross-case analysis included coding the data on key variables, searching for patterns, and
developing and testing hypotheses. One analysis examined the institutional dynamics related to the quality
of implementation and the chances for project continuation. Figure 5 displays the relationships among the variables that affected the chances for continuation. Four variables directly exerted influence: the extent
of emphasis on instruction, the extent of mastery of the innovative features (which is the most critical part
of implementation success), the breadth of faculty support, and the extent of support from senior
administrators. Understanding the relationships involves tracing the factors that affected the second, third,
and fourth of these variables.

Factors Associated with Impact
The previous sections dealt with factors that seem to affect success of implementation and the
chances for continuation. The remaining section of this chapter examines factors that seem associated with
different kinds of impact: on faculty, on students, and on departments. Each of the reported relationships
came from analyzing data collected from the survey of grant recipients. Because there were important
differences among mathematics, science, and engineering education, the factors affecting each kind of
outcome were analyzed separately for each of these discipline categories. All relationships reported below
were statistically significant at the .001 level. See the technical report on the survey findings for more
details on these measures and analyses.
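To make this analytic approach concrete, the sketch below (in Python) shows one way such discipline-by-discipline correlations could be computed and screened at the .001 level. It is an illustration only, not the study's actual analysis; the file name and column names are hypothetical.

import pandas as pd
from scipy.stats import pearsonr

# Hypothetical survey extract: one row per project, with the PI's discipline,
# ratings of innovation characteristics, and an outcome measure.
survey = pd.read_csv("pi_survey.csv")  # hypothetical file

characteristics = ["teams_centrality", "misconceptions_centrality"]  # hypothetical columns
outcome = "faculty_conception_change"                                # hypothetical column
ALPHA = 0.001  # significance level used for reporting relationships

for discipline, group in survey.groupby("discipline"):
    for ch in characteristics:
        pair = group[[ch, outcome]].dropna()
        r, p = pearsonr(pair[ch], pair[outcome])
        if p < ALPHA:
            print(f"{discipline}: {ch} vs. {outcome}: r = {r:.2f} (p < .001)")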



Figure 5: Factors Associated with Implementation and Continuation
[Diagram relating the following factors to the chances of continuation: pre-implementation training; emphasis on instruction; favorable timing; support from senior administrators; monitoring of student achievement and attitudes; mastery of innovation features; ongoing opportunities to share problems and successes; breadth of faculty support; and status of actors and supporters.]


Impact on Faculty
Analysis of the survey of PIs revealed that certain innovation characteristics were associated with
the extent to which mathematics faculty changed their conceptions of teaching and learning, and that other
characteristics were associated with the extent to which science faculty changed their conceptions. (There
were no innovation characteristics consistently associated with changes in the conceptions of engineering
faculty.) On the basis of data collected from grant recipients, mathematics faculty changed the way they
thought about teaching and learning when the innovation involved increasing the centrality or importance
of:

having students work in teams (r=.46);
teaching recent findings and theories (r=.37); and
eliciting and addressing student misconceptions (r=.34).

Science faculty changed the way they thought about teaching and learning when the innovation
involved increasing the centrality of:

eliciting and addressing student misconceptions (r=.42);
having students work in teams (r=.34);
using nontraditional assessment methods (r=.30); and
achieving high integration among course components — e.g., between lectures and laboratories (r=.25).

Note that two innovation characteristics appear on both lists: increasing the centrality of having students work in teams and of eliciting and addressing student misconceptions. Collectively, these findings suggest that implementing new teaching techniques produces changes in faculty as well as gains for students, as described below. This bodes well for the possibility of changing faculty culture by supporting the implementation of cutting-edge educational innovations in higher education.

Impact on Students
Two steps were taken to identify factors associated with student gains. First, a scale was created
containing the student outcome items listed in Table 11 (in Chapter Four) that met two criteria:

they correlated significantly with at least some of the changes in teaching on which data were collected, and
they were not self-evident outcomes of the changes in teaching.

The seven outcomes that met these two criteria for engineering and science projects are listed in Figure 6. The scale for student gains in mathematics projects contains the first six outcomes listed in Figure 6 (it does not include the outcome dealing with laboratory or field equipment, since the only equipment that most mathematics projects used was graphing calculators and computers, which are already covered by another outcome). Both scales gave equal weight to each outcome.
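As an illustration of how such an equal-weighted scale could be scored, here is a minimal Python sketch. It is not the study's actual procedure; the item column names are hypothetical stand-ins for the Figure 6 outcomes, assumed to be rated on a common numeric scale.

import pandas as pd

# Hypothetical columns, one per outcome in Figure 6, each on the same numeric scale.
engineering_science_items = [
    "scientific_approach", "applying_concepts", "methods_equipment",
    "interest_science", "interest_math", "interest_computing", "interest_lab_equipment",
]
mathematics_items = engineering_science_items[:-1]  # drops the lab/field equipment item

def gains_scale(responses: pd.DataFrame, items: list) -> pd.Series:
    # Equal weights: the scale score is simply the mean of the item ratings.
    return responses[items].mean(axis=1)

survey = pd.read_csv("pi_survey.csv")  # hypothetical file
survey["gains_scale"] = gains_scale(survey, mathematics_items)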

Figure 6: Outcomes in the Student Gains Scale for Engineering and Science Projects
Increased understanding of the scientific approach to problems
Increased competence in applying concepts, principles, or theories
Increased competence in using methods or equipment
Increased interest in or comfort with the science taught
Increased interest in or comfort with the mathematics involved
Increased interest in or comfort with the computer skills needed
Increased interest in or comfort with the lab or field equipment

Second, in each discipline, a search was made for the combination of changes in teaching that most
consistently was associated with increases in student gains scale scores. The changes-in-teaching scales
with the strongest associations for each discipline are shown in Figure 7. Across all three disciplines,
increases in student gains scale scores were associated with increasing the centrality of two changes in
teaching:

using software (other than word processing), and
having students frame researchable questions and devise methods, equipment, or procedures.

The following two changes in teaching increased the strength of the prediction for two of the three
disciplines:

teaching recent findings, theories, or methods (mathematics and engineering); and
using a variety of methods to assess outcomes (mathematics and science).
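The report does not spell out the search procedure. As one plausible illustration of the kind of incremental check involved, the Python sketch below compares the correlation of a changes-in-teaching composite with the student gains scale before and after adding a candidate change; the column names are hypothetical, and the gains scale is assumed to have been scored as in the earlier sketch.

import pandas as pd
from scipy.stats import pearsonr

survey = pd.read_csv("pi_survey.csv")  # hypothetical file; "gains_scale" scored as above

# A candidate changes-in-teaching composite (hypothetical columns), scored as an equal-weight mean.
base_items = ["software_use", "frame_questions"]
candidate = "integration_components"

def scale_vs_gains(items):
    # Correlation between the equal-weight composite of the listed items and the gains scale.
    sub = survey[items + ["gains_scale"]].dropna()
    r, p = pearsonr(sub[items].mean(axis=1), sub["gains_scale"])
    return r

print("base scale:       r =", round(scale_vs_gains(base_items), 2))
print("base + candidate: r =", round(scale_vs_gains(base_items + [candidate]), 2))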

Other changes in teaching besides those shown in Figure 7 also correlated significantly with these student gains scales; however, adding any of them to the changes-in-teaching scales shown in Figure 7 did not increase the strength of the relationship with the corresponding student gains scale. For example, increasing the integration among course components is strongly related to increasing the proportion of students who achieved the outcomes in the student gains scale for science projects; yet if a project was already having students serve in research apprenticeships and frame researchable questions,
increasing the integration among course components would not be expected to further increase the proportion of students achieving those outcomes.

Figure 7: Changes in Teaching Associated with Student Gains, by Discipline

Outcomes for Mathematics Students: increased understanding of the scientific approach to problems; increased competence in applying concepts, principles, or theories; increased competence in using methods or equipment; increased interest in or comfort with the science taught; increased interest in or comfort with the mathematics involved; and increased interest in or comfort with the computer skills needed.

Outcomes for Engineering and Science Students: the six outcomes above, plus increased interest in or comfort with the lab or field equipment.

Changes in Teaching for Mathematics Students: teaching recent findings, theories, or methods; having students use software (other than word processing); having students work in groups; having students frame researchable questions/devise methods; and using a variety of methods to assess student learning and attitudes.

Changes in Teaching for Science Students: having students frame researchable questions/devise methods; having students serve in research apprenticeships; having students use software (other than word processing); and using a variety of methods to assess student learning and attitudes.

Changes in Teaching for Engineering Students: teaching recent findings, theories, or methods; achieving high integration among course components; having students use software (other than word processing); and having students frame researchable questions/devise methods.

Impact on Departments
Using survey data, a single composite variable was constructed to measure the impact of the project on the department. The scale consisted of three components (a scoring sketch follows the list):

whether the department made a formal decision to use project activities or materials on a long-term basis;
whether the department's financial commitment to undergraduate education had changed because of the project; and
the proportion of colleagues who did not receive project funds who supported continuing the project (perhaps with some modifications) or extending its basic principles to other sections or courses.
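Here is a minimal Python sketch of how a composite of this kind could be scored. It is a sketch under stated assumptions rather than the study's actual scoring rule: the column names are hypothetical, each component is assumed to be coded on a 0-to-1 range, and the three components are given equal weight (the report does not specify the weighting).

import pandas as pd

survey = pd.read_csv("pi_survey.csv")  # hypothetical file

# Each component coded (or rescaled) to lie between 0 and 1.
formal_decision = survey["formal_longterm_decision"].astype(float)           # 0 = no, 1 = yes
financial_change = survey["financial_commitment_changed"].astype(float)      # 0 = no, 1 = yes
colleague_support = survey["supportive_colleague_proportion"].astype(float)  # already a proportion

# Equal-weight composite; an assumption, since the report does not give weights.
survey["departmental_impact"] = (formal_decision + financial_change + colleague_support) / 3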

The following factors were found to be related to this measure of departmental impact:
the proportion of faculty who teach undergraduates who were involved in discussions about the project since
the project’s first year;

student gains scores; and
a person-intensiveness factor that takes into account the number of professional staff supported by CCD funds to
work on the project, the amount of professional person time supported by CCD funds, and the number
of undergraduates per year who take project courses.

These findings underline the importance of communication as a factor associated with success.
They also illustrate the maxim that one type of success breeds another; in this case, perception of student
gains seems to give enough credence to the innovation to engender department support. The factors
associated with departmental impact are summarized in Figure 8.

Transfer
When effective innovations were successfully implemented on other campuses, the payoff of the initial investment was multiplied. Yet not every activity or product merits transfer. Curriculum development activities, like other forms of research, are experiments. Some experiments fail or succeed only marginally and should not be replicated. Thus selecting a project for adoption is not as straightforward as it
may seem, as illustrated by one institution funded to implement a set of courses developed elsewhere:

The key feature of the engineering curriculum that had attracted the adopting institution was its emphasis
on student design work. One of the early discoveries of project personnel was that this design emphasis
did not actually exist. Hence, the project’s emphasis became development rather than adaptation and
implementation; the participating institutions never were able to use the previously existing materials.



Another institution attempted to transfer the curriculum just mentioned. This institution also found that the original materials did not meet its needs and similarly launched an effort that was more development than implementation. But a different problem arose as well.

After gaining department authorization to import the new curriculum based upon its alternative content,
the participating faculty found that the materials did not incorporate newer instructional methods. Project
faculty infused the curriculum with such pedagogical strategies as active learning, cooperative learning,
and alternative methods of assessing student learning, but did not apprise their colleagues. A furor
erupted when non-participating faculty discovered the instructional methods of the new courses
were dramatically different from traditional ones, especially when students who had completed the new
courses complained about the traditional methods used in their subsequent courses.

In order to identify actions that increased the chances for successful outcomes of efforts (a) to adopt innovations developed elsewhere, or (b) to scale up smoothly running pilot versions, the study team systematically compared sites where innovations were scaled up or adopted successfully with those that encountered problems such as those just described.

Figure 8: Factors Associated with Departmental Impact
[Diagram relating the following factors to departmental impact: discussions with colleagues after Year 1 about implications; people intensiveness; student gains; skilled implementation of changes in instruction; and assessment method variety.]

Figure 9 contains a list of steps whose presence consistently promoted success. The factors and steps displayed in Figures 5, 8, and 9, all based on data collected during this study, are consistent with those found by Kozma (see the sidebar at the end of this chapter).

Figure 9: Actions Consistently Associated with Successful Scale-Up or Transfer
Setting forth a compelling rationale for making changes
Clearly distinguishing between innovation components whose use is essential (versus optional) in order to obtain
the desired outcomes

Notifying potential adopters explicitly:
that the innovation contains changes in pedagogy that require a certain level of proficiency in order to
produce the desired results; and

that certain components might need to be adapted locally
Giving potential adopters guidance and examples regarding how common differences in institutional
contexts might be accommodated

Addressing the concerns of uncommitted or skeptical faculty by establishing conditions for a fair trial, e.g.:
collecting “before” data prior to initial implementation,
protecting the innovation from premature summative evaluation, and
undergoing two or three formative evaluation cycles prior to making decisions about the innovation’s fate

Chapter Summary
Factors found to be associated with successful implementation and good prospects for continuation, and
with other more specific outcomes (such as student gains, faculty changes, and department impact) were
discussed under several headings: contextual factors, process factors, management and logistics, and
innovation characteristics. Some of the most important factors are “fit” within the local context, the
position of key players, a focus on instructional strategies, training for faculty and TAs, ongoing
evaluation (including building in feedback and revision cycles), and regular communication. The next
chapter identifies the findings throughout the report that are likely to be of greatest interest to policy
makers, program designers and implementors, and college and university faculty and administrators. It
also suggests ways that some of these findings could be used.



Some Factors Affecting Diffusion and Adoption of Instructional Innovations in Higher Education
In several studies, Kozma (1980, 1985) examined two national funding programs that supported
course improvement; one was the Implementation of Materials and Programs that Affect College Teaching
(IMPACT) program of the Exxon Education Foundation and the other was the Local Course Improvement
(LOCI) program of the National Science Foundation. While both programs supported course-based instructional
innovation, the intent of the IMPACT program was to disseminate innovations previously identified as
successful, while the intent of the LOCI program was to encourage the development of local innovations.
The directors of 75 of these projects (42 IMPACT and 33 LOCI) were interviewed three years after
funding (Kozma, 1980) to assess the extent to which these projects were continued and disseminated to
others (either inside, or external to, the funded institution). After three years, LOCI directors were more likely than IMPACT directors to judge their projects a success, to report that the project was considered a success by their colleagues, and to have disseminated their project to others.
Five years after the projects were funded, a follow-up study assessed the long-term effect of these
programs on continuation and dissemination (Kozma, 1985). The study involved visits to 26 of the
projects (15 IMPACT, 11 LOCI). Here are three findings related to the adoption process:

The process of adoption. Innovation is typically evolutionary, not revolutionary. The new things that
instructors try in their courses are similar to previous approaches. A clear line of ancestry can usually
be traced to instructors’ early experiences with similar innovations that are broadened, extended, or
modified with subsequent generations. Even when innovations were imported, they were grafted onto
stocks of similar previous practice.

Resources required. Innovation frequently requires time and other resources, such as technical
assistance and equipment. Released time is frequently critical to the planning, development, and
implementation processes. Lack of such resources was the reason given most often by colleagues for
not adopting target innovations. When innovations were disseminated to others, the new adopters
frequently required the same investment of time and money as the original project.

Social mode of adoption. There were two modes of adoption — individual and collaborative — and
these interacted with the likelihood that the project would be continued five years after funding. The
dominant mode was individual; in an overwhelming number of cases (17 out of 26) the decision to
adopt was made by the project director acting alone. Although the project directors’ motivations varied,
they all were egocentric: some saw the innovation in terms of a promotion, others were driven by a
highly personal commitment to a particular educational philosophy or approach. Socially, these project
directors did not have positions of organizational responsibility or extended interpersonal networks
within the system. These projects did not fare well after external funding lapsed. All were reduced in
scope or discontinued five years after funding; none were adopted by colleagues.

Collaborative efforts fared much better; five years after funding, all nine of the adoptions in this mode
were continued at the same or an increased scope. The adoption process for these projects involved
multiple people and the decisions were cooperatively made. The directors of these projects were
well-integrated into the social system and were frequently department chairs. The motivation for
change was some identified need of the organization or group. These projects were frequently adopted
by others and institutionalized. Dissemination was usually the result of informal, one-to-one, personal
interactions rather than formal modes of communication such as workshops or publications.



Chapter Six: Conclusion
The previous chapters have demonstrated that the CCD program is making a significant
contribution to improving undergraduate education in mathematics, science, and engineering. As a result
of the National Science Foundation’s investment, a strong effort is being made to change the infrastructure
of undergraduate education in these areas, and the national impact of these efforts is noticeable. This
impact is beginning to be felt in science, is further along in engineering, and, as indicated in the sidebar in Chapter Four, is already major in the way that calculus is taught. This chapter
reviews and explores the meaning of study findings by addressing the following questions:

How effectively are CCD’s objectives being achieved?
What was learned about factors that affect project effectiveness?
What modifications might make CCD more effective?

How Effectively are CCD’s Objectives Being Achieved?
Overall, the CCD projects are successful in achieving the program’s ultimate objective of increasing student
understanding of, interest in, and comfort with mathematics, science, and engineering. At project sites,
there is also an increased value placed on undergraduate education by participating faculty, and to some
degree by other departmental faculty. This success occurs despite the difficulty in affecting attitudes
toward undergraduate education in many departments. These environments hold few incentives for faculty
to engage in educational innovation except for the personal intrinsic reward of improving education for
students.
According to the survey results, NSF plays a critical role in the development and implementation of
most of the innovations studied; 81 percent of the PIs estimated that without NSF funds, they would not
have been able to implement more than half of their projects’ agendas. Furthermore, the innovations
(courses, curricula, materials) are addressing perceived institutional needs — the same needs identified
during the 1993-94 interviews with national experts.

Figure 10 provides a graphic summary of salient features that contribute towards achieving the
CCD objectives. It also provides the framework for the discussion that follows.

Student Outcomes
Both survey and case study evidence strongly suggest that greater percentages of students are achieving important gains that match the intent of the CCD program and the goals expressed by the national experts interviewed in 1993 and 1994.

Figure 10: An Integrated View of Project and Site Dynamics
[Diagram relating the following elements: pre-implementation training and ongoing opportunities to discuss problems and successes; ongoing monitoring of student responses, and use of resulting data to modify instructional behavior and materials; innovations implemented that address perceived institutional needs by, among other things, actively engaging students, tightening linkages among course components, and focusing on recent problems, methods, findings, and theories; departmental discussions about the need for and the nature of the innovation, as well as its implications for the department; resources devoted to project management; at least one widely respected colleague is a project participant; faculty mastery of the innovation's instructional competencies; more students achieving valued gains; and departmental outcomes (increased value placed on undergraduate education; department-wide understanding of the need for pedagogical changes that require mastering new teaching competencies; more faculty involved in efforts to improve undergraduate education; and enhanced likelihood that the innovation will continue).]

Responding to individual survey items, between 82 and 90 percent
of the PIs reported that compared to students taking analogous traditional courses, more students taking
CCD project courses increased their:

understanding of the scientific approach to problems;
competence in applying concepts, principles, or theories;
competence using methods or equipment;
interest in or comfort with the science taught; and
interest in or comfort with the computer skills needed.

Furthermore, between 35 and 44 percent of these respondents reported that "many more" students achieved
these five outcomes.

Institutional Outcomes
The CCD program also aims to encourage changes in faculty, departments, and institutional
culture. DUE staff understand that organizational changes are required to sustain the new instructional
approaches so that future cohorts of students will also benefit.
The biggest changes were in the practices, skills, and attitudes of participating faculty (described below). Non-project faculty and the departments overall changed somewhat less. In several institutions, faculty not listed in the proposal were recruited to participate either during the first cycle of project work or in subsequent rounds. Furthermore, some faculty who never used project activities or materials did, as a result of the project, begin to think differently about, and sometimes spend more time on, teaching. But many of the non-participating faculty did not appear aware of the need for change, that is, of the problems with traditional curriculum and instruction that had prompted project faculty to seek an alternative approach. When meetings or workshops were held to explain the rationale for the project to departmental colleagues, these faculty often did not attend.
The evidence suggests that CCD may have a long-term impact on the number of faculty striving to
improve undergraduate education. In particular, several projects were training and providing experience to
TAs, postdoctoral fellows, and visiting faculty. Eventually, some of these individuals may plant the seeds
of educational reform at other institutions.
One indicator of department change is the prospect for project continuation. According to the PIs
who responded to the survey, two-thirds of their institutions had made a formal decision to offer project
activities or use materials on a long-term basis. As has been true for other issues, PI self-reports were
consistent with site visit data. In this case, based on their visits, evaluation team members predict that:



in 90 percent of the projects, some aspect of the project is likely to still be in place three years following the
visit; and

in about half of the projects, the prospects are excellent for substantial portions of the project to still be in place
three years following the visit.

The remaining objective of CCD is to change the academic culture, especially the relative value
placed on undergraduate education — both on teaching and on scholarship related to teaching.
Although the cultures of academic institutions are highly resistant to change, participants in CCD were
mavericks in this regard, often taking risks in environments that did not reward their efforts. This was
sometimes a problem for non-tenured faculty. However, at some institutions in the case study sample,
administrators placed a high premium on undergraduate education. Some deans, especially in public
research universities whose state legislature demanded greater accountability, were highly committed to
the changes, sometimes more so than most of the faculty. At other sites, deans were less committed than
the faculty, especially at comprehensive and research universities — particularly colleges of engineering —
some of whose budgets were based on the expectation that faculty would be obtaining grants and contracts
to support a substantial part of their salaries.

In one project, administrators rewarded faculty for innovations in undergraduate education, not
just for excellence in teaching. In another, an administrator wrote a letter to a new project director’s file
asserting that, because of his responsibilities, research related to teaching and learning would be valued
equally with laboratory research. Yet these instances were worthy of comment because they were the
exception. Nevertheless, one-third of the survey respondents reported that commitment to undergraduate
education had increased in their departments, at least in part due to the CCD project. This suggests that
there has been some positive impact of CCD on academic culture.

What was Learned about Factors that Affect Project Effectiveness?
Awareness of the factors that affect project effectiveness may provide lessons for project design. Among
the major factors identified in the study is the importance of implementing all of the core tasks of the CCD
program: developing or adopting sound innovations, increasing faculty knowledge about teaching and
learning, and increasing the instructional competence of the faculty.

The Nature of the Innovations
Content specialists judged CCD innovations to be substantively and theoretically sound, reflecting
recent problems, methods, and findings in the field. The new courses typically provide tightened linkages



among course components. They also contain pedagogical innovations that actively engage students in the learning process. In a minority of projects, content specialists felt that project faculty did not adopt some of the pedagogical changes advocated by the reform community. In their opinion, these project faculty were moving in the right direction but had not progressed far enough.

Faculty Mastery of Instructional Features
An important focus of the CCD program is on enhancing the instructional competence of faculty by
changing their knowledge and attitudes related to the teaching and learning process. One of the most
dramatic findings of this study is that 96 percent of the grant recipients reported that some project faculty
had changed their conceptions of teaching and learning. Additional evidence from survey data and case
study interviews indicated, as mentioned in Chapter Two, that the nature of these changes was consistent
with expert opinion regarding best practices.
The case study evidence shows that the extent of faculty proficiency in using the innovation’s new
instructional features is pivotal to project success and project continuation. Exemplary implementors of
the new instructional features often played lead roles in the innovation’s development or were involved in
intensive workshops or training.
Inadequate implementation of the innovation’s instructional features was a significant weakness in
some projects that were visited. Some of the faculty who had not been involved in the development work
appeared to equate implementing the innovation solely with using new materials. In one institution that
was adopting a course developed elsewhere, the single faculty member who was teaching the project
course had failed to master the innovation’s instructional features adequately; his project students did less
well than non-project students. In this institution and in a small number of others with low faculty mastery
of the innovation’s instructional features, opposition to the project from faculty colleagues was substantial,
and study team members predicted that the project would not continue for another three years.

Support for Faculty Mastery of the Innovation
Several important features were found to enhance and support faculty mastery of the innovation’s
instructional features. These include:

Faculty training and ongoing discussions. The data suggest that the kinds of changes in teaching that lead to
improved student outcomes require both pre-implementation training for participating faculty and TAs,
and ongoing opportunities to share successes and engage in collaborative problem solving. Exemplary



approaches to training include co-teaching of the course the first time with an experienced implementor, bringing
in outside experts for extended periods, and ongoing workshops.

Formative evaluation opportunities. Some of the more successful evaluation activities affect how the
project evolves by monitoring, on an ongoing basis, student attitudes and learning and using the resulting data
to modify instructional behavior and materials. However, evaluations tend to be limited in scope, and it is rare
for summative evaluations to be conducted that could serve as an effective stimulus for dissemination or
for convincing the institutional power structure of the effectiveness of the innovation.

Involvement of Respected Colleagues
Leadership and support for the innovation are often critical to project success. The status and
respect of the key participants, including principal investigators, legitimized the changes and helped to
defuse resistance. Enlisting the support of well-respected colleagues in later stages of implementation also
helps ensure continuation of the new curricula, courses, and materials. This factor is particularly important for achieving the CCD objective of increasing the value placed on undergraduate education and for helping to change the academic culture.

What Modifications Might Make CCD More Effective?
Study findings have potential implications for the Division of Undergraduate Education as it considers the
future of the CCD program. To realize this potential, the findings must be connected to actions or decisions
that DUE staff can make. To build such a connection, the study team addressed three questions:

Are there implications for the mix of grants awarded?
Are there implications for projects that seek to adopt or scale up the implementation of existing innovations?
and

Are there implications for the kind of orientation that is provided for proposal reviewers?

The Mix of Grants Awarded
When CCD was new, development was needed. Since most projects draw upon the same small
set of changes in teaching and since there are now several solid innovations dealing with most of these
changes in each discipline, it seems wise to build on what has already been developed, and to devote
greater attention to promoting the adoption of innovations. This would also increase CCD’s national
impact. Below are specific suggestions for changing the mix of grant awards.

An increase in the proportion of grants promoting adoption. On the basis of the findings, at least
six kinds of grants would help to promote the adoption of existing innovations at other sites:



grants to develop databases so faculty could retrieve information on all the projects in their field featuring
a particular change in teaching;

grants for awareness conferences that present an array of projects in a given discipline for potential adopters to
explore (CCD already has funded such grants; the two that were in the case study sample led to some successful
adoptions);

grants to provide training and on-site consultation in connection with complex projects with a large number of
potential adopters (If such grants were coordinated with grants to provide awareness conferences, then faculty
wanting to adopt an innovation that they learned about during an awareness conference could seek help from
faculty who were both skilled in implementing the project and prepared to help);

grants to provide demonstrations of complex innovations in action, which could fund former PIs so that visitors
could make arrangements to observe and talk with someone who could help them interpret what they saw;

grants to provide support networks, which could range from smaller versions of the 1994 national
dissemination conference — perhaps limited to a single discipline — to regional conferences that include
administrators, to financially supporting newsletters; and

grants for institutions to adopt complex innovations developed elsewhere or to scale up modest
implementations of complex innovations — i.e., to increase the number of faculty involved well beyond those
who participated voluntarily in their implementation.

Adding new kinds of grants. The evaluation team found two weaknesses in the infrastructure for
dissemination that could be somewhat remedied by two types of grants:

Grants for summative evaluation. Credible evaluation could be required for renewal of grants at developer sites. Similarly, unless developers present reasonably compelling evidence that their innovations are effective, it is hard to justify spending money to promote their adoption elsewhere. Yet reading the abstracts and conducting the case studies led the evaluation team to conclude that few projects had mounted careful summative evaluations, partly because they did not have enough funds in their budgets, did not possess the necessary expertise, or did not attend to evaluation matters until it was too late to collect sound baseline data. The purpose of evaluation grants would be to fund careful summative evaluations of well-established and apparently promising innovations.

Grants to provide technical assistance. It appeared to the case study team that many grant recipients would
have done even better than they did if they had been able to turn to specialists for technical assistance. Five
kinds of technical assistance needs were identified. Many grant recipients would probably do a more effective
job if they received help in:

• understanding the process of developing or adapting curricular and instructional innovations;
• making innovations more transportable;
• designing and conducting sound training programs;
• designing and conducting sound evaluations — both formative and summative; and
• thinking through the intricate and confusing social and political aspects of planning to adopt an
innovation that will affect a sizable number of faculty.



Guidelines for Scale-Up and Adoption Proposals
Applications that focus on the use of existing course and curriculum innovations pose special
concerns. Although the nature of the innovation itself is important, other factors also need to be
considered, such as the two highlighted below.

The context for change at the proposed host institution. Characteristics of the context proved to be
an important factor associated with project success. Yet current proposals include very little information about the context for the proposed change. The following information would help reviewers assess the proposer's
chances for success:

the perceived problem(s) that the innovation is supposed to help solve;
evidence that departmental colleagues would be receptive to experimenting with the change;
evidence that the administrator perceives the proposed change to be consistent with his or her understanding of
the institution’s mission and priorities;

evidence that at least one faculty member involved in some key aspect of the proposed activity is highly
respected by many department colleagues; and

evidence that the attitudes and concerns of relevant colleagues outside the department have been solicited and
taken into account.

Perhaps more important than helping reviewers reach wise decisions is the effect that such
questions may have on the proposal planning and writing process. In many cases, developing good answers to these questions will improve the plan for the project, and therefore the chances that any
subsequent implementation will be a success. (Note that 66 percent of the applicants whose proposals
were declined reported that they implemented some aspect of what they proposed. Accordingly, improving
the planning process has the potential for improving the results on campuses that do not receive awards.)

The plans for maximizing the chances that implementation will be successful. The site visit data
suggest that applicants need to do more thinking and planning about such issues as how they will:

provide training and ongoing support;
conduct formative evaluation;
attend to the project’s management and coordination needs; and
communicate with relevant colleagues and administrators within and beyond the department.

Applicants should be encouraged to give more than pro forma attention in the proposal to each of
these elements.



Orienting Proposal Reviewers
From observing review panel discussions of proposals, evaluation team members noted that, quite
appropriately, a great deal of time was spent on evaluating the nature of the innovation itself. Yet very
little time was spent on the plans for making the project successful on dimensions other than content
and pedagogical soundness. Here are suggestions for reviewers of development and adoption grants:

For development grants: reviewers should ensure that there are adequate plans for formative evaluation and
revision. Development projects can be more effective when they gather and use formative evaluation data.
However, for grant recipients to make the adjustments and revisions required, the project duration must be long
enough to permit more than one or two cycles, both for the innovation itself and for any materials or products that
are developed. Furthermore, the proposal should demonstrate that the applicants know what kind of information
they will collect to monitor how the project is going, and how they will collect, analyze, and interpret it.

For adoption grants: reviewers should understand the reasons for any of the guidelines suggested in the
previous section — dealing with adoption grants — that DUE elects to adopt. They should review the proposal
to assess whether the effort has been thought through well enough to have a good chance to succeed.

Summary and Conclusions
The CCD program is largely successful in accomplishing its major goals related to students and
participating faculty. In more than half the institutions that received awards, at least some part of the
project will continue. Many faculty now understand that students must be more actively engaged with the
material and with other individuals in order to develop deep understanding. Although some non-project
faculty have been striving to do something about their emerging conceptions of teaching and learning,
others are attending to their own research, partly because, in most institutions, the faculty reward system
has not significantly changed. Thus while the ultimate goal of increasing student understanding and
comfort with mathematics and science is being achieved by most projects, the much more difficult goal of
changing institutions is, with a few notable exceptions, being achieved only modestly.
Yet in recent years important changes have taken place in the context for reform in higher
education. For example, there have been increases in:

societal pressure for high quality undergraduate education;
the availability of sound activities and supporting materials; and
the proportion of faculty involved — and receptive to becoming involved — in efforts to improve
undergraduate education.

Along with these changes has come the gradual development of an infrastructure supporting dissemination.



Through CCD and other programs, NSF personnel — DUE staff in particular — have played a
major role in bringing about the last three changes listed. Collectively, these changes make some of the
suggestions included in this chapter more feasible than they would have been when CCD began. NSF has
already made a significant national impact on undergraduate education, and is advantageously positioned
to deepen and extend that impact.


