The Cultural Context of Educational Evaluation

The Role of Minority Evaluation Professionals

June 1 - 2, 2000
Arlington Hilton and Towers Hotel
Arlington, Virginia


SESSION THREE: Evaluator Training Issues: Barriers, Recruitment and Outreach Strategies and Training Mechanisms

Session Chair
Floraline I. Stevens
Floraline Stevens and Associates

Presenters
Henry T. Frierson
Professor
School of Education
University of North Carolina, Chapel Hill

Sandra Fox
Retired and Adjunct Professor
University of New Mexico

Discussant
Costello L. Brown
Division Director
Educational System Reform
EHR/NSF

Guiding Questions

  • How does the academic environment influence career choice and support or deter the entry and persistence of underrepresented minorities in the evaluation field? What are other barriers?
  • Does this population have specific training needs? If so, how do we meet them? The discussion should include three levels of training: preparation to enter the field; professional development for current evaluators moving into math and science program evaluation; and expanding the credentials of practicing evaluators who lack advanced degrees.

Discussion Highlights
Floraline I. Stevens

Prior to the presentations, the session chair reviewed salient points from Sessions One and Two that might lead to the final recommendations made by the group as a whole. Next, the Session Three presenters brought forward their suggested strategies to address the training barriers to increasing the number of underrepresented minority evaluators. Some of the strategies had emerged earlier in Sessions One and Two; however, the Session Three presentations further emphasized the importance of the issues and provided some new information.

Floraline Stevens, the chair of the session and a former Director of Research and Evaluation for the Los Angeles Unified School District, presented a summary of interviews of current underrepresented minority evaluators. Those interviewed emphasized that their presence was required on evaluation teams to make sure that all viewpoints were at least examined before opinions were formed and findings were determined.

Barriers to Becoming an Evaluator. The minority evaluators reported that there was a lack of professional development or classes provided by school districts to enhance the skills of non-traditional evaluators who may not have formal university education and training in evaluation. The cost of university classes and a lack of time to attend the classes were deterrents. To remedy some of these barriers, fellowships should be offered to encourage minorities to become evaluators and to ensure that they will get jobs once their training is completed.

Recruitment and Outreach. The minority evaluators reported that the evaluation profession is generally unknown because of lack of communication about it and lack of outreach activities by colleges and universities. Presently, evaluation is not touted to mainstream and minority university students as a desirable profession. The minority evaluators indicated that better career counseling at secondary and university levels and more outreach activities on the part of university evaluation departments were needed to give evaluation a higher profile as a profession.

Specific Training Mechanisms Needed. Skilled and talented minority evaluators can be developed with the proper framework for education and training. University course work should include statistics, research methods, evaluation theories, tests, measurement, etc. Practical evaluation training should include some knowledge and experience in teaching and learning as well as conducting evaluations under the supervision of a trainer/mentor.

The second presenter, Henry T. Frierson of the University of North Carolina at Chapel Hill, felt that there had not been a concerted effort to remedy the small number of underrepresented minority evaluators. He indicated that the January NSF Workshop, where concern was voiced about the lack of underrepresented minorities in program evaluation, served as a beginning.

Although there is a need to educate individuals in program evaluation, short-term workshops are insufficient for proper training and production of more evaluators. The focus must be on in-depth preparation of evaluators through formal education and training. NSF can make this possible by supporting participants in graduate programs and seriously seeking to enroll minority students. When funds are available and minority students are targeted, most university graduate programs will be proactive in their recruitment activities to identify and enroll prospective minority graduate students.

Barriers. There is a general lack of awareness of what evaluation is and the important role it can play in improving promising programs. To address this problem, information should be disseminated about the field of evaluation, and underrepresented minority groups should be targeted for recruitment.

Recruitment. Increasing minority students' awareness of program evaluation should be one of the major efforts undertaken. Moreover, there must be opportunities to participate in evaluation projects. Because the desire to give back or contribute to the community still runs strong in many minority individuals, one would predict that their interest in program evaluation would expand tremendously once the field is seen as a way of improving programs. Evaluators without advanced degrees, or non-traditional evaluators, would benefit professionally from acquiring advanced degrees. They could serve in part-time and full-time paid positions at such agencies as NSF, NIH and other federal departments while completing their formal graduate studies or as part of them.

Preparation. Quick-fix efforts in the form of short-term in-service training have not been successful. In producing well-prepared program evaluators, NSF should follow the model it has used to increase the production of PhDs from underrepresented minorities in science, engineering, mathematics and technology. Training grants can be awarded to doctoral programs that can train and prepare evaluators, and such programs should ensure and demonstrate that they will enroll a sufficient number or proportion of minority students.

Sandra Fox from the University of New Mexico was the third presenter. She described her experience as an evaluator for the Bureau of Indian Affairs, Office of Indian Education Programs. The evaluation process for the 185 schools in the program was based upon the Effective Schools model and its 10 correlates. The correlates were used to evaluate and monitor a school's efforts. One-fourth of the schools were visited annually to determine correlate implementation and to acquire information on attendance, achievement, and other specified outcomes. Monitoring and evaluation teams of four to five persons used a correlate implementation checklist so that the evaluation system was uniform. Prior to the team visits, schools used the correlates to conduct self-evaluations and provided this information to the team. This evaluation process lasted from 1990 to 1995. Major results of the process included improved academic achievement, increased attendance, and increased enrollment in the system. Another outcome was a group of Native Americans trained in the Effective Schools evaluation process.

Responses to the Issues

In the discussion that followed the presentations, the issues of barriers (e.g., money and time), recruitment and outreach, and mechanisms for training were found to be interrelated. For example, when the discussion centered principally on recruitment and outreach, the location of the sites selected for training and funding possibilities were also elements thought to be important to this issue. All of the presenters and the group agreed that little has been done to make graduate students - particularly underrepresented minority graduate students - aware of program evaluation as a graduate specialization and as a profession. It was agreed that an evaluation training program funded by NSF should be established and should follow the model for increasing the number of underrepresented minority PhDs.

In-service projects have not been successful. In-service programs make their attendees more aware of program evaluation, but do not produce evaluators. When recruitment and outreach efforts are attached to the funding of student fellowships, most universities with the capability to provide quality education and evaluation training will make concerted efforts to recruit and conduct outreach. In addition, the group agreed that non-traditional evaluators who are underrepresented minorities can be the nucleus of a potential pool of individuals seeking advanced degrees in program evaluation. However, these evaluators would need financial assistance to increase their educational knowledge and skills at universities because they are usually older and have jobs. Fellowships should be offered at their current salary range to encourage this group to earn advanced degrees, while fellowships should be offered to traditional graduate students at the appropriate rate.

The group was particularly concerned that university training efforts be placed at sites where large pools of interested underrepresented minority students may be located. Dr. Sandra Fox, a Native American, further emphasized this point. She stated that if Native American graduate students were part of focused recruitment efforts, then at least one training site should be on a college or university campus for Native American students.

Recommendations from Session Three:

  • NSF should fund training where the greatest pool of potential evaluators can be located.
  • NSF should establish collaboratives between university training sites and consortiums of school districts with teachers and administrators interested in becoming evaluators. These persons have the essential knowledge of teaching and learning.
  • NSF should fund internships or fellowships at a level of current salaries, so financial barriers are eliminated.
  • Evaluation training must include traditional evaluation courses, such as theory and methodology; non-traditional courses, such as thinking systematically, multicultural education, and socio-cultural and linguistic cognition; practical experience conducting evaluations; and support from mentors.

Papers/Presentations

Reflections and Interviews: Information Collected about Training Minority Evaluators of Math and Science Projects
Floraline I. Stevens

Introduction

Equity and equal access are seen by educators and policymakers as bridging the gap between the haves and have-nots. Unfortunately, in many of our public schools the have-nots include a disproportionate number of underrepresented minority students. Questions of how best to address educational opportunity, and of inclusion versus exclusion among students, often bring about emotional debates. Federal, State, and local governments have responded with numerous educational programs to enhance educational opportunities for racial minorities, people with disabilities, and students of low socioeconomic status. In particular, NSF, through its science and technology education programs, responded to the continuing concern that the number of underrepresented minorities in science and technology careers needs to be increased. When evaluating these programs to determine their quality and impact, we must be careful to really hear the voices of underrepresented students, and not have these voices screened through channels that may not always reflect the students. We do this by having multi-ethnic evaluation teams. Through this process we help ensure that the voices and their nuances have a chance to be heard. Just as equity and equal access are necessary for the students, equity and equal access are necessary for the professionals, the minority evaluators, who interact with these students.

Therefore, when deliberating about whether there is a need to train more minority evaluators of math and science projects, the issue of the need for multiple voices in evaluation became real. In response, I felt that instead of synthesizing previous research findings, two types of qualitative information should be gathered because this might prove to be useful and insightful.

The information came, first, from the reflections of an experienced minority evaluator of math and science projects and, second, from interviews of minority evaluators currently working in a public school system who are at different stages of evaluator development.

Data Sources

The Reflections drew on my own experiences in becoming an evaluator. These experiences included receiving on-the-job and university training; networking with other evaluation professionals at meetings; and serving as the administrator responsible for securing in-service training for evaluators.

The interviews were conducted with minority staff members of the Los Angeles Unified School District's Program Evaluation and Research Branch (formerly Research and Evaluation Branch/Program Evaluation and Assessment). The intent was to gather their opinions of whether or not minority evaluators are needed to evaluate math and science projects in public schools. There was a professional dichotomy among the members of the evaluation staff: some were educators (teachers and school administrators) with little/some technical knowledge while others had much technical knowledge but no school experience.

The information from these two data-gathering efforts provides some insight into such issues as diversity in evaluation teams, academic and other barriers, recruitment and outreach, and the need for specific evaluation training.

Reflections of an Experienced Minority Evaluator of K-12 Math and Science Education Projects

In 1965, I became an "evaluator" when the ESEA Title I legislation was enacted and Federally funded education programs were being implemented in the Los Angeles Unified School District and other public school districts. One of the requirements for receiving the money from the Federal government was that the funded programs had to be evaluated. At that time, I was a demonstration training teacher in an elementary school. I had recently received my master's degree in educational psychology in counseling from UCLA.

There were no evaluation types in the school district. However, in response to the Federal guidelines, the school district recruited a cadre of persons who had training in counseling because of their coursework in tests and measurements, statistics and research design. After a review of our resumes and an interview, we were selected to be ESEA Title I evaluators. We knew nothing about evaluation theories and evaluation procedures. In the beginning, we did evaluations by rote. We were given a format for data collection and writing the reports. Some of the early reports were awful because there was little or no intellectual decision-making when planning and conducting the evaluations. After three to four years of on-the-job experience and reading a lot, most of us became adequate in our attempts to provide quality information about the ESEA Title I projects. In this group of 10 evaluators, four were African Americans.

What helped us to survive was our extensive knowledge of teaching and learning in classrooms and, for some of us, our familiarity with the predominantly minority students who were in schools where the ESEA Title I programs were operating. We knew how to gain access to people and information in schools, a critical element in evaluation. We knew when the information provided did or did not make sense.

From those early experiences, I later became an evaluator of my first science education project, an ESEA Title III K-12 project focused on ecology and biology. I used that evaluation project for my dissertation, in which I investigated the application of a specific evaluation theory. Since the early evaluation of the Ecology Program, I have evaluated many K-12 science education and science-focused professional development programs. Currently, I am the evaluator of two science education programs funded by NSF and NIH. Although I do not have a degree in science, mathematics or technology, I have taken many science courses. I found that extensive or "deep" science knowledge was not necessary to evaluate these K-12 science projects and programs effectively. What was most helpful was my graduate training at UCLA in research methods and evaluation and the on-the-job training.

Training Needed: University Coursework and Real Life Experiences

Statistics, research methods, evaluation theory, norm-referenced testing, criterion-referenced testing, learning theory, evaluation seminars, evaluation fieldwork and conducting research for the dissertation were required to receive a professional degree in research methods and evaluation at the doctoral level at UCLA. All of this information proved valuable to me when conducting project evaluations. In addition, the university training gave me the confidence to know that the evaluation procedures I selected had educational and theoretical foundations. On-the-job training was also invaluable: doing the actual evaluation work, receiving professional development on specific topics such as writing evaluation reports, and receiving critical remarks from evaluation experts all improved the professional quality of my day-to-day evaluation work. It is the combination of these experiences that continues to guide me in my work as an evaluator.

An Administrator Responsible for the Training of Evaluators

When I became the Director of the Research and Evaluation Branch, I developed an ongoing program of professional development to help the evaluation staff become better qualified. The technical staffers, while having knowledge of research design, statistics and sometimes tests and measurement, were not always good writers. Educators needed reports to be clear and not burdened with technical and statistical terminology. In addition, technical staff had to learn about teaching and to relate their evaluation activities to findings that were meaningful and substantive. On the other side of the issue were the data collectors, teachers and administrators with few technical skills. Classes were provided that taught evaluation and research design, statistics, and report writing. For practical evaluation information, these persons were paired with evaluators such as Drs. Marvin Alkin, Winston Doby and Joan Herman from UCLA, and other evaluators from universities and research firms. In particular, Marvin Alkin spent a lot of time teaching qualitative evaluation procedures through what could be described as "clinical learning."

Interview Responses from Five Underrepresented Minority Evaluators

Interviews were conducted with five minority staff members of the Los Angeles Unified School District's Program Evaluation and Research Branch. The five interviewees were each asked the following questions:

  • Have you ever evaluated a math, science or technology project?
  • Is there a need to train and increase the number of minority evaluators of math and science projects? Why?
  • Do you think an evaluator of math and science projects is different from other evaluators? Yes. No. Why?
  • Do/did you have to meet special needs/requirements to become an evaluator? Yes. No. What?
  • What professional development have you received to become a better or more efficient and effective evaluator?
  • What barriers or obstacles have you encountered in receiving training to become an evaluator?
  • What do you think are the requirements to receive an advanced college/university degree in evaluation?
  • What barriers or obstacles have you encountered in seeking an advanced college/university degree in evaluation?
  • What conditions would encourage you or other minority persons to seek more training to become an evaluator and/or seek an advanced degree in evaluation?

Minority Evaluators Feel There Is a Need for Training More Minority Evaluators

Four of the five minority evaluators (members of groups underrepresented in the field of evaluation) who were interviewed were educators; one was not. None of the five had evaluated a math, science or technology project. They agreed that there was a need for minority evaluators of these types of projects. Selected explanations were:

"There is an absolute need for minority evaluators for every project. This is a multiethnic, multilinguistic society. Diversity of an evaluation team is paramount to paint a true picture for program staff. Unfortunately, most evaluation is from the white perspective. This is not meant to be interpreted as a negative comment. It is just that various cultures/lifestyles perceive the same scenario through different eyes, differing experiences. Having a diverse team ensures that all viewpoints are at least examined before opinions are formed."

"Yes absolutely. All segments of our society must be included in the data gathering and analysis of information pertaining to math, science and technology. This equalizes the division of labor and promotes knowledge and information parity throughout every strata of our country's ethnic and social groups."

"There is a serious shortage of minority evaluators all over the country. In my opinion, there are even fewer in the specialized fields of math and science. Last fall, LAUSD Program Evaluation Branch advertised nation-wide for evaluators and the results were sad relative to minorities. Out of more than 100 respondents, less than five percent were minorities (Hispanic and African American). There was a fair amount of Asian representation. The African Americans who responded and who were interviewed had very limited academic preparation in evaluation. Sadly to say, only whites and Asians made the list. This experience suggests that there is a serious need for African American evaluators. The training is needed."

Evaluators of Math and Science Projects Need Not Be Different

None of the evaluators felt that an evaluator of math and science projects was different from other evaluators. They indicated that a trained evaluator can evaluate any program, although it helps to have knowledge of the subject area. Rather than being content-focused, evaluators tend to be oriented as quantitative or qualitative methodologists, or as those able to combine both perspectives successfully. Precision was believed to be most important in any type of project evaluation.

Need to Meet Special Needs and Requirements to Become an Evaluator

The four certificated evaluators started out as data collectors who had easy access to schools, and not as program evaluators who could design an evaluation and plan evaluation activities. However, they worked with and were trained by experienced evaluators, attended in-service classes within the branch, attended conferences/seminars and took other professional development courses and continuing education units to help them to become more proficient. They indicated that they still needed such coursework as SPSS data analysis, statistics, report writing and technology/computer classes, and to attend evaluation conferences.

Barriers and Obstacles Encountered to Receive Training to Become an Evaluator

The evaluators stated that professional development was provided in the 1980s and early 1990s, but that there is now no professional development support in the district to help them become better evaluators. They reported that they must seek training outside the district. They said:

"Currently, there are no professional development or classes offered by the school district. To receive additional training would require finding the resources."

"Unfortunately, I have found very little 'formal' training available to those not in a terminal degree program (besides statistics classes). Most of my training has been self-initiated and self-sought. Heretofore, I had been very fortunate to have been granted most requests to attend training conferences which have been available locally."

"Currently, one must find one's own way, one's own training, professional development, etc. There is precious little time to do this adequately."

"Not enough money to take other courses and attend other conferences."

Barriers to College/University Degree in Evaluation

Lack of finances and time to attend a college/university were the barriers identified by most of the evaluators. One evaluator will be attending UCLA in Fall 2000 to earn a doctorate but not in evaluation. This evaluator will be using subsidized and unsubsidized loans along with grants to finance her graduate degree program.

Conditions That Would Encourage Minority Persons to Seek More Training to Become an Evaluator or Seek an Advanced Degree in Evaluation

According to the evaluators, evaluation as a profession is unknown because of lack of communication about it and lack of outreach activities by colleges and universities. The evaluators indicated:

"Better counseling in secondary and university levels. More outreach activities on the part of university evaluation departments. Higher profile of evaluation as a professional pursuit."

"I think that this is a relatively non-traditional field for many minority persons. It is not one that is touted by many institutions (at least not to the mainstream population) and there is not a lot of publicity given to it. Many of us 'find' this field, much like myself, and became self-taught. I am continually seeking training because I like this type of work, but there needs to be much more said about this profession to those entry-level undergraduate and graduate students who are at the onset of their careers."

"More awareness. Most minorities are not aware of the opportunities in evaluation. Also, more fellowships should be offered to encourage minorities to becomes evaluators."

"Higher salary, more responsibilities and knowing that one will get a JOB as a professional evaluator. Right now, this field is becoming less diverse."

Summary

Knowledge of teaching and learning is important when evaluating math and science projects. Some knowledge of K-12 math and science content is necessary, but university-level training in research methods and evaluation is essential. On-the-job training that offers opportunities for "clinical" interactions with evaluation experts, along with in-service training, is valuable but should be combined with university-level training.

According to the five interviewees, minority evaluators are needed so that the viewpoints of various cultures and diversity will be at least examined before evaluative opinions are formed.

There is a dearth of African American and Hispanic evaluators to serve on evaluation teams.

Technical staff need procedural knowledge and experience about schools to become effective evaluators.

Educators who serve as evaluators indicated that they needed additional training in SPSS, technology/computers, report writing, statistics and data analysis. They did not mention the need to learn evaluation theories.

Lack of minority evaluators was attributed to multiple factors: lack of support to improve qualifications within school systems; lack of finances; and sometimes, inadequate time to seek external professional development in colleges and universities.

Evaluation as a profession is unknown to many undergraduate and graduate college students because of lack of communication about the profession and lack of outreach activities, particularly for minorities, by the colleges and universities.

Conclusion

The two qualitative information-gathering approaches provided pertinent information about increasing the number of underrepresented minority evaluators available to evaluate mathematics and science projects. Access to several opinions about the meaning of data gathered from multi-ethnic students helps to ensure that meaningful results will be reported. Skilled and talented minority evaluators can be developed with the proper framework for training: university course work and practical evaluation experiences, particularly those with skilled evaluation mentors. Aspiring non-traditional minority evaluators should be sought through outreach activities and provided with financial assistance so that they can acquire university graduate degrees in evaluation. Although multi-ethnic evaluation teams are desirable for a number of reasons, there is currently a dearth of such evaluators. The National Science Foundation should step forward to train minority evaluators of science and technology projects.

The Need for the Participation of Minority Professionals in Educational Evaluation
Henry T. Frierson

Introduction

If there were a means to accurately determine the number of individuals from underrepresented minority groups who are professionally involved in program evaluation, the obvious hunch is that the number would be dismally small. Moreover, regarding those who have received formal graduate education and training in program evaluation, one can further assume that, as in essentially every other academic discipline, minorities are scarce among the formally trained. One can merely look at the number of underrepresented minorities receiving doctorates in the social sciences and research-based educational fields to quickly surmise that the number of individuals of color who might be formally trained in program evaluation is and will remain low (see Sanderson and Dugoni, 1999). In the social sciences, the combined percentage of 1997 doctoral degrees received by African Americans, American Indians, Mexican Americans and Puerto Ricans was only 6.6 percent. In education, when the administrative degrees were removed, the percentage was just 7.2 percent. The social sciences and education are the general fields from which the majority of professional evaluators tend to come, and given the proportion of minority doctoral recipients, their underrepresentation in the field of evaluation will continue unless concerted efforts are made to address the situation.

If the democratization of program evaluation is to occur and if the field is to be truly pluralistic, then the field should apply assiduous efforts to ensure the involvement of more individuals who have been historically and traditionally underrepresented. On the face of it, however, little concerted effort appears to have taken place so far. For example, one must ask how many underrepresented minorities have participated in the NSF/American Educational Research Association Evaluation Training Program (ETP). Although I have not seen the data, my hunch is that the number is quite small. Importantly, however, it was noted in the Proceedings from the January NSF Workshop on Training Mathematics and Science Evaluators that there is a concern about the lack of underrepresented minorities in program evaluation. If this holds true, it is a start.

Needed Effort

It is widely recognized that there is a clear need to educate and train individuals in program evaluation (see, for example, Altschuld et al., 1994, and Worthen, Sanders, and Fitzpatrick, 1997). Short-term workshops or institutes that expose individuals to evaluation principles and applications are insufficient as a measure to involve more minorities in evaluation. The same holds true for minority principal investigators (PIs) of programs supported by NSF and other funding agencies. Efforts to train PIs in evaluation through two- or three-day workshops are not going to result in the production of more evaluators. PIs have their own disciplines and, with the added responsibility of running their programs, are unlikely to find the time to learn a new discipline such as program evaluation.

Because there is a clear need to educate individuals in program evaluation, the focus should be on in-depth preparation, not brief exposure. Prospective evaluators should gain extensive experience in the evaluation process through formal education and training. This would be most appropriately gained via graduate programs, and the participation of individuals from underrepresented minority groups should be a purposeful objective.

NSF can indeed play a considerable role in the formal education and training of increased numbers of minorities in evaluation. If NSF provides support for individuals to participate in graduate evaluation programs and if there is a clear aim to increase minority participation, many graduate programs will accordingly and seriously seek to enroll minority students. With opportunities to gain funding if minority students are targeted, most graduate programs will be proactive in their efforts to enroll minority students in programs where they can receive education and training in evaluation.

Barriers

A major task is to inform minority students of the opportunities and excitement that exist in program evaluation and then to recruit them into graduate education programs. Special training needs are not an issue for minority students. However, a problem that needs to be overcome is the lack of awareness of what evaluation is and the important role it can play in the continued existence and improvement of promising programs, and certainly of those programs that can be shown to be successful. Making students aware of the concept of evaluation is a critical task. Few graduate students can discern the difference between evaluation and research. Many will probably suggest that evaluation is just one or another form of applied research. Many will feel that if they have had some statistics or research methodology classes, they can naturally conduct evaluation projects. As most of us know, for years that was the thinking of many who made forays, so often unproductive, into evaluation. Evaluation has evolved into a discipline and should be treated as such. Just as graduate students specialize in educational statistics, qualitative research methodology, quantitative research methodology and testing and measurement, there can also be a specialization in program evaluation. Thus, the dissemination of information about the field of evaluation, graduate programs and career opportunities should occur while ensuring that individuals from underrepresented minority groups are targeted.

Recruitment

How can minority students be recruited into graduate evaluation programs? One cannot assume that there is a quick fix or that, just because there is a pronouncement that minority students are being sought, individuals of color will flock to those programs. Moreover, and not surprisingly, a compounding problem is that few individuals from underrepresented minority groups are familiar with or have been exposed to the concept of pursuing a graduate program in evaluation. Increasing awareness would be one of the major efforts that needs to be undertaken.

It is even worth the effort to expose minority undergraduate students to evaluation and to opportunities to participate in evaluation projects, just as undergraduate students are increasingly being given opportunities to participate in research projects. Of course, this exposure and experience should be gained in a setting where quality work is occurring, to ensure that meaningful learning takes place. Such opportunities would lay the groundwork for a flow of individuals seeking graduate programs and advanced degrees related to evaluation. As research opportunities and programs for minority undergraduate students increase, similar effects may accrue for programs providing exposure to and real-life experience in evaluation for undergraduate minority students. A carrot to entice students into evaluation is the concept that a main goal of evaluation is program improvement. If that is the philosophy that is transmitted, rather than that evaluation is essentially about program judgment per se, one would predict that the interest in program evaluation by minorities would expand tremendously. The sense of giving back or contributing to the community still runs strong in a number of minority individuals. This is clear in the types of graduate programs that attract significant numbers of minority students.

On the other hand, there may well be a number of practicing evaluators from underrepresented minority groups who do not have advanced degrees. The January proceedings cited the difficulty of providing training programs for individuals who may be considered "non-traditional" because of their experiences and economic needs. Such individuals could certainly benefit professionally from the credentials an advanced degree confers; more importantly, the advanced formal training and education should add considerably to their knowledge and skills. Support for such individuals to participate effectively in graduate programs should be made available, and the graduate programs should seek to accommodate and complement the individuals' backgrounds and experiences. It is important that the education program be meaningful for such individuals. Practicing evaluators could probably more readily serve in evaluation positions where they can apply their old and newly acquired skills and knowledge. These individuals might be great candidates for temporary, full-time and fully paid evaluation positions at agencies such as NSF, NIH, the Departments of Education and Defense, and GAO, either while completing their graduate studies or as part of them.

Preparation

As indicated in the proceedings from the January meeting, there appear to be relatively few formal graduate education programs in evaluation despite the need for professionally educated evaluators. Interestingly enough, Worthen, Sanders, and Fitzpatrick (1997) predict that graduate programs in evaluation are unlikely to expand. Ironically, although Worthen et al. made such a prediction, they noted that the need for more trained program evaluators exists precisely for the reasons they enumerated in their chapter on the future of evaluation. They listed, for example, an increase in the opportunity for careers in evaluation; an increasing institutionalization of evaluation; and the need for more trained evaluators. They further predicted that this need for more evaluators will result in quick-fix efforts to provide more in-service training of evaluators. An example is the National Institute of General Medical Sciences of the National Institutes of Health initially employing the American Physiological Society to train principal investigators and program directors of the Minority Access to Research Careers and Minority Biomedical Research Support programs to become evaluators. So far, this effort appears to be less than successful. Despite signs of increased demand for evaluation, Worthen et al. believe that there will be little growth in graduate evaluation programs because university administrators do not understand its importance. If Worthen et al.'s predictions are even partially accurate, then it is even more important to redouble efforts to bring about the involvement of minority individuals in program evaluation because, under normal circumstances, their access will continue to be limited.

An agency such as NSF can play a major role as a catalyst to increase the number of minorities in graduate evaluation programs. An example is the role agencies such as NSF and the NIH have played in the production of science PhDs from underrepresented minority groups. Granted, the numbers are less than impressive, but if one thinks that the situation is bad now, just imagine what it would be if NSF and the NIH did not provide support to encourage the enrollment and training of minority PhD students. The availability of support often galvanizes the call to action. Training grants can be awarded to doctoral programs that can train and prepare program evaluators. Selected programs should have to ensure and demonstrate that they will enroll a sufficient number or proportion of minority students. Additionally, minority graduate students with training in evaluation can be targeted for internships at NSF and other governmental funding agencies, where they might assist with and eventually serve as evaluators of projects funded by these agencies.

The funding of training grants should serve to increase the number of quality graduate programs. The University of California, Berkeley model, for example, appears to be quite impressive, although one might expand the evaluation activities beyond education, the focus of the Berkeley model. The idea of allowing students to do dissertations based on evaluation studies should prove quite beneficial for those who will continue on with careers in evaluation. This approach should allow students to strengthen their skills as evaluators, just as the research dissertation serves to strengthen research skills. The increase in minority students participating in quality graduate programs in evaluation will produce evaluators able to carry out well-developed evaluation studies or conduct and manage high-quality evaluation projects. A number may hold academic positions and teach program evaluation theory and methodology. Further, we can expect some of these individuals to become leaders in the field of evaluation.

Conclusion

Clearly, there is a need for highly trained evaluators. Evaluation is too important to be left in the hands of individuals with less than adequate preparation. Major problems arise from the small number of minorities pursuing graduate education and careers in evaluation and from limited awareness of the field. As mentioned earlier, minorities do not have special training needs, and it would be folly to think so. What is really needed is the opportunity to gain exposure to and experience in evaluation. Moreover, informing individuals of the usefulness and value of evaluation in program development, in addition to program improvement, would make the field even more attractive and gain greater attention from a broader population. Further, to tap practicing evaluators from underrepresented minority groups who lack advanced degrees, opportunities for those individuals should be made more readily available and educational programs should be structured to accommodate their backgrounds. Because of their nontraditional status and the economic constraints these individuals are more likely to face, adequate fellowships and other funding opportunities should exist for them.

References

  • Altschuld, J.W., Engle, M., Cullen, C., Kim, and Macce, B.R. (1994). The Directory of Evaluation Training Programs. New Directions for Program Evaluation, 62, 71-94.
  • Sanderson, A. and Dugoni, B. (1999). Summary Report 1997: Doctorate Recipients from United States Universities. Chicago: National Opinion Research Center.
  • Worthen, B.R., Sanders, J.R., and Fitzpatrick, J.L. (1997). Program Evaluation: Alternative Approaches and Practical Guidelines, 2nd Ed. White Plains, NY: Longman.
  • National Science Foundation (2000). Proceedings from the NSF Workshop on Training Mathematics and Science Evaluators.

An Effective School Evaluation and Training Program
Sandra Fox

In 1988 the Bureau of Indian Affairs, Office of Indian Education Programs, began a program of school improvement for its 185 schools. The Effective Schools improvement process was utilized, and training and technical assistance for schools were provided on the implementation of ten correlates, or characteristics, of effective schools. These characteristics were based upon research on schools that were in high-poverty areas but were achieving results. The 10 correlates were: 1) sense of mission, 2) monitoring and feedback of school and student progress, 3) challenging curriculum and appropriate instruction, 4) access to resources for teaching and learning, 5) high expectations for success, 6) safe and supportive environment, 7) strong instructional leadership, 8) home/school/community partnerships, 9) participatory management/shared governance, and 10) cultural relevance.

A second part of the process included a monitoring and evaluation system to determine the implementation of the 10 correlates of excellence and to gather and report process and outcome data. One-fourth of the 185 schools were visited annually to determine correlate implementation and to acquire data regarding attendance, achievement and other outcomes. A checklist was developed for each of the correlate areas to provide for uniform site visits. Extensive school reports were written outlining site visit findings.

The reports outlined schools' major strength and weakness areas in regard to each of the 10 correlates of excellence and indicated outcome data. Follow-up visits to schools were based upon findings outlined in the school reports.

The outcome data gathered were highlighted in the school reports and were aggregated for the whole system. Extensive databases were maintained. This data-gathering effort led to the school report cards required by the Improving America's Schools Act. Annual reports for the system were submitted to Congress utilizing aggregated process and outcome information gathered through the monitoring and evaluation effort.
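
For readers who think in terms of data structures, the sketch below (in Python) illustrates how the checklist ratings and outcome data described above might be recorded per school visit and rolled up into a system-level summary. It is a minimal, hypothetical illustration: the correlate names come from this report, but the rating scale, field names and aggregation rule are assumptions, not the Bureau's documented format.

```python
# Minimal illustrative sketch (not the Bureau's actual record format) of how
# correlate checklist ratings and outcome data from the schools visited each
# year could be aggregated into a system-level annual summary.
from dataclasses import dataclass, field
from statistics import mean

CORRELATES = [
    "sense of mission",
    "monitoring and feedback of school and student progress",
    "challenging curriculum and appropriate instruction",
    "access to resources for teaching and learning",
    "high expectations for success",
    "safe and supportive environment",
    "strong instructional leadership",
    "home/school/community partnerships",
    "participatory management/shared governance",
    "cultural relevance",
]

@dataclass
class SchoolVisit:
    """One monitoring and evaluation team visit to a school."""
    school: str
    year: int
    # Assumed rating scale: 0 (not begun) through 4 (fully implemented).
    correlate_ratings: dict = field(default_factory=dict)
    # Outcome data gathered on site, e.g., attendance rate, achievement, enrollment.
    outcomes: dict = field(default_factory=dict)

def system_summary(visits: list) -> dict:
    """Average each correlate's implementation rating across the schools visited."""
    summary = {}
    for correlate in CORRELATES:
        ratings = [v.correlate_ratings[correlate]
                   for v in visits if correlate in v.correlate_ratings]
        summary[correlate] = round(mean(ratings), 2) if ratings else None
    return summary

# Example: two hypothetical 1993 visits rolled up for the annual report.
visits_1993 = [
    SchoolVisit("School A", 1993, {c: 3 for c in CORRELATES}, {"attendance_rate": 0.91}),
    SchoolVisit("School B", 1993, {c: 2 for c in CORRELATES}, {"attendance_rate": 0.88}),
]
print(system_summary(visits_1993))
```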

Monitoring and evaluation teams were made up of four or five members, depending upon the size of the school. A cadre of team leaders for the monitoring and evaluation teams was created. The team leaders were Indian educators from outside the Bureau system who were recognized as distinguished educators. They were usually from colleges or universities, were private consultants or were superintendents or principals from public schools. Team members were administrators from public schools with large Indian populations or administrators of Bureau schools and education specialists from the Bureau's Central Office or the Line (district) Offices representing Title I and special education programs.

The team leaders made arrangements with the schools so that the school staff would be ready for the visitations. Using the checklists for each correlate area, the schools did self-evaluations prior to the teams' visits and this information was made available to the teams and provided starting points. The schools also made many reports and other documents available for the teams to review as documentation of correlate implementation and data gathering. On-site, the teams first met with the administrators and school boards to explain the process before going separate ways to evaluate the implementation of the various correlate areas and to gather outcome data.

Team members were assigned specific correlate areas to investigate during the on-site visits. Each evening, during on-site visits that lasted at least three days, team members met to share information from their assigned correlate areas and from other correlates if they observed something pertinent. The team leaders gathered information from each of the team members to write extensive reports on the school's implementation of the correlates and outcome data. The reports were written after the teams left the school, but executive summaries were crafted by the teams and presented to the administrators and school boards before the teams left. Sometimes the school asked that the executive summaries be presented to as many staff members as possible in exit presentations. The final written reports were provided to the schools no later than two weeks after the visitations.

Prior to each school year, a training session was provided for team leaders and team members. Training included in-depth information on each correlate area with the latest research findings and recommendations, on outcome data gathering, and on all aspects of the monitoring and evaluation process, including such things as following local protocol and filing for consultant fees and travel expenses. Team leaders and members from outside the Bureau system were paid consultant fees and travel expenses. Team members from inside the system were reimbursed only for travel expenses.

At the end of each school year, the team leaders and team members were brought together for a debriefing session. At that time, successful practices and problems with the process were discussed. Modifications were made, if necessary. The checklists were revised for the next year if it was found that items were not providing reliable or valid information. New items were added if it was found that certain variables were affecting the success of implementation of the correlates but were not addressed in the checklists.
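
The report does not describe how the teams judged that a checklist item was not providing reliable or valid information. As one hedged illustration of a generic screen that could support such a decision, the Python sketch below flags items whose ratings do not track the rest of the checklist (a corrected item-total correlation); the 0.2 threshold and the sample ratings are purely hypothetical, not part of the Bureau's documented procedure.

```python
# Illustrative item screen: flag checklist items whose ratings do not track the
# rest of the checklist (low corrected item-total correlation). This is a generic
# psychometric check, not the Bureau's documented revision procedure.
from statistics import mean, pstdev

def corrected_item_total(scores: list, item: int) -> float:
    """Pearson correlation between one item's ratings and the sum of all
    other items, computed across schools (rows of `scores`)."""
    x = [row[item] for row in scores]
    y = [sum(row) - row[item] for row in scores]
    sx, sy = pstdev(x), pstdev(y)
    if sx == 0 or sy == 0:
        return 0.0
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy)

def flag_weak_items(scores: list, threshold: float = 0.2) -> list:
    """Return indices of items whose corrected item-total correlation falls
    below the (illustrative) threshold: candidates for revision next year."""
    return [i for i in range(len(scores[0]))
            if corrected_item_total(scores, i) < threshold]

# Example: hypothetical ratings for 4 checklist items across 5 schools.
ratings = [
    [3, 4, 3, 1],
    [2, 2, 2, 4],
    [4, 4, 4, 0],
    [1, 2, 1, 3],
    [3, 3, 3, 2],
]
print(flag_weak_items(ratings))  # item 3 moves against the others, so it is flagged
```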

This monitoring and evaluation process was started in 1990 and ended in 1995 when the Bureau of Indian Affairs' Central Office received a 50 percent budget cut from Congress.

Major results of the Bureau's Effective Schools project included increased academic achievement, increased attendance and increased enrollment in the system. Another outcome was the development of many Indian educators trained in the Effective Schools process and its evaluation component. Many of these individuals are still using their training to improve their own schools or to provide training for others.

Twelve team leaders and 200 different team members were involved in the process over the five-year period. The monitoring and evaluation process was coordinated by a staff of four professionals and one clerical staff member. This group secured and trained the team members, scheduled the on-site visitations, received the written reports and disseminated them to the schools, maintained databases, and summarized the process and outcome data for annual reports for the system and to Congress.

At present, the need for many more Indian evaluators is great. The Department of Education, as a result of President Clinton's 1998 executive order, has made research in Indian education a priority. The National Science Foundation needs more Indian evaluators to assist in its evaluation processes. The Bureau of Indian Affairs wants to resurrect the Effective Schools monitoring and evaluation process. Many of the evaluators who were involved in the Effective Schools process from 1990 to 1995 have moved on to positions that make them less available to serve on evaluation teams, although a handful are still actively using the process to evaluate schools. Wherever they are, however, the process has influenced their work.

Most Indian evaluators are not trained in evaluation but have learned by being involved in some evaluation process. The need for formal training is critical, however, as schools serving Indian students participate in results-oriented school reform and as outcome data and sound evaluation practices become more and more important. Because American Indians are the smallest minority group, statistics on Indian education are often not reported, even by the National Center for Education Statistics. It will be up to Indians to ensure that evaluation of Indian education is carried out and that it is done in the most fair and effective way.
