Chapter 9
Glossary

Accuracy: The extent to which an evaluation is truthful or valid in what it says about a program, project, or material.
Achievement: Performance as determined by some type of assessment or testing.
Affective: Consists of emotions, feelings, and attitudes.
Anonymity (provision for): Evaluator action to ensure that the identity of subjects cannot be ascertained during the course of a study, in study reports, or in any other way.
Assessment: Often used as a synonym for evaluation. The term is sometimes recommended for restriction to processes that are focused on quantitative and/or testing approaches.
Attitude: A person’s mental set toward another person, thing, or state.
Attrition: Loss of subjects from the defined sample during the course of a longitudinal study.
Audience(s): Consumers of the evaluation; those who will or should read or hear of the evaluation, either during or at the end of the evaluation process. Includes those persons who will be guided by the evaluation in making decisions and all others who have a stake in the evaluation (see stakeholders).
Authentic assessment: Alternative to traditional testing, using indicators of student task performance.
Background: The contextual information that describes the reasons for the project, including its goals, objectives, and stakeholders’ information needs.
Baseline: Facts about the condition or performance of subjects prior to treatment or intervention.
Behavioral objectives: Specifically stated terms of attainment to be checked by observation or by test/measurement.
Bias: A consistent alignment with one point of view.
Case study: An intensive, detailed description and analysis of a single project, program, or instructional material in the context of its environment.
Checklist approach: An approach in which checklists serve as the principal instrument for practical evaluation, especially for investigating the thoroughness of implementation.
Client: The person or group or agency that commissioned the evaluation.
Coding: The process of translating a given set of data or items into descriptive or analytic categories to be used for data labeling and retrieval.
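As a rough illustration of this definition, the Python sketch below assigns analytic categories to open-ended responses by simple keyword matching. The codebook categories, keywords, and responses are hypothetical and are not drawn from this handbook.

    # Hypothetical codebook: analytic categories and the keywords that signal them.
    CODEBOOK = {
        "satisfaction": ["enjoyed", "liked", "helpful"],
        "barriers": ["difficult", "confusing", "no time"],
    }

    def code_response(text):
        """Return every category whose keywords appear in the response."""
        lowered = text.lower()
        return [category for category, keywords in CODEBOOK.items()
                if any(keyword in lowered for keyword in keywords)]

    responses = [
        "I enjoyed the workshop and found the handouts helpful.",
        "The schedule was confusing and I had no time to finish.",
    ]
    for response in responses:
        print(response, "->", code_response(response))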
Cohort: A term used to designate one group among many in a study. For example, "the first cohort" may be the first group to have participated in a training program.
Component: A physically or temporally discrete part of a whole. It is any segment that can be combined with others to make a whole.
Conceptual scheme: A set of concepts that generate hypotheses and simplify description.
Conclusions (of an evaluation): Final judgments and recommendations.
Content analysis: A process using a parsimonious classification system to determine the characteristics of a body of material or practices.
Context (of an evaluation): The combination of factors accompanying the study that may have influenced its results, including geographic location, timing, political and social climate, economic conditions, and other relevant professional activities in progress at the same time.
Criterion, criteria: A criterion (variable) is whatever is used to measure a successful or unsuccessful outcome, e.g., grade point average.
Criterion-referenced test: A test whose scores are interpreted by referral to well-defined domains of content or behaviors, rather than by referral to the performance of some comparable group of people.
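A minimal Python sketch of this idea is shown below; the mastery cutoff of 80 and the scores are hypothetical illustrations, not values from any actual test.

    MASTERY_CUTOFF = 80   # hypothetical cutoff for a defined content domain

    def interpret(score):
        """Judge a score against the fixed criterion, not against other examinees."""
        return "mastery" if score >= MASTERY_CUTOFF else "non-mastery"

    print(interpret(85))  # mastery
    print(interpret(72))  # non-mastery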
Cross-case analysis: Grouping together data from different persons in response to common questions, or analyzing different perspectives on issues under study.
Cross-sectional study: A cross-section is a random sample of a population, and a cross-sectional study examines this sample at one point in time. Successive cross-sectional studies can be used as a substitute for a longitudinal study. For example, examining today's first-year students and today's graduating seniors may enable the evaluator to infer that the college experience has produced or can be expected to accompany the difference between them. The cross-sectional study substitutes today's seniors for a population that cannot be studied until 4 years later.
Data display: A compact form of organizing the available information (for example, graphs, charts, matrices).
Data reduction: Process of selecting, focusing, simplifying, abstracting, and transforming data collected in written field notes or transcriptions.
Delivery system: The link between the product or service and the immediate consumer (the recipient population).
Descriptive data: Information and findings expressed in words, unlike statistical data, which are expressed in numbers.
Design: The process of stipulating the investigatory procedures to be followed in doing a specific evaluation.
Dissemination: The process of communicating information to specific audiences for the purpose of extending knowledge and, in some cases, with a view to modifying policies and practices.
Document: Any written or recorded material not specifically prepared for the evaluation.
Effectiveness: Refers to the conclusion of a goal achievement evaluation. "Success" is its rough equivalent.
Elite interviewers: Well-qualified and specially trained persons who can successfully interact with high-level interviewees and are knowledgeable about the issues included in the evaluation.
Ethnography: Descriptive anthropology. Ethnographic program evaluation methods often focus on a program’s culture.
Executive summary: A nontechnical summary statement designed to provide a quick overview of the full-length report on which it is based.
External evaluation: Evaluation conducted by an evaluator from outside the organization within which the object of the study is housed.
Field notes: Observer’s detailed description of what has been observed.
Focus group: A group selected for its relevance to an evaluation that is engaged by a trained facilitator in a series of discussions designed for sharing insights, ideas, and observations on a topic of concern to the evaluation.
Formative evaluation: Evaluation designed and used to improve an intervention, especially when it is still being developed.
Hypothesis testing: The standard model of the classical approach to scientific research, in which a hypothesis is formulated before the experiment designed to test its truth.
Impact evaluation: An evaluation focused on outcomes or payoff.
Implementation evaluation: Assessing program delivery (a subset of formative evaluation).
In-depth interview: A guided conversation between a skilled interviewer and an interviewee that seeks to maximize opportunities for the expression of a respondent's feelings and ideas through the use of open-ended questions and a loosely structured interview guide.
Informed consent: Agreement by the participants in an evaluation to the use, in specified ways for stated purposes, of their names and/or confidential information they supplied.
Instrument: An assessment device (test, questionnaire, protocol, etc.) adopted, adapted, or constructed for the purpose of the evaluation.
Internal evaluator: A staff member or unit from the organization within which the object of the evaluation is housed.
Intervention: Project feature or innovation subject to evaluation.
Intra-case analysis: Writing a case study for each person or unit studied.
Key informant: Person with background, knowledge, or special skills relevant to topics examined by the evaluation.
Longitudinal study: An investigation or study in which a particular individual or group of individuals is followed over a substantial period of time to discover changes that may be attributable to the influence of the treatment, or to maturation, or the environment. (See also cross-sectional study.)
Matrix: An arrangement of rows and columns used to display multi-dimensional information.
Measurement: Determination of the magnitude of a quantity.
Mixed method evaluation: An evaluation for which the design includes the use of both quantitative and qualitative methods for data collection and data analysis.
Moderator: Focus group leader; often called a facilitator.
Nonparticipant observer: A person whose role is clearly defined to project participants and project personnel as an outside observer or onlooker.
Norm-referenced tests: Tests that measure the relative performance of the individual or group by comparison with the performance of other individuals or groups taking the same test.
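For contrast with the criterion-referenced entry above, the Python sketch below interprets a score as a percentile rank within a comparison group; the norm-group scores are hypothetical illustration data.

    # Hypothetical scores from the comparison (norm) group.
    norm_group = [52, 61, 64, 70, 73, 77, 80, 84, 88, 95]

    def percentile_rank(score):
        """Percentage of the norm group scoring at or below the given score."""
        at_or_below = sum(1 for s in norm_group if s <= score)
        return 100.0 * at_or_below / len(norm_group)

    print(percentile_rank(77))  # 60.0 -> at or above 60% of the norm group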
Objective: A specific description of an intended outcome.
Observation: The process of direct sensory inspection involving trained observers.
Ordered data: Non-numeric data in ordered categories (for example, students’ performance categorized as excellent, good, adequate, and poor).
Outcome: Post-treatment or post-intervention effects.
Paradigm: A general conception, model, or "worldview" that may be influential in shaping the development of a discipline or subdiscipline. (For example, "The classical, positivist social science paradigm in evaluation.")
Participant observer: A person who becomes a member of the project (as participant or staff) in order to gain a fuller understanding of the setting and issues.
Performance evaluation: A method of assessing what skills students or other project participants have acquired by examining how they accomplish complex tasks or the products they have created (e.g., poetry, artwork).
Planning evaluation: Evaluation planning is necessary before a program begins, both to get baseline data and to evaluate the program plan, at least for evaluability. Planning avoids designing a program that cannot be evaluated.
Population: All persons in a particular group.
Prompt: A reminder used by interviewers to obtain complete answers.
Purposive sampling: Creating samples by selecting information-rich cases from which one can learn a great deal about issues of central importance to the purpose of the evaluation.
Qualitative evaluation: The approach to evaluation that is primarily descriptive and interpretative.
Quantitative evaluation: The approach to evaluation involving the use of numerical measurement and data analysis based on statistical methods.
Random sampling: Drawing a number of items of any sort from a larger group or population so that every individual item has a specified probability of being chosen.
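The Python sketch below illustrates simple random sampling from a hypothetical list of participant IDs, where every member has the same known probability of selection (here 20/200).

    import random

    # Hypothetical population of 200 participant IDs.
    population = ["participant_%03d" % i for i in range(1, 201)]

    random.seed(42)  # fixed seed so the draw can be reproduced
    sample = random.sample(population, k=20)  # each member has probability 20/200

    print(sample[:5])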
Recommendations: Suggestions for specific actions derived from analytic approaches to the program components.
Sample: A part of a population.
Secondary data analysis: A reanalysis of data using the same or other appropriate procedures to verify the accuracy of the results of the initial analysis or for answering different questions.
Self-administered instrument: A questionnaire or report completed by a study participant without the assistance of an interviewer.
Stakeholder: One who has credibility, power, or other capital invested in a project and thus can be held to be, to some degree, at risk with it.
Standardized tests: Tests that have standardized instructions for administration, use, scoring, and interpretation, with standard printed forms and content. They are usually norm-referenced tests but can also be criterion referenced.
Strategy: A systematic plan of action to reach predefined goals.
Structured interview: An interview in which the interviewer asks questions from a detailed guide that contains the questions to be asked and the specific areas for probing.
Summary: A short restatement of the main points of a report.
Summative evaluation: Evaluation designed to present conclusions about the merit or worth of an intervention and recommendations about whether it should be retained, altered, or eliminated.
Transportable: Describes an intervention that can be replicated at a different site.
Triangulation: In an evaluation, triangulation is an attempt to get a fix on a phenomenon or measurement by approaching it via several (three or more) independent routes. This effort provides redundant measurement.
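The Python sketch below illustrates this idea with hypothetical estimates of the same quantity (weekly hours of program participation) obtained by three independent routes, checking how closely they agree; the data sources and values are invented for illustration.

    # Hypothetical estimates of the same quantity from three independent routes.
    estimates = {
        "self_report_survey": 6.5,
        "attendance_logs": 5.8,
        "staff_interviews": 6.2,
    }

    values = list(estimates.values())
    spread = max(values) - min(values)
    print("Range across methods: %.1f hours" % spread)
    print("Estimates converge" if spread <= 1.0 else "Estimates diverge; investigate")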
Utility: The extent to which an evaluation produces and disseminates reports that inform relevant audiences and have beneficial impact on their work.
Utilization (of evaluations): Use and impact are terms used as substitutes for utilization. Sometimes seen as the equivalent of implementation, but this applies only to evaluations that contain recommendations.
Validity: The soundness of the inferences made from the results of a data-gathering process.
Verification: Revisiting the data as many times as necessary to cross-check or confirm the conclusions that were drawn.
Sources:

Jaeger, R. M. (1990). Statistics: A Spectator Sport. Newbury Park, CA: Sage.

Joint Committee on Standards for Educational Evaluation (1981). Standards for Evaluations of Educational Programs, Projects, and Materials. New York: McGraw-Hill.

Scriven, M. (1991). Evaluation Thesaurus (4th ed.). Newbury Park, CA: Sage.

Authors of Chapters 1-7.

