Chapter 8:

Economic and Social Significance of Information Technologies

IT, Education, and Knowledge Creation

Information technologies are likely to have a substantial impact on the entire spectrum of education by affecting how we learn, what we know, and where we obtain knowledge and information. IT influences everything from the creation of scientifically derived knowledge (see "IT, Research, and Knowledge Creation") to how children learn in schools; lifelong learning by adults; and the storage of a society's cumulative knowledge, history, and culture. Because IT networks create remote sources of instruction and geographically distributed information resources, learning, knowledge generation, and information retrieval are no longer confined to the traditional spaces of laboratories, schools, libraries, and museums.

IT, Research, and Knowledge Creation

IT has fundamentally enhanced the conduct of scientific research, data modeling and analysis, and the creation of scientifically derived knowledge. The most pervasive change is the emergence of the "global laboratory"—Internet connections now allow scientists to control instrumentation from a distance, share research findings instantaneously with other scholars around the world, and develop international databases and collaborations. The research applications of networking are extensive (see NRC 1993 and 1994b); IT has gradually eroded the geographic constraints on the conduct of research and knowledge cumulation.

High-speed computing and advanced software applications have also enhanced the analysis of scientific data and drastically shortened the amount of time required to perform certain scientific tasks. For example, data visualization provides dynamic, three-dimensional modeling of complex systems data such as fluid dynamics, while imaging technology provides visual tracking of such minute cellular changes as embryo development. In biology, major advances in rapid gene sequencing and protein mapping are the direct result of highly advanced computational programs (Science 1996).

IT also provides knowledge we would not otherwise have; satellite-based technologies are particularly noteworthy. For example, the global positioning system allows geologists to measure precise movements in the earth's crust, thus identifying communities that are vulnerable to earthquakes and aiding in earthquake prediction (Herring 1996). Global climate change modeling similarly owes its advances to satellite-based data collection and transmission; in turn, more detailed knowledge of historical climate patterns has improved our understanding of some infectious diseases and epidemics (Colwell 1996).

Advancement in scientific knowledge is not limited to higher education. An innovative IT-based program includes precollege students from around the world in the research process. The Global Learning and Observations to Benefit the Environment (GLOBE) Program provides environmental science education to K-12 students in more than 3,500 schools across 45 countries. These students collect environmental data for use by the scientific community in research on the dynamics of the earth's global environment. Students report data on the Web, generate graphical displays and contour maps, and interact with international research scientists as well as other GLOBE students (Finarelli n.d.).

As a consequence, new education activities such as distance learning are clearly on the rise.[18]  (See chapter 2, "Distance Learning and Its Impact on S&E Education.") In 1984, fewer than 10 states had distance learning facilities, but by 1993 all 50 states did. In the same year, the 20 largest providers of satellite-based instruction purchased 75,000 hours of satellite transponder time; assuming that coursework would be broadcast during a 40-hour week, these top 20 providers purchased the equivalent of 36 instructional years (Katz, Tate, and Weimer 1995).[19]  Internet Distance Education Associates estimates that more than 100 U.S. colleges and universities offer courses over the Internet;[20]  National Technical University, a satellite TV educational institution, has over 800 receiver sites (Katz, Tate, and Weimer 1995). Also new are digital museums, which allow millions of individuals access to history and culture they would not otherwise have.[21]  These include Fisk University's collection of 7 million African American artifacts; the Vatican's 600-year-old collection of over 1 million books and manuscripts (including the only known copies of many historically significant documents); and the Library of Congress's American Memory Collection of documents, films, photographs, and sound recordings (Memmott 1997). Libraries are undergoing comparably profound digital revolutions (see "IT and the Changing Nature of Libraries").

IT and the Changing Nature of Libraries

Historically, the public library has played a central role as an information resource in American communities. Gallup polls indicate that the majority of American adults still regard the public library as a key center for educational support, independent learning, research, community information, and reference resources (NRC 1994b). Not surprisingly, the United States has the most extensive public library system in the world, and more than one-half of the adult population and three-quarters of adolescent children use public libraries.

With the advent of new information technologies, the nature of library information resources is changing. Electronic information services are increasingly common, and CD-ROM databases, remote database searching, and on-line catalogs are available in more than two-thirds of the U.S. public libraries that serve communities of 100,000 or more individuals. Networks allow libraries to leverage their resources through interlibrary loans and have stimulated the development of the "digital library," a concept that reflects the digitization of library resources and collections. The ability to store and transmit mixed-media resources (see the text discussion of digitized museums) gives greater significance to nonprint information resources (such as sound recordings and visual images) and has stimulated librarians to expand and diversify their information management skills. Stewardship of library information has moved from a "just-in-case" model of on-site materials to a "just-in-time" model of resource access and sharing.

The Internet has significantly expanded the resource base of public libraries. Library-based access provides a number of advantages to Americans. First, it is an affordable, equitable, and ubiquitous point of access for most individuals. Second, locally developed databases and network linkages are highly responsive to the needs of the local community and can facilitate more effective network use. Third, the library can act as an electronic gateway to information for people who would not otherwise have access to the Internet.

Barriers to library connectivity exist, as evidenced by the relatively low level of library linkage to the Internet. Fewer than half of public libraries (45 percent) were connected to the Internet in 1996, a figure that masks considerable variation by community size: fewer than one-third of libraries in communities of less than 5,000 individuals had Internet access (31 percent), compared to 82 percent of libraries serving metropolitan areas of more than 1 million people. Of the roughly 500 libraries that serve metropolitan areas of 100,000 or more, 93 percent offer Internet access to library staff, and one-half offer access directly to patrons (ALA 1996). As access fees decline, more libraries are likely to connect to the Internet: representatives of more than 70 percent of those libraries not now connected say they plan to do so within the next year (ALA 1996).

In contrast, academic libraries are highly "wired," but with variations by type of institution. Associate's degree colleges lag behind other institutions of higher education on most dimensions of electronic access to resources, including direct patron access to the Internet, the availability of electronic catalogs, and the library's development of its own home page for students. (See figure 8-14.)

For more information on the substantial changes in library activities related to the use of IT, see NRC (1994b), Lyman (1996), and Lesk (1997).

Because of the growing role of IT in learning and in expanding the scope of educational resources, many people are concerned about the accessibility of information technologies to the nation's schools and the effects of these technologies on students, particularly K-12 children. This section looks at the diffusion of IT in U.S. elementary and secondary schools as well as the effects of computer-based instruction on student achievement. Inequities in student (and household) access to IT are considerable; these are addressed in the final section of this chapter, "IT and the Citizen."

IT and Precollege Education

National pressures to increase the scope and use of information technologies in U.S. elementary and secondary schools are persistent (PCAST 1997, NIIAC 1995, The Children's Partnership 1996, and McKinsey and Company 1995). In most instances, the primary reason for greater emphasis on IT is adequate job training, although there is notable concern over avoiding a stratified society in which the "information haves" are divided from the "information have-nots." Greater use of IT at the precollege level is frequently understood as the training students need to be competent members of the information society and to enjoy an information-enhanced quality of life.

Assumptions about the educational, employment, and life-enhancing benefits of IT are not universal, however. Silicon Snake Oil: Second Thoughts on the Information Highway by Clifford Stoll (1995) represents one popularized critique of claims about the social payoff of IT (including educational benefits). Additionally, scholar Larry Cuban (1994) has repeatedly raised the question "should computers be used in classrooms?"; and journalist Todd Oppenheimer (1997) has explored the significant educational opportunity costs that may emerge with greater spending on IT. The fundamental dilemma of computer-based instruction and other IT-based educational technologies is that their cost effectiveness compared to other forms of instruction (for example, smaller class sizes, self-paced learning, peer teaching, small group learning, innovative curricula, and in-class tutors) has never been proven (Rosenberg 1997, Cuban 1994, and Kulik and Kulik 1991). Although real IT learning benefits have been measured and demonstrated, whether the magnitude of these benefits is sufficiently large to justify limiting other school curricula and programs is open to question.

The budget issues and educational opportunity costs associated with IT are not trivial. In a report to the U.S. Advisory Committee on the National Information Infrastructure, McKinsey and Company (1995) estimated that about 1.3 percent of the national school budget is spent on instructional technology. Heightening the level of IT in K-12 public schools would require raising this share to as much as 3.9 percent, depending on the degree of IT intensity desired.[22]  This spending increase is for hardware only, and does not include IT operational expenses or the cost of teacher training, a significant factor in the effectiveness of computer-based instruction (U.S. OTA 1995, McKinsey and Company 1995, Ryan 1991, and PCAST 1997). Yet inflation and the expanding school-age population account for almost all growth in school budgets; the McKinsey report states that only 1.6 percent of the growth in per student expenditures is available for IT and other discretionary spending, including building repair and maintenance, school security, teacher salaries, and the operating costs of IT hardware. Because school districts are under increasing fiscal stress, expanding IT resources could mean cutting other important programs. Oppenheimer (1997) details sacrifices in art, music, physical education, vocational trade ("shop") classes, and textbook purchases that have been made for computers. The negative impacts of these sacrifices on learning and job skills are not usually considered in the growing emphasis on CBI. Also not considered is the potential decline of "collateral" and experiential learning that may occur as more instruction is shifted to the computer (Cuban 1994).

Our understanding of the learning impacts of CBI is primitive. The research reported below does find systematically positive learning benefits associated with computer use in schools, but the magnitude of the benefits is often modest and varies by level of education, characteristics of the learners, characteristics of the instruction and teachers, subject matter, and type of computer application. CBI is a broad category that captures computer-assisted instruction (typically drill-and-practice exercises or tutorial instruction), computer-managed instruction (the computer monitors student performance and progress and guides student use of instructional materials), and computer-enriched instruction (the computer functions as a problem-solving tool). Categories of software use are even more extensive and include generic information-handling tools, real-time data acquisition, simulations, multimedia, educational games, cognitive tools, intelligent tutors, construction environments, virtual communities, information access environments, information construction environments, and computer-aided instruction (Rubin 1996). Software (courseware) for inquiry-based learning,[23] the ultimate goal of most CBI advocates and the most cognitively demanding form of learning, is in short supply. This resource deficiency may be one cause of the limited measurable impacts of CBI on higher order thinking skills and learning (Kulik and Kulik 1991, PCAST 1997, and McKinsey and Company 1995).

Research data suggest that CBI appears to be most effective with rote learning at the elementary school level and in special education settings (that is, in remedial work, or with low achievers or those with learning disabilities). The impact of CBI on critical thinking and synthesis skills is much harder to demonstrate. Also unclear is how to maximize the effectiveness of CBI in the classroom: empirical studies often account for 40 or more contextual variables related to learning and instruction. Schofield (1995) sheds light on this contextual complexity of computers and learning in her detailed two-year case study of computer use at a typical urban high school. Her findings demonstrate how the social organization of the school and classrooms affects computer-related learning, behavior, attitudes, and outcomes. On balance, Schofield found that computers aggravated pre-existing differences between academically advanced and "regular" students, boys and girls, and college-bound and non-college-bound students.

Computer-based instruction clearly does not take place in a vacuum, but systematic understanding of the social and cognitive complexity of computer-based learning is limited. As the President's Committee of Advisors on Science and Technology, Panel on Educational Technology, concluded,

Funding levels for educational research have been alarmingly low...In 1995, less than 0.1 percent of our nation's expenditures for elementary and secondary education were invested to determine which educational techniques actually work, and to find ways to improve them (PCAST 1997).

Diffusion of IT in K-12 Education

Over the past 20 years, computers and other information technologies have been diffused widely in the U.S. K-12 educational system. Ninety-eight percent of all schools have at least 1 microcomputer for instructional use, and about 80 percent have 15 or more (see figure 8-5 earlier in this chapter). One-half of a school's instructional computers are located in the classroom, 37 percent are located in computer labs, and approximately 13 percent are placed in other public access rooms such as a library or media center. This distribution has not changed since the mid-1980s.[24]

Estimates of the average number of students per computer in the early 1980s vary widely, from 42 to 125 students per computer; but data consistently indicate that the ratio is now approximately 10 or 11 students per computer (ETS 1997).[25]  Disparities in student access are considerable, however. The differences in students per computer by state of residence range from about 6 pupils per computer in Florida to 16 in Louisiana. (See figure 8-15.)

Schools have also adopted other information technologies. For example, in 1996, 85 percent of schools had access to multimedia computers, 64 percent had Internet access, and 19 percent had satellite connections. (See figure 8-16.) Diffusion of these additional information technologies has been quite rapid. In 1992, only 8 percent of schools had an interactive videodisk, compared to 35 percent in 1995; 5 percent of schools had local area networks, compared to 38 percent in 1995; and only 1 percent had satellite dishes, compared to 19 percent in 1995. (See appendix table 8-7.)

School access to these technologies does not necessarily mean that they are used for instructional purposes, however. The National Center for Education Statistics reported that 65 percent of all public schools had Internet access in 1996, but only 14 percent of school rooms actually used for instruction were linked to the Internet (NCES 1997).[26]  On a state-by-state basis, the number of schools with Internet access varies far more than the student-to-computer ratios. (See figure 8-17.)

A 1992 survey of elementary and high school principals indicates that the three most important reasons schools adopt computer technologies are to (1) give students the experience they will need with computers for the future, (2) keep the curriculum and teaching methods current, and (3) improve student achievement (Pelgrum, Janssen, and Plomp 1993). There are, however, notable differences in the ways in which computers are actually used: the higher the school grade, the more computers are used for computer training and the less they are used for teaching academic content. At the fifth grade level, 58 percent of CBI time relates to subject matter, and only 32 percent relates to computer skills such as word processing and spreadsheets. By 11th grade, more CBI time is spent on computer skills (51 percent) than academics (43 percent); instructional use for math and English drops from 35 to 14 percent.[27]  (See text table 8-5.)

Impacts of IT on K-12 Student Learning

A keyword search of ERIC, the primary bibliometric database used for educational research, yields thousands of citations related to computer-assisted instruction and student achievement. The signal characteristic of this research is its seeming lack of comparability: studies range from anecdotal reports to formal experimental designs, many of which control for different sets of variables and include different types of computer use in different subject areas. Moreover, interest in the effects of computers on young people is not limited to learning and achievement. Concerns about the emotional and psychological effects of prolonged exposure to computing environments have also been raised. (See "Children, Computers, and Cyberspace.")

Children, Computers, and Cyberspace

Education scholar Larry Cuban once remarked that "if the full influence, both positive and negative, of television watching on children continues to be debated three decades after its introduction, how can anyone assess the complexity of what happens to children using classroom computers?" (1994, p. 537).

The comparison to television is pertinent, since many of the concerns raised about children, computers, and the Internet are similar to those raised in the past for TV. Of particular concern is the psychological and emotional well-being of children. Cyberspace brings its own set of potential dangers to children, including cyberhate and cyberporn, forms of adult expression that children might not be able to handle.

The extent of the problem—the access by children to age-inappropriate materials on the Internet—is debated intensely. The ease with which children may access sexually explicit materials is disputed (see Gay 1996), but congressional testimony (U.S. Senate 1995) and case studies (Turkle 1995) reveal that young people can easily obtain graphic sexual photographs and engage in "netsex" on the Internet; one young teen also reported being electronically stalked by an adult who clearly knew she was a minor.

The primary legislative effort to protect children on the Internet was the Communications Decency Act of 1996, a law that made providing "indecent" material to minors over the Internet a crime. The act was declared unconstitutional by the Supreme Court in 1997 (Reno v. American Civil Liberties Union) for a variety of reasons, including the fact that indecent material has consistently been protected as a form of free speech in the United States. (The Supreme Court has historically made a distinction between indecent and obscene material; obscenity is not protected under the First Amendment.) Limiting children's access to inappropriate materials has now become a technological challenge, relying on such filtering software as Cyber Patrol™ and CYBERsitter™. Adult verification systems such as Adult Check™ also are becoming more common. Note that children's vulnerability is not limited to sexually explicit materials; parental concerns have also been raised with regard to Internet violence, hate speech, deceptive advertising, "false front" Web sites, and exploitation of children's privacy.

Concerns about core psychological processes—such as self-identity—have also been examined. Sherry Turkle, a behavioral scientist who studies the impact of early and prolonged use of computing environments on children, has uncovered patterns that suggest the computer culture is not benign. Computing and cyberspace may blur children's ability to separate the living from the inanimate, contribute to escapism and emotional detachment, stunt the development of a sense of personal security, and create a hyper-fluid sense of identity (Turkle 1984 and 1995).

While there may be psychological benefits associated with computer-mediated reality (including greater empathy for those of different cultures, sex, or ethnicity; heightened adaptability; and a more flexible outlook on life), the "darker side" of the technology is nonetheless unsettling. Turkle raises the possibility that extensive interaction with cyberspace (especially through multi-user domains) may create individuals incapable of dealing with the messiness of reality, the needs of community building, and the demands of personal commitments (Turkle 1995).

One way of integrating disparate research is with "meta-analysis," a statistical technique used in the field of education and other disciplines.[28]  Meta-analysis provides a way of standardizing the statistical results of quantitative research so that multiple studies can be compared in a reliable manner. About a dozen meta-analyses have been conducted on the effects of computer-based instruction, all of which find positive (though not necessarily large) learning effects. The magnitude of learning effects varies across a host of conditions, including type of instruction, subject matter, duration of the experimental treatment, and degree of teacher training. Detailed information on meta-analysis as an integrating methodology is presented here, as are two meta-analyses of the impacts of CBI on precollege learning. These studies illustrate the methodology of meta-analysis, the resulting metrics, and the interpretation of such metrics.

Meta-Analysis

Meta-analysis is a method for combining the statistical results of research studies that test the same general hypothesis and use the same statistical measures. Meta-analysis related to CBI focuses primarily on the impact of computer usage on student learning, although other educational outcomes may be studied (such as attitudes toward computers, attitudes toward school subjects, and amount of time needed for instruction).

As discussed here, meta-analysis yields a standardized metric called an "effect size." Effect size is a score that measures the difference in performance between experimental and control groups. For CBI, effect size is based on the performance of control and experimental groups on a common examination. As a metric, effect size is expressed as a proportion of standard deviation (a z-score) and has a percentile equivalent; it also controls for the influence of sample size.[29]  The z-score reflects how much better (or worse) the experimental group performed on an exam relative to the mean of the control group. Examples of effect sizes and their interpretation are provided below in the discussion of specific meta-analytic research.
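Because an effect size is a z-score, its percentile equivalent follows directly from the cumulative standard normal distribution. The following is a minimal sketch of the conversion in Python; the function name is illustrative, and the inputs are the effect sizes reported in the meta-analyses discussed in this section:

```python
from math import erf, sqrt

def percentile_equivalent(effect_size: float) -> float:
    """Convert an effect size (a z-score) into a percentile of the
    standard normal distribution: where the average experimental-group
    student scores relative to the control-group distribution."""
    return 100 * 0.5 * (1 + erf(effect_size / sqrt(2)))

# Effect sizes reported in this section:
print(round(percentile_equivalent(0.36)))   # 64 (Kulik and Kulik 1991)
print(round(percentile_equivalent(0.309)))  # 62 (Ryan 1991)
print(round(percentile_equivalent(0.53)))   # 70 (well-trained teachers, Ryan 1991)
```

An effect size of zero maps to the 50th percentile, that is, no difference from the control group.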

Meta-analysis is valuable for two key reasons. First, it allows researchers to cumulate and integrate the findings of multiple studies (particularly those that are small in size) into a single measure of outcome. Second, it estimates a specific magnitude for an independent variable's impact. Other methods that aggregate diverse studies, such as "tallies" and "box scores," indicate overall patterns and trends in research findings but do not estimate the degree of influence of one variable on another.

Meta-analysis has a number of potential weaknesses, however. First, biased or flawed studies, when cumulated, will generate a biased or flawed meta-analysis. Second, for some types of meta-analysis, aggregated data can lead to "Simpson's Paradox"—an outcome in which the statistics indicate a relationship opposite to what is actually occurring. Simpson's Paradox is most likely to occur with the aggregation of categorical data, such as risk assessment data (see Utts 1996). Third, the results of meta-analysis are point estimates; that is, they are single numeric values (z-scores) reported without a confidence interval or any other estimate of their precision. As a consequence, meta-analysis z-scores carry a level of unmeasured uncertainty.

Meta-analysis therefore relies upon rigorous research designs to avoid potential pitfalls. Two factors affect the overall quality of a meta-analytic study (Hittleman and Simon 1997, and Utts 1996). One factor is the decision about what primary research to include in the synthesis. Ideally, only research that reflects experimental and quasi-experimental design should be used; target groups should contain large numbers (typically 20 or more units of observation); data on means, standard deviations, and sample sizes should be reported; and methodologically flawed studies should be eliminated from the analysis. In this way, only the most rigorous studies are included for synthesis. Second, because meta-analysis must account for a large number of other factors that can potentially affect the dependent variable, coding of these additional factors must be particularly precise and consistent. As a consequence, the meta-analysis research design must contain provisions for checking intra- and/or inter-coder reliability. The two studies reported below conform to the general requirements for proper meta-analysis and avoid the most common pitfalls associated with this type of research.
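The cumulation step itself can be sketched briefly. The inverse-variance (fixed-effect) weighting shown below is one standard way to pool effect sizes so that larger, more precise studies count for more; the study values are hypothetical, and this is not necessarily the exact procedure used in the studies reported below:

```python
from math import sqrt

def pooled_effect_size(studies):
    """Pool per-study effect sizes into a single estimate using
    inverse-variance (fixed-effect) weights.  Each study is a tuple
    of (effect_size, variance_of_effect_size)."""
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return pooled, se

# Hypothetical effect sizes and variances from three small CBI studies:
studies = [(0.40, 0.04), (0.25, 0.09), (0.50, 0.16)]
d, se = pooled_effect_size(studies)  # pooled d is about 0.38, se about 0.15
```

Note that the standard error computed here is exactly the precision estimate that, as the text observes, reported meta-analysis z-scores often omit.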

Meta-Analyses of Computer-Based Instruction

In general, traditional literature reviews and box-score tallies of computer-based instruction research indicate positive effects. CBI appears to result in some degree of observable achievement effect most of the time, but not always. Meta-analyses try to quantify precisely the magnitude of these effects; over a dozen meta-analyses of CBI covering precollege and postsecondary education have been conducted.[30]  Two of these meta-analyses are discussed in detail here.

Kulik and Kulik (1991) performed a meta-analysis on 254 studies conducted between 1966 and 1986. The CBI effectiveness studies included in the meta-analysis reflect (1) all levels of education, both precollege and postsecondary; (2) controlled evaluations[32]  in real classroom settings, not laboratories; and (3) research free from a number of methodological flaws specified by the authors. Even though the studies themselves reflect controlled research designs, they often capture different contextual factors. As a consequence, Kulik and Kulik coded a large set of "control" variables that might affect learning outcomes, such as the type of computer application, the instructor, and course content. (See text table 8-6.)

Although the Kulik and Kulik study synthesized findings for several educational levels, many findings for K-12 CBI were reported separately. Two examples and interpretations of K-12 effect size from this study are presented here. In the first example, the overall effect size for 68 studies on computer-assisted instruction at the precollege level was 0.36. Because effect size is a standardized value based on the normal distribution and standard deviation, it can be interpreted as a z-score and converted to a percentile equivalent. A z-score of 0.36 is equivalent to the 64th percentile; students using CBI scored (on average) in the 64th percentile on measures of learning and achievement compared to the 50th percentile for students in a traditionally taught class.[33] 

Obviously, factors other than CBI could have influenced these learning outcomes. Of the nine major categories of variables that Kulik and Kulik evaluated as alternative predictors of effect size, four were statistically significant influences: the type of application, the duration of instruction, the year of research publication, and the publication status of the research.[34]  Effect sizes were systematically higher for certain types of computer applications, for instruction of shorter duration, for more recently published research, and for published (rather than unpublished) studies.

These findings are somewhat suggestive, since they indicate that (1) some types of CBI may be more effective than others (at least given existing courseware), (2) learning effects may diminish when computers are used for long periods, (3) there may be "novelty" learning effects associated with computer use,[35]  and (4) not all CBI has a demonstrable achievement effect.

As a second example of meta-analysis, effect sizes for specific academic subjects ranged from 0.10 for science (54th percentile) through 0.25 for reading and language (60th percentile) to 0.37 for mathematics (64th percentile). Although it may be tempting to conclude from these metrics that CBI is more effective for mathematics than for other subjects, these subject matter differences in effect size were not statistically significant in the meta-analysis.

When presented as percentile equivalents or as fractions of a standard deviation, gains from computer-assisted instruction appear modest. Another way of interpreting meta-analysis effect size is as grade-equivalent scores, which provide a more concrete sense of impact (Ryan 1991). Glass, McGaw, and Smith (1981) report that in elementary school grade levels, the standard deviation on most achievement tests is the equivalent of one grade level. As a consequence, an effect size of 0.36, or 0.36 standard deviations, represents a gain of about three to four months of instruction beyond what would generally be expected in a 9- to 10-month school year. However, because the Kulik and Kulik effect sizes reported above cover both elementary and secondary schools, they cannot be interpreted as grade-equivalent gains.
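The grade-equivalent conversion is simple arithmetic under the Glass, McGaw, and Smith assumption that one standard deviation equals one grade level; a small sketch (the function name is illustrative):

```python
def grade_equivalent_months(effect_size: float, months_per_year: float = 9.0) -> float:
    """If one standard deviation on an elementary achievement test equals
    one grade level (Glass, McGaw, and Smith 1981), an effect size in
    standard-deviation units converts directly to months of instruction."""
    return effect_size * months_per_year

# A 0.36 effect size over a 9- to 10-month school year:
print(round(grade_equivalent_months(0.36, 9), 2))   # 3.24 months
print(round(grade_equivalent_months(0.36, 10), 2))  # 3.6 months
```

The same arithmetic gives roughly half a school year (about 4.8 months) for the 0.53 effect size reported later for well-trained teachers.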

Ryan's (1991) meta-analysis of the effects of microcomputers on kindergarten through sixth grade achievement can be reported on a grade-equivalent basis. Her meta-analysis of 40 studies[36]  conducted between 1984 and 1989 found an overall effect size of 0.309. Thus, the average K-6 student using a microcomputer as an instructional tool performed in the 62nd percentile on tests, compared to the 50th percentile for the average K-6 student who did not use a microcomputer. Ryan explains that

"the effect size of .309 means that the effect of computer instruction is approximately one-third greater than the effect of control group instruction. In terms of grade-equivalent units, .309 can be interpreted as one third greater than the expected gain in a school year, approximately 3 months additional gain in [grade level]" (p. 171).

Ryan likewise evaluated several sets of variables other than CBI that may have had an impact on effect size. (See text table 8-6.) Of these variables, only the degree of teacher pretraining was statistically significant. In experimental groups where teachers had fewer than 10 hours of computer pretraining, the effect size of CBI was negligible and, in some instances, negative. In groups where teacher pretraining exceeded 10 hours, the effect size was 0.53, equivalent to one-half a school year gain, or 70th percentile performance. Ryan's findings reinforce other studies that identify the crucial role of teacher preparedness in effective CBI (U.S. OTA 1995 and PCAST 1997).

Computers and Alternative Instruction

Computer-based instruction can also be incorporated into enriched learning environments. Isolating the effects of computers in these alternative approaches to education is difficult, but the positive impacts of the full instructional package for several special projects merit note. One is the Higher Order Thinking Skills (HOTS) Program, an intervention program for economically disadvantaged students in the fourth through seventh grades. Students were taken from their traditional classrooms and taught through an innovative curriculum that integrated computer-assisted instruction, drama, and Socratic method. Students in the HOTS Program outperformed other disadvantaged students in a control group on all measures and had double the national average gains on standardized tests in reading and mathematics (Costa and Liebmann 1997). The Buddy Project in Indiana, in which students in some classrooms were given home computers, also reported highly positive results across a variety of skills. Similar results were reported for the Computers Helping Instruction and Learning Development in Florida, an elementary school program that emphasized student empowerment, teacher training and teamwork, and independent learning (ETS 1997). These studies suggest that the use of computers in enriched, nontraditional learning environments might achieve the fundamental changes in student learning that advocates of computer-based instruction desire.

[18] "Distance learning" refers to education that takes place via electronic means, such as satellite television or the Internet. There is no single organization responsible for certifying or monitoring distance education in the United States, so there are no uniform statistics or a centralized directory of distance learning providers.

[19] Note that there are more than 100 major providers of satellite-based instruction.

[20] Data available from

[21] This "digitizing of history"—particularly that of perishable or fragile photographs, artwork, documents, recordings, films, and artifacts—represents a socially invaluable preservation function and allows people to view items that could never be displayed publicly. Note that the Library of Congress Web site had over 42 million hits in April 1997 alone (Memmott 1997).

[22] For example, ensuring adequate pupil-to-computer ratios and Internet connections to the school versus universal classroom deployment of full multimedia computers, Internet connections, and school networks. The McKinsey report details three alternative IT models and estimated costs.

[23] Inquiry-based learning represents active learning on the part of a student rather than the passive assimilation of information "taught" by an instructor. Inquiry-based learning reflects active construction of models for conceptual understanding, the ability to connect knowledge to the world outside of the classroom, self-reflection about one's own learning style, and a cultivated sense of curiosity. See Rubin (1996).

[24] See U.S. Bureau of the Census (1996), table 262; these data are based on the Computers in Education Study conducted by the International Association for the Evaluation of Educational Achievement (see IEA n.d.).

[25] See also U.S. Bureau of the Census (1996), table 261 (based on unpublished data from Market Data Retrieval), and table 262 (based on IEA n.d.).

[26] There were also significant differences in Internet access based on minority enrollment. See chapter 1, figure 1-17 and related discussion, and appendix table 1-25.

[27] The President's Committee of Advisors on Science and Technology has recommended that the primary focus of computer-based instruction be content-oriented rather than skills training. See PCAST (1997).

[28] Detailed reviews of the meta-analytic technique may be found in Hittleman and Simon (1997); Kulik and Kulik (1991); Ryan (1991); McNeil and Nelson (1990); and Glass, McGaw, and Smith (1981).

[29] Studies with small sample size tend to have large statistical variance, while large studies have smaller variance. Meta-analysis controls for variance and provides greater weight to the results of large studies, which tend to be more statistically robust.
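One common way to give larger, lower-variance studies proportionally more weight, consistent with the variance argument in this footnote, is inverse-variance weighting. The sketch below is an illustrative estimator; the footnote does not specify the exact weighting scheme used in the meta-analyses cited.

```python
def weighted_mean_effect(effects: list[float], variances: list[float]) -> float:
    """Inverse-variance weighted mean effect size: each study is weighted by
    the reciprocal of its sampling variance, so large (low-variance) studies
    dominate the pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# A small study (variance 0.04) and a large study (variance 0.01):
# the pooled estimate sits much closer to the large study's effect.
pooled = weighted_mean_effect([0.5, 0.2], [0.04, 0.01])  # 0.26
```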

[30] Niemiec and Walberg (1987) identify 11 such studies; more recent meta-analyses include Kulik and Kulik (1991), Ryan (1991), and McNeil and Nelson (1990).

[31] This is not necessarily problematic. Bibliometric databases in the field of education include dissertations, technical reports, and unpublished conference papers.

[32] For example, two courses teach the same subject matter, but one course uses CBI (the experimental or treatment group) and one course uses conventional instruction (the control group). At the end of the treatment period, students are given the same test and test performance is compared.

[33] Note that students in a conventionally taught course are the control group. Assuming a normal distribution of test scores, the average test score (the mean) of this group represents the 50th percentile: one-half of all students score above the mean, and one-half score below the mean. The z-score that corresponds to the mean of a normal distribution is zero, so the effect of CBI would be zero because it is not used in the control group.

[34] Statistical significance means that the average response (e.g., test score) is distinctly different between the groups in question. Tests of statistical significance are based on probability theory.

[35] The higher effect sizes for studies published before 1970 are intriguing, and may reflect novelty impacts. Such effects result from the greater attention and effort an individual gives to a substantially new technology and not necessarily from any intrinsic contributions of the technology itself.

[36] Ryan also applied a stringent set of selection criteria, including the requirements that studies reflect experimental or quasi-experimental design, that the sample size be at least 40 students (a minimum of 20 each in the treatment and control groups), and that the treatment last eight weeks or longer.
