Do Computers in the Classroom Boost Academic Achievement?

Education Report
June 14, 2000

Kirk Johnson
Former Visiting Fellow

Over the past 20 years, computers and the sharing of information that they facilitate have penetrated nearly every aspect of American life. Indeed, reliance on computers grows every day, from shopping at grocery stores and filing taxes to driving an automobile and communicating with relatives and business associates.

This explosion in technology has intensified efforts to equip every classroom with computers and "wire" every school to the Internet. Between September 1984 and September 1997 alone, the number of computers in America's K-12 schools increased elevenfold, to more than 8 million units.1 Educators have been forced to keep up, and some find themselves teaching general computer skills even as they use computers to teach other subjects.

Few Americans would question the role that computers could play in education. For the United States to maintain its high-technology status in the global economy, it seems fair to expect computers to be given a more integral role. Some educators claim that ready access to computers and increased use of computers in K-12 education have a beneficial effect on educational outcomes. In the same way that computer technology has improved the operation of automobiles, these proponents believe, computers will make the classroom a better environment in which to teach the difficult concepts that lead to higher academic achievement. To these educators, a computer in the classroom may become the deus ex machina of education in the 21st century.

But are classroom computers delivering on this expectation? Does access to a computer or use of a computer in instructing students improve their academic achievement? Answering these questions is especially critical today because politicians are proposing to spend billions of tax dollars on expanding access to computers in schools in order to bridge a so-called digital divide. For example:

  • President Bill Clinton has proposed a $2 billion program to increase access to computers and the Internet in low-income neighborhoods and schools.2

  • Senator Joseph Biden (D-DE) has proposed spending tens of millions of dollars for computer-based instruction.3

  • Vice President Al Gore has made access to computers in the classroom a major policy issue of the 2000 presidential campaign, calling for "[e]very classroom and library [to be] wired to the Information Superhighway."4

  • The President's Panel on Educational Technology has argued that the federal government should spend between $6 billion and $28 billion each year on an ambitious program of computer infrastructure development (both hardware and software), teacher training, and research.5

Such spending would supplement the $1.25 billion in federal money already spent between fiscal year (FY) 1997 and FY 2000 on the Technology Literacy Challenge Fund,6 which provides funding for new computers, software, and teacher training.

Although politicians may be quick to call for government subsidies to increase the number of computers in the classroom, previous research on the effectiveness of computers in improving academic achievement has been inconclusive at best.7 In other words, it is not clear that spending more tax dollars on computers will boost test scores.

To help fill this gap in the research, the author used data from the National Assessment of Educational Progress (NAEP) to determine whether the use of computers in the classroom has direct and positive effects on academic achievement. The analysis showed that:

  • Students who use computers in the classroom at least once each week do not perform better on the NAEP reading test than do those who use computers less than once a week.

An important consideration in an analysis of this issue is teacher training and preparation in the use of computers, since the students of teachers who are not adequately trained to use them in reading instruction may not perform as well on the NAEP reading test as students whose teachers are adequately trained. This report specifically analyzes computer usage in the classrooms of teachers who responded that they are at least moderately well-prepared in the use of computers in reading instruction.

Background

The existing research on how academic achievement is affected by computers in the classroom offers varying conclusions. Some research indicates that computers may aid in achievement. Other research concludes that computers are of questionable effectiveness.

In 1997, Harold Wenglinsky of the Educational Testing Service, which works closely with the National Center for Education Statistics in preparing the NAEP data file, published a major study on computers and academic achievement. Using data from the 1996 National Assessment of Educational Progress math examination, Wenglinsky analyzed student computer use both in class and at home,8 as well as a variety of social and behavioral factors that could explain math achievement. That study generally found positive effects for the technology. Wenglinsky noted, however, that students who used computers predominantly for drill and practice, as opposed to using them in ways that develop higher-order thinking skills, tended to do worse on the NAEP math test.

The results of other studies extolling the benefits of computer-aided instruction are questionable because they overlook the instructor's capabilities. Many early studies of computers in elementary educational settings employed highly trained educational researchers rather than ordinary teachers. Their advanced training and experience may have facilitated the learning process, making the effect of the computers alone difficult to ascertain.9 Those studies suggest that students who use computers in the classroom show at least a modest achievement gain over students who do not. Clearly, the extent of teachers' computer training and their level of preparation in using computers in education will vary and thus affect the level of success of computer-aided instruction.

In recent years, criticism of previous studies on the beneficial effects of computers and the role of computers in the classroom has grown. Todd Oppenheimer, an associate editor at Newsweek Interactive, has noted that each time a new technology has been developed in the United States, whether Thomas Edison's motion picture machine, the portable radio receiver, or some other technological marvel, enthusiasts proclaimed that the invention would revolutionize, and even replace, traditional education in America.10 These claims have never been fully realized, and Oppenheimer is not alone in his criticism.11 Some critics consider computers in the classroom a mere fad, while others assert that because computers are growing in importance to every aspect of society, it is better to expose children early to this evolving technology.12 Otherwise, American students may continue to perform more poorly on standardized tests than do their peers in other countries.13 Clearly, the debate on computers in the classroom is far from settled.

How to Interpret These Findings

This report contains the results of statistical tests that use NAEP data to explain differences in reading test scores. These statistical tests isolate the independent effects of a number of factors on reading scores (such as the education of parents) in order to determine whether at-least-weekly computer use matters to these test scores. The statistical tests (or correlations) cover data on a wide array of school children, as defined by their race, income, and other socioeconomic characteristics. Because the statistical model used here includes these socioeconomic characteristics, the reader can interpret these findings as applicable to each of these groups of students. Thus, the findings about computer use and reading scores apply as much to upper-income as to lower-income students, to blacks as to whites, to girls as to boys, and so forth.

These correlations suggest that there is a statistical relationship between each factor and achievement in reading, but they do not establish that these independent factors cause differences in academic achievement.

The variables in the model come from the NAEP database and do not include everything that might have an effect on academic achievement, such as the methods used to teach reading. These factors may be much more important in general, or for a particular child, than the factors recorded in the NAEP data. Moreover:

  • Some variables, such as participation in the federal free and reduced-price lunch program, are proxies (substitutes) for other unobserved factors. For example, eligibility for the free and reduced-price lunch program is determined by income; only children from low-income families may participate. Although not all low-income children will participate in the free and reduced-price lunch program, many will. Such information may be used, then, to analyze the effect of different characteristics on achievement.

  • Some variables also may be used to determine the effect of some unobservable "third factor." For example, this model does not suggest that poor families have children who do worse on the NAEP because they are poor. Rather, poor families may have some unobservable characteristics or face challenges that make it more difficult for their children to succeed in school. Similarly, the categories of black and Hispanic students cover children whose characteristics other than their race may make it more difficult for them to score well.

  • "Statistically insignificant" means that the effect of the variable/factor is no different from zero effect. For example, if the relationship between computer use at least weekly and academic achievement is statistically insignificant, that means that those students who use computers at least weekly do no better than those who do not.

Characteristics of the NAEP Data

The author used the 1998 NAEP database on reading to analyze the influence of computers on academic achievement. The National Assessment of Educational Progress, first administered in 1969, is an examination that measures academic achievement in a variety of fields, such as reading, writing, mathematics, science, geography, civics, and the arts. Currently, the NAEP is administered to 4th, 8th, and 12th grade students, and the tests for math and reading are given in alternating two-year cycles. In 1998, for example, the NAEP reading test was administered; math was assessed in 1996 and 2000.

The NAEP actually involves two tests: a nationally administered test and a set of state-administered tests. Over 40 states participate in the separate state samples that are used to gauge achievement within individual jurisdictions. For the purposes of this study, only the 1998 national data were used.

The most significant benefit of the NAEP data is that, in addition to test scores in the subject area, the database includes an assortment of background information about the students taking the exam, their main subject-area teacher, and their school administrator. Responses from teachers and school administrators are linked to each student's record, yielding a rich database. The background questions include:

  • TV viewing habits,

  • Computer usage at home and school,

  • Teacher tenure and certification,

  • Socioeconomic status,

  • Basic demographics, and

  • School characteristics.

By incorporating this background information into their analyses of the NAEP data, researchers can better understand the factors that explain the differences in results found among children who take the NAEP tests.

The Heritage Analysis

This analysis considered the effect of computers in the classroom on academic achievement by examining six factors: frequent in-class computer use by trained teachers, race and ethnicity, parents' educational attainment, number of reading materials in the home, free or reduced-price lunch participation, and gender. The effect of each factor can be isolated using regression analysis. The Heritage analysis employs a jackknifed ordinary least squares model14 to estimate the effect of each factor on scores from the NAEP 1998 reading test's nationwide sample of public school children.15
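
The sketch below illustrates the kind of weighted least squares regression the model rests on, including the construction of the key computer-use-by-trained-teacher indicator. The data file, column names, and use of statsmodels are hypothetical stand-ins; the actual analysis also averaged five plausible values and jackknifed the standard errors, as described in Appendix A.

```python
# A minimal sketch of the core regression, assuming a hypothetical
# extract of the 1998 NAEP reading file. Column names are invented for
# illustration; the real analysis adds plausible-value averaging and
# jackknifed standard errors (see Appendix A).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("naep_1998_reading.csv")  # hypothetical file name

# The key regressor: weekly computer use in class AND a teacher at
# least moderately well-prepared to use computers in reading instruction.
df["computer_by_trained"] = (
    (df["computer_weekly"] == 1) & (df["teacher_prepared"] == 1)
).astype(int)

factors = ["computer_by_trained", "black", "hispanic", "other_nonwhite",
           "parent_college", "reading_materials", "free_lunch", "male"]

X = sm.add_constant(df[factors])
y = df["reading_pv1"]  # one of five plausible values (see Appendix A)

# Student sampling weights account for NAEP's oversampling of some groups.
fit = sm.WLS(y, X, weights=df["sample_weight"]).fit()
print(fit.summary())
```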

Independent Variables

  1. Frequent In-Class Computer Use by Trained Teachers
    The effect of computers in the classroom on achievement can be adequately assessed only when two conditions are met. First, computers must be available and accessible for use by both teachers and students. Second, the teacher using the computer for instructional purposes must be versed in the operation of the hardware and subject-matter software. The quality of computer-assisted instruction cannot be determined simply from the number of computers available. If teachers are not prepared to use computer hardware and software specific to the academic subject matter (in this case, reading), then even if computers are present, their students may actually learn less because of unqualified instruction. Sherry Turkle, a professor of the sociology of science at the Massachusetts Institute of Technology, notes that the possibilities of using a computer poorly "so outweigh the chance of using it well, [that] it makes people like us, who are fundamentally optimistic about computers, very reticent."16 It is critical, then, that any model that purports to analyze computers in the classroom and student achievement include a variable to control for teacher preparation.

The interaction of computer availability and teacher preparation is critical to understanding the effectiveness of computers in the classroom. If the analytical model did not control for regularity of use, the relative effectiveness attributable to the computers would be questionable. It is impossible to assess accurately the effectiveness of any teaching tool if the tool is not used often enough to have some pedagogical effect. Further, if teachers are not qualified to teach with computers, the effect of the availability of computers alone might generate biased achievement statistics that would be limited in their usefulness. Thus, the Heritage model considers both of these factors to estimate the true effect of computer-aided instruction on academic achievement.

  2. Race and Ethnicity
    Many studies and reports have demonstrated that over time, African-American and Hispanic students tend to perform more poorly on standardized tests than do white students (although the gap has generally narrowed over the past 25 years).17 There are a number of possible explanations for this trend.18 Because strong differences in academic achievement exist among the races, the variables of race and ethnicity are included in the analysis.

  3. Parents' Education
    Many researchers have noted that the educational attainment of a child's parents is a good predictor of that child's academic achievement. Parents who are college educated, for instance, may be better equipped to help their children with homework and with understanding concepts than are those who have less than a high school education, other things being equal. Because the education level of one parent is often highly correlated with that of the other, only a single variable is included in the analysis.

  4. Number of Reading Materials in the Home
    The presence of books, magazines, encyclopedias, and newspapers generally indicates a dedication to learning in the household. Researchers have determined that these reading materials are important aspects of the home environment.19 The analysis thus includes a variable controlling for the number of these four types of reading materials found at home.

  5. Free/Reduced-Price Lunch Participation
    Income is often a key predictor of academic achievement because low-income families seldom have the financial resources to purchase extra study materials or tutorial classes to help their children perform better in school. Although the NAEP does not collect data on household income, it does collect data on participation in the federal free and reduced-price lunch program that are used here.20

  6. Gender
    Empirical research has suggested that girls tend to perform better in reading and writing while boys perform better in the more analytical subjects of math and science.21 Many authors have expounded on this idea,22 yet the data on male-female achievement gaps often lead researchers to inconsistent observations. For example, in 1998, young men scored higher than young women on both the verbal and quantitative sections of the Scholastic Assessment Test (SAT). Some writers noted that this may be because of a fundamental bias against females in America's educational system.23 Another explanation, however, is that the test results reflect a selection bias in which more "at-risk" females opt to take the SAT relative to males.24 To account for this difference, the analysis includes a variable for gender.

  7. Omitted Variables
    Previous research25 has included more family background variables in the model specification. In the 1998 NAEP database, the only information available on children's parents is their educational attainment. The NAEP does not ask whether the child lives with both parents (or parental figures), one parent, or no parents (i.e., in a group home). Future administrations of the NAEP test should include this type of question since a great deal of research has found that having both parents in the home can improve a child's academic achievement.

Results of the Analysis

The six factors were entered into a statistical model26 that was then applied to the NAEP's 1998 nationwide sample of public school children who took the reading test.27 Chart 1 and Chart 2 show the percent change in 4th and 8th grade reading scores attributable to the factors in the model, compared with a base case.28 Here, the base case is defined as a child with the following characteristics:

  • White;

  • Female;

  • Non-poor (that is, not participating in the free and reduced-price lunch program);

  • Parents who did not attend college;

  • Has two out of the four possible reading materials in the home; and

  • Does not have weekly computer instruction by a teacher who is at least moderately well-prepared in using computers for reading education.

[Chart 1 and Chart 2: Percent change in 4th and 8th grade NAEP reading scores attributable to each factor, relative to the base case]
A white female child who is not poor, whose parents did not attend college, who has two out of the four possible reading materials in the home, and who does not have weekly computer instruction by a prepared teacher would score 233.3 points on the 1998 NAEP (out of a maximum of 500) in the 4th grade or 258.6 points in the 8th grade. If she were poor, black, or Hispanic, her score would drop, on average; if her home had more than two reading materials, or if her parents had taken any college-level courses, her score would increase.
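
As a worked illustration of how such predictions are read off the model, the sketch below starts from the 4th grade base score reported above and adds coefficient values for each deviation from the base case. The coefficient values are placeholders for illustration only, not the report's estimates.

```python
# A worked illustration of reading predictions off the model. The base
# score (233.3 for 4th grade) comes from the report; the coefficient
# values below are placeholders, not the report's estimates.
BASE_4TH_GRADE = 233.3

coefs = {                            # hypothetical effects, in NAEP points
    "black": -20.0,
    "hispanic": -18.0,
    "free_lunch": -10.0,
    "parent_college": 8.0,
    "extra_reading_material": 6.0,   # per material beyond the base two
}

def predicted_score(base, characteristics):
    """Add each deviation from the base case to the base score."""
    return base + sum(coefs[c] for c in characteristics)

score = predicted_score(BASE_4TH_GRADE, ["parent_college"])
pct = 100 * (score - BASE_4TH_GRADE) / BASE_4TH_GRADE
print(f"Predicted score: {score:.1f} ({pct:+.1f}% vs. base case)")
```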

For both 4th and 8th grades, the variable for computer instruction and teacher preparation is not statistically significant, meaning that its effect is not statistically different from zero: the variable shows no effect on students' academic achievement.

Thus, the Heritage model predicts that students who receive at least weekly computer instruction from well-prepared teachers do not perform any better on the NAEP reading test than do students who have less or no computer instruction.29 These findings are consistent for both 4th and 8th graders. In fact, the point estimate is negative: both Chart 1 and Chart 2 show a negative percent change in the NAEP reading score for the computer variable, so if the variable were significant, it would indicate that students frequently taught with computers do slightly worse on the NAEP than those who are not. Such a result might indicate that children are not learning the critical higher-order thinking skills that achievement exams like the NAEP aim to test. Further, these results are consistent with Wenglinsky's analysis of 1996 NAEP math data.30

At the same time, variables such as race, income, home environment, and parents' college attendance are all significant factors in explaining differences in reading test scores.

Both 4th and 8th grade girls score slightly higher than boys on the NAEP reading exam, a fact that bolsters recent evidence on gender differences in academic achievement. Christina Hoff Sommers, the W. H. Brady Fellow at the American Enterprise Institute, notes that girls on average "get better grades, are more engaged academically, and are now the majority sex in higher education."31 The results here support the contention that schools are not shortchanging girls.32



Conclusion

As this analysis shows, the use of computers in the classroom may not play a significant role in explaining reading ability. Thus, dedicating large amounts of federal tax dollars to the purchase of computer hardware, software, and teacher training could crowd out other worthwhile education expenditures on, for example, new textbooks, music programs, vocational education, and the arts. This report does not suggest that there is no place for computers in the classroom. It does, however, demonstrate that computers may not have the effect on academic achievement in reading that some might expect, even when they are used by well-trained instructors.

Kirk A. Johnson, Ph.D., is a Policy Analyst in the Center for Data Analysis at The Heritage Foundation.


Appendix A: Results of the Statistical Models

Table 1 reports the results of the Heritage analysis of data from the National Assessment of Educational Progress (NAEP) on reading in the 4th and 8th grades. As the table shows, the variables in the Heritage model are statistically significant,33 with the exception of the "other non-white" race variable in the 8th grade analysis and the computer variable analyzed in this report.34

[Table 1: Jackknifed regression results for the 4th and 8th grade NAEP reading models]
In analyzing the effects of computers in the classroom, there are two statistical issues to consider. First, the NAEP exam is a long test and therefore is not administered in its entirety to all children. Rather, different parts are given to different children. Certain students will do better on certain portions of the test than others. Consequently, a "true" score must be estimated, or imputed, from the incomplete information. The NAEP estimates five plausible composite reading scores and recommends that researchers use all five in any analysis. The Heritage model used in this analysis follows the guidelines specified by the Educational Testing Service (which works closely with the National Center for Education Statistics in developing the file) for incorporating all five reading scores into the analysis.35
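
A sketch of that plausible-values step, before turning to the second issue: fit the same weighted model once per plausible score and average the coefficients. The column names reading_pv1 through reading_pv5 are assumptions for illustration.

```python
# A sketch of the plausible-values procedure, assuming hypothetical
# column names reading_pv1 .. reading_pv5 for the five plausible scores.
import pandas as pd
import statsmodels.api as sm

def plausible_value_coefficients(df, factors):
    """Fit the same weighted model once per plausible value and
    average the coefficients, per the ETS guidelines (endnote 35)."""
    X = sm.add_constant(df[factors])
    fits = [
        sm.WLS(df[f"reading_pv{i}"], X, weights=df["sample_weight"]).fit()
        for i in range(1, 6)
    ]
    # The reported coefficients are the mean across the five fits.
    return pd.concat([f.params for f in fits], axis=1).mean(axis=1)
```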

Second, the NAEP utilizes a complex sample design that oversamples children with certain characteristics.36 Each child is assigned a unique weight calculated from the probability of being selected from the population at large (in this case, from the U.S. population of 4th or 8th graders in public schools). The NAEP's sample design therefore requires a complex modeling technique, which the Heritage model employs.37
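
The replicate-weight step might look like the following sketch: refit the model under each of the 62 replicate weights described in endnote 37 and accumulate squared deviations from the full-sample coefficients to estimate the sampling variance. The column names repwt1 through repwt62 are assumptions for illustration.

```python
# A sketch of jackknife variance estimation with NAEP's 62 replicate
# weights (endnote 37). Column names repwt1 .. repwt62 are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def jackknife_std_errors(df, factors, y_col):
    X = sm.add_constant(df[factors])
    full = sm.WLS(df[y_col], X, weights=df["sample_weight"]).fit().params

    # Refit under each replicate weight and accumulate squared
    # deviations from the full-sample coefficients (paired-jackknife
    # convention, as used for NAEP).
    var = np.zeros(len(full))
    for r in range(1, 63):
        rep = sm.WLS(df[y_col], X, weights=df[f"repwt{r}"]).fit().params
        var += (rep.values - full.values) ** 2

    return full, pd.Series(np.sqrt(var), index=full.index)
```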

Endnotes

1. U.S. Bureau of the Census, Statistical Abstract of the United States, 1998 (Washington, D.C.: U.S. Government Printing Office, 1998), Table No. 281, p. 179.

2. See CNN.com, "President Clinton Announces Initiative to `Help Bridge the Digital Divide,'" at http://www.cnn.com/2000/ALLPOLITICS/stories/02/02/clinton.internet/index.html (February 2, 2000).

3. See CNN.com, "New Bill Would Bring Thousands of Computers to Youths" (February 15, 2000).

4. See Gore 2000, "Revolutionizing American Education in the 21st Century," at http://www.algore2000.com/agenda/education_agenda.html.

5. President's Committee of Advisors on Science and Technology, Panel on Educational Technology, Report to the President on the Use of Technology to Strengthen K-12 Education in the United States (Washington, D.C.: U.S. Government Printing Office, March 1997).

6. U.S. Department of Education, "Total Appropriation for ESEA, 1990-2001," unpublished table, available from the author upon request.

7. See the discussion of this prior research in the following section.

8. Harold Wenglinsky, Does It Compute? The Relationship Between Educational Technology and Student Achievement in Mathematics (Princeton, N.J.: Educational Testing Service, 1997).

9. See, for example, Shousan Wang and Phillip J. Sleeman, "Computer-Assisted Instruction Effectiveness...A Brief Review of the Research," International Journal of Instructional Media, Vol. 20 (1993), pp. 333-348, and Claire M. Fletcher-Flinn and Breon Gravatt, "The Efficacy of Computer Assisted Instruction (CAI): A Meta-Analysis," Journal of Educational Computing Research, Vol. 12 (1995), pp. 219-242.

10. Todd Oppenheimer, "The Computer Delusion," The Atlantic Monthly, Vol. 280 (July 1997), pp. 45-62.

11. See Lawrence Baines, "Future Schlock: Using Fabricated Data and Politically Correct Platitudes in the Name of Education Reform," Phi Delta Kappan, Vol. 78 (1997), pp. 492-498.

12. A full critique of Oppenheimer and others is available in Thomas C. Reeves, "`Future Schlock,' `The Computer Delusion' and `The End of Education': Responding to Critics of Educational Technology," Educational Technology, Vol. 38 (September/October 1998), pp. 49-53.

13. American students performed near the bottom on the Third International Mathematics and Science Study (TIMSS). See William H. Schmidt et al., Facing the Consequences: Using TIMSS for a Closer Look at U.S. Mathematics and Science Education (Dordrecht, the Netherlands: Kluwer, 1999).

14. Ordinary least squares is a general statistical regression technique that is often used by researchers. See Michael Lewis-Beck, Applied Regression: An Introduction (Beverly Hills, Cal.: Sage Publications, 1980). From Sage Publications' Quantitative Applications in the Social Sciences, Series No. 07-022. A jackknife is a complex resampling technique that is designed to accurately estimate statistical significance from data in surveys such as the NAEP that employ a complex sampling methodology. See Appendix A for the results and a more complete discussion of the jackknifed ordinary least squares model.

15. This analysis excludes private school children.

16. Oppenheimer, "The Computer Delusion," p. 46.

17. For an analysis of the long-term achievement gap, see U.S. Department of Education, Report in Brief: NAEP 1996 Trends in Academic Progress (Washington, D.C.: U.S. Government Printing Office, 1997), Figure 2, p. 14.

18. For a recent compilation on this subject, see Christopher Jencks and Meredith Phillips, eds., The Black-White Test Score Gap (Washington, D.C.: Brookings Institution Press, 1998).

19. Such opinions have been prevalent for years. See, for example, James S. Coleman, Thomas Hoffer, and Sally Kilgore, High School Achievement (New York: Basic Books, 1982).

20. Since eligibility for the free and reduced-price lunch program is determined by household income relative to the official poverty line, this variable provides a good proxy for income.

21. U.S. Department of Education, NAEP 1994 Trends in Academic Progress (Washington, D.C.: U.S. Government Printing Office, 1996).

22. For a brief discussion of this point of view, see Thomas Hancock et al., "Gender and Developmental Differences in the Academic Study Behaviors of Elementary School Children," Journal of Experimental Education, Vol. 65 (1996), pp. 18-39.

23. See Myra Sadker and David Sadker, Failing at Fairness: How America's Schools Cheat Girls (New York: Simon & Schuster, 1994).

24. The College Board, 1999 College Bound Seniors (New York: The College Board, 1999).

25. See, for example, Kirk A. Johnson, "Comparing Math Scores of Black Students in D.C.'s Public and Catholic Schools," Heritage Foundation Center for Data Analysis Report No. CDA99-08, October 7, 1999.

26. See Appendix A for the results and a more complete discussion of the jackknifed ordinary least squares model.

27. This analysis excludes private school children.

28. Specifying a base case from which to assess the results of a regression model is fairly arbitrary. Changing the base model case does not alter the interpretation of the results.

29. See Appendix A for the results of these significance tests.

30. Wenglinsky, Does It Compute?

31. Christina Hoff Sommers, "The War Against Boys," The Atlantic Monthly, Vol. 285 (May 2000), p. 60.

32. See, for example, American Association of University Women, ed., Gender Gaps: Where Schools Still Fail Our Children (New York: Marlowe & Co., 1998).

33. Usually pegged at a 5 percent or 10 percent level. See Lewis-Beck, Applied Regression: An Introduction.

34. This means that these variables have no statistically discernable difference between the coefficient value and zero, so there is no effect.

35. From a multivariate regression perspective, the model below must be replicated five times using each of the plausible values individually and then averaging the resulting coefficients to yield the final model results. In technical terms, this process corrects for measurement error in the reading score variable, since the test administrators do not actually observe the test score from taking the exam in its entirety.

36. For example, the NAEP typically oversamples for race and geography of school attended (e.g., urban, rural).

37. A procedure called a jackknife must be employed to correctly assess the variance of each variable's coefficient, and the NAEP database has a series of 62 "replicate weights" to aid in this task. These 62 jackknife replicates must be applied and the variances of each coefficient averaged for each of the five plausible test score models above (yielding a total of 315 models compiled for the purpose of this research). The WesVar Complex Samples software (produced by SPSS, Inc.) did much of this replication work. Using the jackknife results with the five plausible values models allows for a variance correction mechanism. The purpose of the jackknife is to estimate a true sampling error. Correcting for the two types of error (measurement and sampling) allows for the most accurate estimates possible. See Bradley Efron, The Jackknife, the Bootstrap, and Other Resampling Plans (Philadelphia: Society for Industrial and Applied Mathematics, 1982), and Jun Shao and Dongsheng Tu, The Jackknife and Bootstrap (New York: Springer-Verlag, 1995), for a more complete discussion of how this jackknife technique works.
