Over the past 20 years, computers and the sharing of information that they facilitate have penetrated nearly every aspect of American life. Indeed, reliance on computers grows every day, from shopping at grocery stores and filing taxes to driving an automobile and communicating with relatives and business associates.
This explosion in technology has intensified efforts to equip every classroom with computers and "wire" every school to the Internet. Between September 1984 and September 1997 alone, the number of computers in America's K-12 schools increased elevenfold to more than 8 million units.1 Educators have been forced to keep up, and some find themselves teaching general computer skills even as they use computers to teach other subjects.
Few Americans would question the role that computers could play in education. For the United States to maintain its high-technology status in the global economy, it seems fair to expect computers to be given a more integral role. Some educators claim that ready access to computers and increased use of computers in K-12 education have a beneficial effect on educational outcomes. In the same way that computer technology has improved the operation of automobiles, these proponents believe computers will make the classroom a better environment in which to teach the difficult concepts that lead to higher academic achievement. To these educators, a computer in the classroom may become the deus ex machina of education in the 21st century.
But are classroom computers delivering on this expectation? Does access to a computer or use of a computer in instructing students improve their academic achievement? Answering these questions is especially critical today because politicians are proposing to spend billions of tax dollars on expanding access to computers in schools in order to bridge a so-called digital divide. For example:
- President Bill Clinton has proposed a $2 billion program to increase access to computers and the Internet in low-income neighborhoods and schools.2
- Senator Joseph Biden (D-DE) has proposed spending tens of millions of dollars for computer-based instruction.3
- Vice President Al Gore has made access to computers in the classroom a major policy issue of the 2000 presidential campaign, calling for "[e]very classroom and library [to be] wired to the Information Superhighway."4
- The President's Panel on Educational Technology has argued that the federal government should spend between $6 billion and $28 billion each year on an ambitious program of computer infrastructure development (both hardware and software), teacher training, and research.5
Such spending would supplement the $1.25 billion in federal money already spent between fiscal year (FY) 1997 and FY 2000 on the Technology Literacy Challenge Fund,6 which provides funding for new computers, software, and teacher training.
Although politicians may be quick to call for government subsidies to increase the number of computers in the classroom, previous research on the effectiveness of computers in improving academic achievement has been inconclusive at best.7 In other words, it is not clear that spending more tax dollars on computers will boost test scores.
To help fill this gap in the research, the author used data from the National Assessment of Educational Progress (NAEP) to determine whether the use of computers in the classroom has direct and positive effects on academic achievement. The analysis showed that:
- Students who use computers in the classroom at least once each week do not perform better on the NAEP reading test than do those who use computers less than once a week.
An important consideration in an analysis of this issue is teacher training and preparation in the use of computers, since the students of teachers who are not adequately trained to use them in reading instruction may not perform as well on the NAEP reading test as students whose teachers are adequately trained. This report specifically analyzes computer usage in the classrooms of teachers who responded that they are at least moderately well-prepared in the use of computers in reading instruction.
Background
The existing research on how academic achievement is affected by computers in the classroom offers varying conclusions. Some research indicates that computers may aid in achievement. Other research concludes that computers are of questionable effectiveness.
In 1997, Harold Wenglinsky of the Educational Testing Service, which works closely with the National Center for Education Statistics in preparing the NAEP data file, published a major study on computers and academic achievement. Using data from the 1996 National Assessment of Educational Progress math examination, Wenglinsky analyzed student computer use both in class and at home,8 as well as a variety of social and behavioral factors that could explain math achievement. That study generally found a positive relationship between computer use and achievement. Wenglinsky noted, however, that students who used computers predominantly for drill and practice, as opposed to using them in ways that develop higher-order thinking skills, tended to do worse on the NAEP math test.
The results of other studies extolling the benefits of computer-aided instruction are questionable because they overlook the instructor's capabilities. Many early studies of computers in elementary educational settings employed highly trained educational researchers rather than ordinary teachers; the researchers' advanced training and experience may have facilitated the learning process, making the effect of the computers alone difficult to ascertain.9 Those studies suggest that students who use computers in the classroom show at least a modest achievement gain over students who do not use computers. Clearly, the extent of teachers' computer training and their level of preparation in using computers in education will vary and thus affect the level of success of computer-aided instruction.
In recent years, criticism of previous studies on the beneficial effects of computers and the role of computers in the classroom has grown. Todd Oppenheimer, an associate editor at Newsweek Interactive, has noted that each time a new technology has been developed in the United States, whether it was Thomas Edison's motion picture machine, the portable radio receiver, or some other technological marvel, enthusiasts purported that these inventions would replace and revolutionize education in America.10 These claims have never been fully realized, and Oppenheimer is not alone in his criticism.11 Some critics consider computers in the classroom a mere fad, while others assert that because computers are growing in their importance to every aspect of society, it is better to expose children early to this evolving technology.12 Otherwise, American students may continue to perform more poorly on standardized tests than do their peers in other countries.13 Clearly, the debate on computers in the classroom is far from settled.
How to Interpret These Findings
This report contains the results of statistical tests that use NAEP data to explain differences in reading test scores. These statistical tests isolate the independent effects of a number of factors on reading scores (such as the education of parents) in order to determine whether computer use at least weekly matters to these test scores. The statistical tests (or correlations) cover data on a wide array of school children, as defined by their race, income, and other socioeconomic characteristics. Because the statistical model used here includes these socioeconomic characteristics, the reader can interpret these findings as applicable to each of these groups of students. Thus, the findings about computer use and reading scores apply as much to upper-income as to lower-income students, to blacks as to whites, to girls as to boys, and so forth. These correlations suggest that there is a statistical relationship between the factor and achievement in reading, but they do not suggest that these independent factors cause differences in academic achievement. The variables in the model came from the NAEP database and do not include everything that might have an effect on academic achievement, such as the methods used to teach reading. These factors may be much more important in general, or for a particular child, than the factors recorded in the NAEP data.
Characteristics of the NAEP Data
The author used the 1998 NAEP database on reading to analyze the influence of computers on academic achievement. The National Assessment of Educational Progress, first administered in 1969, is an examination that measures academic achievement in a variety of fields, such as reading, writing, mathematics, science, geography, civics, and the arts. Currently, the NAEP is administered to 4th, 8th, and 12th grade students, and the math and reading tests are given in alternating years. In 1998, for example, the reading test was administered; math was assessed in 1996 and 2000.
The NAEP actually involves two tests: a nationally administered test and the state-administered tests. Over 40 states participate in the separate state samples that are used to gauge achievement within individual jurisdictions. For the purposes of this study, only the 1998 national data were used.
The most significant benefit of using the NAEP data is that, in addition to test scores in the subject area, it includes an assortment of background information on the students taking the exam, their main subject-area teacher, and their school administrator. Responses from the teachers and school administrators are linked to each student's information, which yields a rich database.
By incorporating this background information into their analyses of NAEP data, researchers can better understand the factors that explain the differences in results found among children who take the NAEP tests.
The Heritage Analysis
This analysis considered the effect of computers in the classroom on academic achievement by analyzing six factors: frequent in-class computer use by trained teachers, race and ethnicity, parents' educational attainment, number of reading materials in the home, free or reduced-price lunch participation, and gender. The effect of each factor can be isolated using regression analysis. The Heritage model employs a jackknifed ordinary least squares model14 and examines the effects of each factor on the NAEP 1998 reading test's nationwide sample of public school children.15
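The regression setup described above can be illustrated with a minimal sketch. Everything below is synthetic and hypothetical: the variable names, effect sizes, and data are invented for illustration and are not the report's NAEP variables or estimates, and the jackknife weighting step is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic illustration only -- one row per hypothetical student.
computer_use = rng.integers(0, 2, n)   # weekly use with a prepared teacher
black        = rng.integers(0, 2, n)
hispanic     = rng.integers(0, 2, n)
parent_coll  = rng.integers(0, 2, n)   # a parent attended college
materials    = rng.integers(0, 5, n)   # 0-4 reading materials at home
free_lunch   = rng.integers(0, 2, n)   # free/reduced-price lunch
female       = rng.integers(0, 2, n)

# Fabricated "true" effects, chosen only to make the example run;
# note the computer-use effect is set to zero here.
score = (230 - 15*black - 12*hispanic + 8*parent_coll + 3*materials
         - 10*free_lunch + 4*female + 0*computer_use
         + rng.normal(0, 20, n))

# Ordinary least squares: regress scores on the six factors.
X = np.column_stack([np.ones(n), computer_use, black, hispanic,
                     parent_coll, materials, free_lunch, female])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(dict(zip(["const", "computer", "black", "hispanic",
                "parents_college", "materials", "lunch", "female"],
               beta.round(2))))
```

With a true computer-use effect of zero, the estimated coefficient on that variable comes out close to zero, which is the pattern the report describes for the actual NAEP data.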
Independent Variables
- Frequent In-Class Computer Use by Trained Teachers
The effect of computers in the classroom on achievement can be adequately assessed only when two conditions are met. First, computers must be available and accessible for use by both teachers and students. Second, the teacher using the computer for instructional purposes must be versed in the operation of the hardware and subject-matter software. The quality of computer-assisted instruction cannot be determined simply from the number of computers available. If teachers are not prepared to use computer hardware and software specific to the academic subject matter (in this case, reading), then even if there are computers present, their students may actually learn less because of unqualified instruction. Sherry Turkle, a professor of the sociology of science at the Massachusetts Institute of Technology, notes that the possibilities of using a computer poorly "so outweigh the chance of using it well, [that] it makes people like us, who are fundamentally optimistic about computers, very reticent."16 It is critical, then, that any model that purports to analyze computers in the classroom and student achievement include a variable to control for teacher preparation.
The interaction of computer availability and teacher preparation is critical to understanding the effectiveness of computers in the classroom. If the analytical model did not control for regularity of use, the relative effectiveness attributable to the computers would be questionable. It is impossible to assess accurately the effectiveness of any teaching tool if the tool is not used often enough to have some pedagogical effect. Further, if teachers are not qualified to teach with computers, the effect of the availability of computers alone might generate biased achievement statistics that would be limited in their usefulness. Thus, the Heritage model considers both of these factors to estimate the true effect of computer-aided instruction on academic achievement.
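The two-condition variable described above amounts to a simple indicator. A hypothetical sketch follows; the function name and response categories are invented stand-ins, not the actual NAEP questionnaire codes:

```python
def computer_instruction_flag(uses_computer_weekly: bool,
                              teacher_preparation: str) -> int:
    """Return 1 only when a student receives at least weekly computer
    instruction from a teacher who is at least moderately well-prepared
    to use computers in reading instruction; otherwise 0.

    Hypothetical helper: the response strings are invented stand-ins.
    """
    prepared = teacher_preparation in ("moderately well-prepared",
                                       "very well-prepared")
    return int(uses_computer_weekly and prepared)

print(computer_instruction_flag(True, "very well-prepared"))   # 1
print(computer_instruction_flag(True, "not well-prepared"))    # 0
print(computer_instruction_flag(False, "very well-prepared"))  # 0
```

Only students for whom both conditions hold are coded 1, so the regression coefficient on this variable measures the joint effect of regular computer use and adequate teacher preparation, not availability alone.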
- Race and Ethnicity
Many studies and reports have demonstrated that over time, African-American and Hispanic students tend to perform more poorly on standardized tests than do white students (although the gap has generally narrowed over the past 25 years).17 There are a number of possible explanations for this trend.18 Because strong differences in academic achievement exist among the races, the variables of race and ethnicity are included in the analysis.
- Parents' Education
Many researchers have noted that the educational attainment of a child's parents is a good predictor of the child's academic achievement. Parents who, for instance, are college educated could be better equipped to help their children with homework and difficult concepts than are those who have less than a high school education, other things being equal. Because the education level of one parent is often highly correlated with that of the other, only a single variable is included in the analysis.
- Number of Reading Materials in the Home
The presence of books, magazines, encyclopedias, and newspapers generally indicates a dedication to learning in the household. Researchers have determined that these reading materials are important aspects of the home environment.19 The analysis thus includes a variable controlling for the number of these four types of reading materials found at home.
- Free/Reduced-Price Lunch Participation
Income is often a key predictor of academic achievement because low-income families seldom have the financial resources to purchase extra study materials or tutorial classes to help their children perform better in school. Although the NAEP does not collect data on household income, it does collect data on participation in the federal free and reduced-price lunch program, which are used here.20
- Gender
Empirical research has suggested that girls tend to perform better in reading and writing while boys perform better in the more analytical subjects of math and science.21 Many authors have expounded on this idea,22 yet the data on male-female achievement gaps can lead researchers to inconsistent observations. For example, in 1998, young men scored higher than young women on both the verbal and quantitative sections of the Scholastic Assessment Test (SAT). Some writers noted that this may reflect a fundamental bias against females in America's educational system.23 Another explanation, however, is that the test results reflect a selection bias in which more "at-risk" females opt to take the SAT relative to males.24 In order to account for this difference, the analysis includes a variable for gender.
- Omitted Variables
Previous research25 has included more family background variables in the model specification. In the 1998 NAEP database, the only information available on children's parents is their educational attainment. The NAEP does not ask whether the child lives with both parents (or parental figures), one parent, or no parents (i.e., in a group home). Future administrations of the NAEP test should include this type of question since a great deal of research has found that having both parents in the home can improve a child's academic achievement.
Results of the Analysis
The six factors were entered into a statistical model26 that was then applied to the NAEP's 1998 nationwide sample of public school children who took the reading test.27 Chart 1 and Chart 2 show the percent change in 4th and 8th grade reading scores attributable to the factors in the model, compared with a base case.28 Here, the base case is defined as a child with the following characteristics:
- Non-poor (that is, not participating in the free and reduced-price lunch program);
- Has two out of the four possible reading materials in the home; and
- Did not have weekly computer instruction by a teacher who is at least moderately well-prepared in using computers for reading education.
A white female child who is not poor, whose parents did not attend college, who has two out of the four possible reading materials in the home, and who does not have weekly computer instruction by a prepared teacher would score 233.3 points on the 1998 NAEP (out of a maximum of 500) in the 4th grade or 258.6 points in the 8th grade. If she were poor, black, or Hispanic, her score would drop, on average; if her home had more than two reading materials, or if her parents had taken any college-level courses, her score would increase.
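The base-case arithmetic works like a simple prediction from the regression: start from the base-case score and adjust by the coefficient for each characteristic that differs from the base case. A sketch using the 4th grade base-case score quoted in the text; the 4-point college coefficient is a made-up illustration, not an estimate from the report:

```python
# Base-case 4th grade score taken from the report's text.
base_score_4th = 233.3

# Hypothetical coefficient (in points) for parents with some college;
# illustrative only, not the report's estimate.
college_effect = 4.0

predicted = base_score_4th + college_effect
pct_change = 100 * college_effect / base_score_4th  # percent change vs. base
print(f"predicted: {predicted:.1f}  percent change: {pct_change:+.2f}%")
```

The charts report exactly this kind of percent change relative to the base-case score for each factor in the model.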
For both 4th and 8th grades, the variable for computer instruction and teacher preparation is not statistically significant, meaning that its effect is not statistically different from zero. In other words, the computer instruction variable shows no measurable effect on the academic achievement of these students.
Thus, the Heritage model predicts that students with at least weekly computer instruction by well-prepared teachers do not perform any better on the NAEP reading test than do students who have less or no computer instruction.29 These findings are consistent for both 4th and 8th graders. In fact, if the variable were significant, it would indicate that those students who were frequently taught using computers would do slightly worse on the NAEP than those who were not. Both Chart 1 and Chart 2 show that there is a negative percent change in the NAEP reading score for the computer variable. Such a result might indicate that children are not learning critical higher-order thinking skills that achievement exams like the NAEP aim to test. Further, these results are consistent with Wenglinsky's analysis of 1996 NAEP math data.30
At the same time, variables such as race, income, home environment, and parents' college attendance are all significant factors in explaining differences in reading test scores.
Both 4th and 8th grade girls score slightly higher than do boys on the NAEP reading exam, a fact that bolsters recent evidence on gender differences in academic achievement. American Enterprise Institute W. H. Brady Fellow Christina Hoff Sommers notes that girls on average "get better grades, are more engaged academically, and are now the majority sex in higher education."31 The results here support the contention that schools are not shortchanging girls.32
Conclusion
As this analysis shows, the use of computers in the classroom may not play a significant role in explaining reading ability. Thus, dedicating large amounts of federal tax dollars to the purchase of computer hardware, software, and teacher training could crowd out other worthwhile education expenditures on, for example, new textbooks, music programs, vocational education, and the arts. This report does not suggest that there is no place for computers in the classroom. It does, however, demonstrate that computers may not have the effect on academic achievement in reading that some might expect, even when they are used by well-trained instructors.
Kirk A. Johnson, Ph.D. is a Policy Analyst in the Center for Data Analysis at The Heritage Foundation.
Appendix A: Results of the Statistical Models
Table 1 reports the results of the Heritage analysis of data from the National Assessment of Educational Progress (NAEP) on reading in the 4th and 8th grades. As shown in this table, the variables in the Heritage model are statistically significant,33 with the exception of the "other non-white communities" variable in the 8th grade analysis and the computer variable analyzed in this report.34
In analyzing the effects of computers in the classroom, there are two statistical issues to consider. First, the NAEP exam is a long test and therefore is not administered in its entirety to all children. Rather, different parts are given to different children. Certain students will do better on certain portions of the test than others. Consequently, a "true" score must be estimated, or imputed, from the incomplete information. The NAEP estimates five plausible composite reading scores and recommends that researchers use all five in any analysis. The Heritage model used in this analysis follows the guidelines specified by the Educational Testing Service (which works closely with the National Center for Education Statistics in developing the file) for incorporating all five reading scores into the analysis.35
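The five-plausible-value procedure follows the standard multiple-imputation combining rules: estimate the model once per plausible value, average the point estimates, and combine the within- and between-imputation variance. A minimal sketch, with invented numbers standing in for the five per-plausible-value estimates and their sampling variances:

```python
import numpy as np

def combine_plausible_values(estimates, variances):
    """Combine one estimate per plausible value using the standard
    multiple-imputation rules: the point estimate is the mean of the
    per-plausible-value estimates, and the total variance is the mean
    within-imputation variance plus (1 + 1/M) times the between-
    imputation variance, where M is the number of plausible values."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    m = len(estimates)
    point = estimates.mean()
    within = variances.mean()
    between = estimates.var(ddof=1)
    total_var = within + (1 + 1/m) * between
    return point, total_var

# Invented numbers: five estimates of one coefficient and their variances.
est, var = combine_plausible_values([1.0, 1.2, 0.9, 1.1, 1.0],
                                    [0.04, 0.05, 0.04, 0.05, 0.04])
print(est, var)
```

Because the between-imputation term enters the total variance, significance tests based on a single plausible value would overstate precision; using all five, as the ETS guidelines specify, avoids that.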
Second, the NAEP utilizes a complex sample design that oversamples children with certain characteristics.36 Each child is assigned a unique weight calculated from the probability of being selected from the population at large (in this case, from the U.S. population of 4th or 8th graders in public schools). The NAEP's sample design requires a complex modeling technique, which the Heritage model employs.37
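The replicate-weight idea behind the jackknife can be sketched as follows. The pairing scheme and numbers below are toy illustrations, not NAEP's actual replicate structure: each replicate re-estimates the statistic with one sampling unit zeroed out and its pair double-weighted, and the variance is the sum of squared deviations of the replicate estimates from the full-sample estimate.

```python
import numpy as np

def weighted_mean(values, weights):
    """Weighted mean of scores under a given set of sampling weights."""
    values = np.asarray(values, float)
    weights = np.asarray(weights, float)
    return float((values * weights).sum() / weights.sum())

def jackknife_variance(full_estimate, replicate_estimates):
    """Paired-jackknife variance: sum of squared deviations of the
    replicate estimates from the full-sample estimate."""
    reps = np.asarray(replicate_estimates, float)
    return float(((reps - full_estimate) ** 2).sum())

# Toy data: four students with equal base weights.
scores  = [210.0, 250.0, 230.0, 240.0]
weights = [1.0, 1.0, 1.0, 1.0]
full = weighted_mean(scores, weights)

# Toy replicate weights: drop one unit, double-weight its pair.
rep_weights = [[0, 2, 1, 1], [2, 0, 1, 1], [1, 1, 0, 2], [1, 1, 2, 0]]
reps = [weighted_mean(scores, w) for w in rep_weights]
print(full, jackknife_variance(full, reps))
```

The same recipe applies to regression coefficients: re-fit the model under each set of replicate weights and accumulate the squared deviations, which is what a jackknifed OLS procedure does.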