Caution Flags For Tech In Classrooms

LA Johnson / NPR

A group of recent studies on technology in education, across a wide range of real-world settings, has come up far short of a ringing endorsement.

The studies include research on K-12 schools and higher ed, both blended learning and online, and show results ranging from mixed to negative. A deeper look into these reports gives a sense that, even as computers become ubiquitous in classrooms, there's a lot we still don't know — or at least that we're not doing to make them effective tools for learning.

First, a quick overview of the studies and their results:

Last fall, the Organization for Economic Co-operation and Development published its first international analysis, and one of the largest ever, of student access to computers and how that relates to student learning. (The OECD administers the PISA test, the world-famous international academic ranking.)

For this report, the researchers asked millions of high school students in dozens of countries about their access to computers both in the classroom and at home, and compared their answers to scores on the 2012 PISA. Here's the money quote:

"Students who use computers very frequently at school do a lot worse in most learning outcomes, even after controlling for social background and student demographics."

That's right. Lots of computer time meant worse school performance — by a lot.

A little bit of computer use was modestly positive, the authors found. But countries that invested the most in technology for education in recent years showed "no appreciable results" in student achievement.

And, striking at the root of one of the biggest claims made about tech in education, "perhaps the most disappointing finding in the report is that technology is of little help in bridging the skills divide between advantaged and disadvantaged students."

Now let's move to the U.S. In April, the research firm SRI published a report at the behest of the Bill & Melinda Gates Foundation (which is a supporter of NPR Ed). It looked at college courses that are using so-called "adaptive learning" software as an enhancement to blended courses.

NPR Ed has covered adaptive learning before. The creators of one of the products examined in this report compared the technology to "a robot tutor in the sky that can semi-read your mind."

The results in this study were a bit more prosaic. Researchers looked at course grades, course completion and in some cases scores on common assessments across 14 colleges and 19,500 students.

"We saw no effects, weak effects, and modest positive effects," says study co-author Louise Yarnall.

Finally, a study published in July looked at high-achieving eighth-graders across North Carolina who had the opportunity to take Algebra I online. The study found that they did much worse than students who took the course face-to-face — about a third of a letter grade worse, in fact.

The study author, Jennifer Heissel, a doctoral student at Northwestern University, noted that across education research, "There's not a lot of cases where you see these big of drops in high-achieving students. Usually you can throw a lot at them."

A note of caution: These studies are all very different in their settings, their designs and the types of technology examined.

What they do have in common, besides results that would disappoint most ed-tech cheerleaders, is that they were field studies. They looked at how technology is really being used, beyond the hype.

"This is technology that people have been developing for 30 years in the lab," Yarnall observed. "This is one of the first chances to see how it looks out in the wild, with real students, real instructors and all the variables."

The authors all told NPR Ed that their studies are not perfect, with a lot of gaps in the data. But here are some observations we can make.

  • Implementation is really important, yet it's often ignored.

    In the SRI higher education study, "The major concern expressed by instructors was getting students to use the adaptive courseware frequently enough."

    In other words, these colleges had applied for grants, invested in the software programs, invested in retraining their instructors and redesigning courses, invested further time in adapting the software to individual courses, and spent time participating in the evaluation. But they didn't go the last mile, or the last thousand feet, to ensure that students were actually using the software, or perhaps to make clear to them why it was potentially helpful.

    Learning software collects lots of information on student usage, which could in theory have made it possible to relate the time that students actually spent on the software to outcomes. But the organizers of this study faced logistical and ethical hurdles in actually getting ahold of that data.

    It's as if you tried to do a medical evaluation of a bunch of new headache medicines, but with no information on whether, or how much of, the medicine the patients actually took.

  • Imperfect data and inadequate evaluation make it hard to understand or improve the use of ed-tech.

    The OECD survey asked about the availability of computers and the frequency of computer use in math lessons and for homework. But it gives very little sense of what, exactly, the various countries are doing with all those computers in the classroom: what software they're using, or what training teachers get.

    In the SRI study, despite its size and the resources devoted to it, the researchers faced a lot of "challenges to validity," as co-author Yarnall observed.

    Colleges each designed their own impact evaluations. They didn't always find it feasible to administer a pre- and post-test, which is considered a better measure of student learning than course grades.

    In the seven cases where Yarnall's team could make side-by-side comparisons of common learning assessments, they found a "modest but significantly positive effect" of the adaptive software.

    In the algebra study, Northwestern's Heissel says she had no information on which students took the course in which setting. She couldn't distinguish among students who studied at home on their own time; in a computer lab full of students taking different courses, with an adult there simply to supervise; or in a computer lab alongside other students taking Algebra, with a certified math teacher on hand to answer questions.

    That last scenario for teaching math, sometimes called the "emporium model," has proven very successful in other studies. Heissel says she "would love the chance to study teacher quality" as a factor in online courses.

  • Computers are enhancing access. There's less evidence that they're enhancing learning.

    In the North Carolina study, the students taking algebra online in eighth grade would otherwise not have had the chance to take it until ninth grade. Even knowing they might earn a lower grade or learn less, it's possible they would still choose to take the course online, either to get it out of the way or to accelerate.

    "It's up to the parents, the districts, and the students to weigh the lower grade against the increased access to courses," Heissel says.

    Similarly, the four-year colleges in the SRI study were specifically using adaptive courseware to let more students into so-called gateway courses.

    These are the general-education requirements that are often oversubscribed at large public universities. Again, in this situation, colleges and their students might prefer to have the increased access that software provides — even if their results are no better.

    "I was chatting with one of the grantees at a four-year that had underwhelming impacts," says Yarnall. "I asked, 'Are you going to keep going?' And they said, 'Absolutely.' I have students who can't get into courses in the timeline they need to. So they want these options. Colleges are looking to become more flexible."

Copyright 2020 NPR. To see more, visit https://www.npr.org.

Anya Kamenetz is an education correspondent at NPR. She joined NPR in 2014, working as part of a new initiative to coordinate on-air and online coverage of learning. Since then the NPR Ed team has won a 2017 Edward R. Murrow Award for Innovation, and a 2015 National Award for Education Reporting for the multimedia national collaboration, the Grad Rates project.