Hat tip to Steve G for sending me this NY Times article by Annie Murphy Paul. It’s about computer tutoring.
In contrast to a human tutor, who has a nearly infinite number of potential responses to a student’s difficulties, the program is equipped with only a few. If a solution to a problem is typed incorrectly — say, with an extra space — the computer stubbornly returns the “Sorry, incorrect answer” message, though a human would recognize the answer as right. “In the beginning, when Tyler was first learning to use ASSISTments, there was a lot of frustration,” Andrea says. “I would sit there with him for hours, helping him. A computer can’t tell when you’re confused or frustrated or bored.”
Heffernan, as it happens, is working on that. Dealing with emotion — helping students regulate their feelings, quelling frustration and rousing flagging morale — is the third important function that human tutors fulfill. So Heffernan, along with several researchers at W.P.I. and other institutions, is working on an emotion-sensitive tutor: a computer program that can recognize and respond to students’ moods. One of his collaborators on the project is Sidney D’Mello, an assistant professor of psychology and computer science at the University of Notre Dame.
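The "extra space" failure Paul describes is what you get from exact string matching. Here's a minimal sketch of the difference answer normalization makes; this is purely illustrative, not ASSISTments' or Carnegie Learning's actual code:

```python
def grade(submitted: str, correct: str) -> bool:
    """Naive exact-match grading: any stray whitespace fails a right answer."""
    return submitted == correct

def grade_normalized(submitted: str, correct: str) -> bool:
    """Trim, lowercase, and collapse internal whitespace before comparing."""
    def norm(s: str) -> str:
        return " ".join(s.strip().lower().split())
    return norm(submitted) == norm(correct)

# The failure mode from the article: a trailing space sinks a correct answer.
print(grade("x = 4 ", "x = 4"))             # exact match rejects it
print(grade_normalized("x = 4 ", "x = 4"))  # normalization accepts it
```

A human tutor does this normalization without thinking; the point of the anecdote is that the software shipped without even this much forgiveness.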
Katharine Beals blogs her skepticism.
We have checked out some math and English software as part of our research into Match Next. So far: generally unimpressive.
We’re long-term bullish on computers helping kids learn, but short-term skeptical of the products out there. The math products are better than the English ones.
Our experience in the 9 turnaround Houston schools, as measured by Harvard economist Roland Fryer, was that human tutors helped kids make unprecedented gains; the computer tutors produced none.
In fact, the computer tutoring software in Houston was a Carnegie Learning product, from one of the companies profiled in the article. Fryer’s finding, no gains for the computer-tutored students, squares with a different study that Paul mentions.
A review conducted by the Department of Education in 2010 concluded that the product had “no discernible effects” on students’ test scores, while costing far more than a conventional textbook, leading critics to charge that Carnegie Learning is taking advantage of teachers and administrators dazzled by the promise of educational technology.
Koedinger counters that “many other studies, mostly positive,” have affirmed the value of the Carnegie Learning program. “I’m confident that the program helps students learn better than paper-and-pencil homework assignments.”
That’s an interesting turn of phrase. I thought we were comparing computer tutoring to human tutoring. But Koedinger is comparing computer tutoring to old-fashioned homework.
Perhaps he has a point. In his world, most schools are not weighing the ROI of computer versus human tutoring. They’re weighing the ROI of a pricier computer-assisted math curriculum versus the cheaper old-fashioned kind.