Sorry folks. We’re going uber-nerdy today. I want to figure out what I think by writing out what I think. You’re free to come along for the ride. If not, there are way more fun things to read.
Today we have:
1. Colin Hitt
2. Matt Di Carlo
1. Colin Hitt
So you say charter schools don’t work. That’s an empirical claim. It needs to be backed up by evidence. Here’s a helpful guide to the most rigorous research available. Once you’ve tackled this material, you’ll be in position to prove your point.
As you probably know, the gold standard method of research in social science is called random assignment. Charter schools are particularly well-suited for random assignment evaluations, since they’re usually required by law to admit students by lottery. The lotteries are fair to families – that’s why they’re put in place. But they also allow researchers to make fair comparisons between students who win or lose lotteries to attend charter schools.
To date, nine (updated: 10) lottery-based evaluations of charter schools have been released. Let’s go through them, starting with the earliest work.
Here’s the punch line:
Altogether, these studies have remarkably similar findings that urban charter schools are producing significant gains in reading or math, or both. Suburban charter schools perform less well – you could cite this fact, but frankly this is a minor concern in the battle to close the racial achievement gap in American education.
2. Matt Di Carlo
He works for the Shanker Institute. He strikes me as fair and nerdy. I like both these characteristics, and share the second one (and aspire to the first).
Among the more persistent arguments one hears in the debate over charter schools is that the “best evidence” shows charters are more effective….
The basic point is that we should essentially dismiss – or at least regard with extreme skepticism – the two dozen or so high-quality “non-experimental” studies, which, on the whole, show modest or no differences in test-based effectiveness between charters and comparable regular public schools.
In contrast, “randomized controlled trials” (RCTs), which exploit the random assignment of admission lotteries to control for differences between students, tend to yield positive results. Since, so the story goes, the “gold standard” research shows that charters are superior, we should go with that conclusion.
RCTs, though not without their own limitations, are without question powerful, and there is plenty of subpar charter research out there. That said, however, the “best evidence” argument is not particularly compelling (and it’s also a distraction from the positive shift away from obsessing about whether charters do or don’t work toward an examination of why). A full discussion of the methodological issues in the charter school literature would be long and burdensome, but it might be helpful to lay out three very basic points to bear in mind when you hear this argument.
1 – Only a relatively tiny handful of charters have ever been part of an RCT….
You can read his whole critique here.
He goes on to argue:
In general, charter schools are one of the few reforms under heavy discussion today that actually has a solid, long-term research base. If some charter supporters want to argue that this body of evidence – RCT and otherwise – suggests that charters might be more effective with low-income students, or when they set up shop in urban areas, there’s a case to be made there (though the evidence is far more mixed than is sometimes implied).
However, the research overall is rather clear – there are good, bad and medium charters, and the same goes for regular public schools.
Those who cling to the “best evidence” theory certainly score points for social scientific caution, but taking this viewpoint too far – i.e., essentially ignoring all research but RCTs – seems like wishful thinking more than anything else. And, more importantly, it distracts from the far more important task of trying to explain the wide variation in measured charter performance in terms of concrete policies and practices, which can inform all schools, regardless of their governance structures. The charter movement as a whole seems to be moving in this direction. Let’s hope this continues.
I’m not fully sure what I think. One reason I write this blog is that the actual writing helps me decide what I think. So off we go.
a. I Heart RCT
Like Colin Hitt, I really value RCTs as “gold standard research.” I believe in the scientific method. I wrote an essay about that here.
As a sidebar, my brain is now asking Why. Why do I value this particular type of study?
My brain says:
* I’m influenced by scholars like MIT’s Josh Angrist and several from Harvard (Roland Fryer, Tom Kane, Marty West, etc.), who champion this method, not as perfect, but as better than other methods.
* I’m influenced by Tom Loveless, a scholar predisposed to pointing out weaknesses in other research that is less empirically strong.
* Many nights when my wife and I talk about how our day went, I’m struck by the contrast between education research (which seems to go in “fad waves”) and cancer research (my wife’s work).
Closing the achievement gap and curing cancer are tough problems. If I had to bet on where bigger progress will be made in the next 10 years, I’d bet cancer, because for all its faults — and there are many — RCTs are standard operating procedure. Learning in that field is generally reliable (insert a million exceptions here). Researcher A discovers a little thing. Researchers B-Z get to build on that.
Let me restate. There are many limitations of RCTs. Ed Liu inserted some yesterday into the comments section of this blog, including specifically that an edu-RCT is different from a medi-RCT.
For those of you youngsters thinking of going to a PhD program in the social sciences, Christopher Winship wrote an extremely wonky paper about RCT limitations, which is here. Disclosure: Chris once bought me a sandwich at the Harvard Faculty Club, and I thought it was soggy. Plus no chips, just some upscale lettuce on the side.
Anyway, what do they say about democracy? It’s the worst form of government except all the others that have been tried? That’s RCTs in education research.
End of sidebar.
Annnnnnnnyway, Hitt’s point — 10 gold standard studies of charters, all positive — shouldn’t be taken lightly.
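For the extra-nerdy, here is a minimal sketch of the logic behind those lottery studies. Every number below is invented for illustration — it’s a toy model of the method, not a recreation of any actual study.

```python
import random

random.seed(0)

# Toy model of a lottery-based charter evaluation. All numbers invented.
applicants = 2000

# Each applicant has an unobserved baseline ability score.
baseline = [random.gauss(50, 10) for _ in range(applicants)]

# The lottery assigns seats at random, so winners and losers have the
# same baseline distribution on average -- that's the whole trick.
winners = set(random.sample(range(applicants), k=applicants // 2))
is_winner = [i in winners for i in range(applicants)]

# Suppose (hypothetically) attending the charter adds a 3-point gain.
CHARTER_EFFECT = 3.0
outcome = [
    b + (CHARTER_EFFECT if w else 0.0) + random.gauss(0, 5)
    for b, w in zip(baseline, is_winner)
]

def mean(xs):
    return sum(xs) / len(xs)

win_mean = mean([o for o, w in zip(outcome, is_winner) if w])
lose_mean = mean([o for o, w in zip(outcome, is_winner) if not w])

# Because assignment was random, the simple difference in means is an
# unbiased estimate of the charter effect -- no fancy controls needed.
print(round(win_mean - lose_mean, 1))
```

The printed difference lands close to the 3-point effect we baked in, even though we never observed baseline ability. That’s what random assignment buys you.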
b. I agree with Di Carlo
Here’s a simple thought experiment. If you could do an RCT of every charter school in the nation, what do you guess you would find? Two likely choices.
Choice 1. A mix of good, bad, and medium charters, in roughly equal proportions.
Choice 2. Many more good charters than bad (i.e., something similar to what we find in the 10 small RCT studies that Hitt mentions).
My guess: Choice 1. That is, my guess is for every Boston, where the RCTs show amazing results for charters, there’s a Rhode Island, where the charter sector is unimpressive.
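My hunch here can be made concrete with another toy simulation (again, every number is invented): lotteries only happen at oversubscribed schools, and if parent demand tracks quality even a little, the RCT-eligible slice of the sector will look better than the sector as a whole.

```python
import math
import random

random.seed(1)

# Toy model, all numbers invented: a sector of mixed-quality charters.
n_schools = 500
effects = [random.gauss(0, 5) for _ in range(n_schools)]  # good, bad, medium

def oversubscribed(effect):
    # Assumption: better schools are more likely to have waitlists,
    # so only they hold lotteries and can show up in an RCT.
    return random.random() < 1 / (1 + math.exp(-(effect - 2)))

rct_sample = [e for e in effects if oversubscribed(e)]

sector_mean = sum(effects) / len(effects)
rct_mean = sum(rct_sample) / len(rct_sample)
print(round(sector_mean, 2), round(rct_mean, 2))
```

In this made-up world, the full sector averages out near zero while the RCT-studied subset looks clearly positive — which is roughly Di Carlo’s point 1 about the tiny handful of charters that ever get into an RCT.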
Like Di Carlo, I think the larger studies of charters are less empirically strong, but they’re not worthless. A study like CREDO NYC, which came out yesterday and is not an RCT, has findings similar to Hoxby’s RCT. So I would guess that CREDO USA is probably a decent proxy for overall charter quality. It’s. A. Guess. I could be wrong. I am often wrong.
c. What I wish
Some charter opponents approvingly quote Macke Raymond of Stanford for her non-RCT CREDO study (when it says nationally only 17% of charters are “better”), then denigrate the same Macke Raymond for her CREDO study when it says a particular subset of charters (like NYC or NJ) is unusually good. And vice versa for some charter advocates.
In a fantasy world, folks from very effective (while still highly imperfect!) charters wouldn’t have to invest so much energy just defending their right to exist. If that happened, discussion would move more to what Di Carlo hopes:
…the far more important task of trying to explain the wide variation in measured charter performance in terms of concrete policies and practices, which can inform all schools, regardless of their governance structures.
That’s my fantasy.