On Tuesday morning, the National Education Policy Center at the University of Colorado, Boulder, released a study that is highly critical of the LA Times’ controversial system of teacher evaluations, published last summer.
The new report, by University of Colorado researchers Derek Briggs and Ben Domingue, found that “the research on which the Los Angeles Times relied for its teacher effectiveness reporting was demonstrably inadequate to support the published rankings.”
After the LA Times folks got an early look at Briggs and Domingue’s study, they rushed out a story of their own regarding the study’s findings.
The headline on Monday’s LAT story read as follows:
Separate study confirms many Los Angeles Times findings on teacher effectiveness
In the sub hed, the Times admitted that the Colorado study raised “…some questions about the precision of ratings as reported in The Times,” but most of the story suggested that the new research was more validating than it was critical.
Okay, well… now compare those characterizations with the title of the press release for the U of Colorado study:
Research Study Shows L. A. Times Teacher Ratings Are Neither Reliable Nor Valid
Then the press release really revs up:
Based on the results of the Briggs and Domingue research, NEPC director Kevin Welner said, “This study makes it clear that the L.A. Times and its research team have done a disservice to the teachers, students, and parents of Los Angeles. The Times owes its community a better accounting for its decision to publish the names and rankings of individual teachers when it knew or should have known that those rankings were based on a questionable analysis. In any case, the Times now owes its community an acknowledgment of the tremendous weakness of the results reported and an apology for the damage its reporting has done.”
In short, the Colorado study does more than “raise some questions.” It’s an outright frontal attack.
BUT BEFORE WE GO FURTHER, LET’S REVIEW THE BACK STORY TO ALL THIS.
As you will remember, this past August the LA Times ranked 6,000 LA Unified School District elementary school teachers into categories ranging from “most effective” to “least effective.” The rankings were based, in the simplest terms, on whether the teachers’ students improved, stayed the same, or got worse in their performance on standardized math and English tests. (It’s a little more complicated, but that’s the basic principle.) This method of evaluation has come to be known as “value added.”
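For readers who want a concrete feel for the basic principle, here is a toy sketch in Python. To be clear: this is NOT the Times’ actual model, which was regression-based and far more elaborate; the teachers, scores, and cutoffs below are entirely made up for illustration. The idea is just: score each teacher by the average change in their students’ test results from one year to the next, then bucket teachers into effectiveness categories.

```python
from statistics import mean

# Hypothetical data: teacher -> list of (last_year, this_year) student scores.
# All names and numbers are invented for illustration only.
students = {
    "Teacher A": [(60, 72), (55, 61), (70, 78)],
    "Teacher B": [(80, 79), (65, 63), (90, 88)],
    "Teacher C": [(50, 50), (62, 64), (71, 70)],
}

def average_gain(pairs):
    """Mean change in score across a teacher's students."""
    return mean(after - before for before, after in pairs)

def label(gain):
    """Crude effectiveness bucket based on average gain (arbitrary cutoffs)."""
    if gain > 2:
        return "more effective"
    if gain < -1:
        return "less effective"
    return "about average"

for teacher, pairs in students.items():
    g = average_gain(pairs)
    print(f"{teacher}: avg gain {g:+.1f} -> {label(g)}")
```

A real value-added model would regression-adjust for prior achievement, student demographics, and other variables, and the Briggs–Domingue dispute is precisely about which of those controls belong in the model.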
The Times published the rankings in a searchable database that made public the ranking of the 6,000 teachers. This caused the LA teachers’ union, UTLA, to go utterly ballistic. Union prez A.J. Duffy urged his members to cancel their subscriptions to the LA Times. Rallies were held and so on.
Yet, the series of articles, written and reported primarily by Times reporters Jason Song and Jason Felch, jump-started a long-overdue local and national conversation on the subject of merit-based teacher evaluations in a way that nothing else had.
Not surprisingly, the series began winning awards.
ENTER U of COLORADO RESEARCHERS DEREK BRIGGS AND BEN DOMINGUE
In their study, Briggs and Domingue say that when they attempted to reproduce the Times’ findings (while also controlling for additional variables that the Times’ researcher did not employ), they got very different results:
For example, when they looked at how the teachers did with reading test outcomes, their findings included the following:
• More than half (53.6%) of the teachers had a different effectiveness rating under the alternative model.
• Among those who changed effectiveness ratings, some moved only moderately, but 8.1% of those teachers identified as “more” or “most” effective under the alternative model are identified as “less” or “least” effective in the L.A. Times model, and 12.6% of those identified as relatively ineffective under the alternative model are identified as effective by the L.A. Times model….
It goes on from there.
The dueling studies first caught my attention when I happened to hear U of Colorado researcher Derek Briggs on a segment of the same Monday Patt Morrison show that I had just been on. [See post below.]
Briggs was joined on the segment by the Times’ editor on the teacher evaluations project, David Lauter. For the first half of the segment, Briggs roundly criticized the Times’ findings and methodology. Then, in the following half, Lauter cheerily ignored and/or spun everything that Briggs had to say.
I found the exchange to be very disconnected and perplexing.
You can listen for yourself here.
It turns out I was not alone. Education writer Emily Alpert of the Voice of San Diego was similarly flummoxed by the discrepancy between the new study and the Times’ take on it.
Then late Monday night the National Education Policy Center, along with researchers Briggs and Domingue, issued its own unhappy rebuttal to the Times’ article. It began:
Yesterday, on Monday February 7, 2011, the Times published a story about this new study. That story included false statements and was generally misleading….
Look: I have no idea whether or not Briggs and Domingue have a more accurate model for teacher evaluation than the Times does. Or if the truth is somewhere in between.
I do know, however, that if we are to continue the important conversation that the Times’ series started, we need to make sure that conversation is fact based.
Monday, it wasn’t.
PS: According to the U of Colorado, their study was embargoed until Tuesday morning, a stricture that the LA Times merrily ignored.
PPS: Both Jason Felch and Jason Song are very good journalists whose work I respect and admire. Thus I can’t help but wonder if this urge to spin the contents of the Colorado study came from above their pay grade.