Sunday, April 3, 2011

I had a glimpse into a Value-Added Classroom, and all I can say is "Go Back"

Do you remember the movie Say Anything, and the joke in Diane Court's valedictory speech, "I have glimpsed our future, and all I can say is, 'go back'"?  Well, if value-added is our future, I'm with Diane.  Maybe not as staunchly, but after what I read today, I am leaning that way.

An article in today's AJC starts out:

HOUSTON — Andres Balp’s Texas classroom provides a glimpse of the data-driven future facing Georgia teachers and students.

In his fourth-grade room at Houston’s Lyons Elementary, the focus is clear: measuring exactly how well students are progressing. Children are grouped by how they have done on standardized tests, making it easier for Balp to work with the lower-performing kids. A poster charts daily test scores — black ink for high marks, red for low — to show who’s on track to hit state exam goals. Students follow their own progress too: Taped to each desk is a small square of colored paper with the student’s goal score for a daily 12-question assessment.

Balp, a 17-year classroom veteran, holds the master key to all this data: a three-ring binder filled with graphs showing test scores for every student — and forecasts of how they’re expected to do going forward. It informs the interactions he has with his students.

His income hinges on this data. So does his job.

As for me, I am still reading everything about value-added data and evaluations that I can find.  I tend to think it's okay, with some qualifiers (namely, that it be used, as the article goes on to say, to help teachers help their students, and not as a means to penalize teachers).  But when I read this article, I was struck by the classroom described in the lead.  In particular, I have to say that I would not want my children in this room, or any like it.  Nor would I want to teach like this.

I don't want anyone tracking every minute grade like a fantasy baseball stat, and neither do I want every action in any classroom dictated by some metric, as the article would have us believe all good teaching must be.  I am all for the extrinsic motivation of the old-fashioned "gold star" charts, but the red-and-black chart letting students know who's on track to meet state testing requirements?  Doesn't anyone fear that this goes a bit too far, perhaps losing the kids with too much red and not enough black to a self-fulfilling prophecy?  I absolutely believe (or really, know) that we need some data to evaluate where we stand, but this goes too far.

What really put me over the top, though, were the goal charts taped to the desks, announcing each student's target score for the daily assessment.  Doesn't anyone see anything wrong with these "forecasts" of what is expected of each student, like abstract stock figures, with growth and plateau assessments calculated -- well heck, somewhere I am sure there are actuaries involved, at this rate -- to let us know what to buy and what to sell, and when, perhaps, to exercise our options?  And what, pray tell, are those options?  Do we really want our kids learning by checklist, as this would seem to have them do?  Today, you will get a 90%, and then you've met your goal.  All learning can cease.  Check, check, check, list is done!  No.  These constant checklists and benchmarks send the WRONG message about learning.

Not all that is worth doing in a day can be quantitatively measured, and I don't want my children's days determined by only that which can be.  Setting daily goals is important, but we need qualitative ones as well, and we don't need checklists on the desks to press home the point that we're all working towards a goal.  What I fear these checklists convey is that we're working towards passing some test, as opposed to learning and growing and becoming better readers, or better at math, or better at writing.  Do tests measure that?  Sure.  But we don't need to frame every single day's goals in terms of tests and cutoff scores.  We need to learn (no, I am not so trite as to say "for the sake of learning," though I'm close to it) and learn and learn, and the test scores will follow.  "Goal" test scores should not dictate or determine our learning plan.

I know we're data-driven right now, and I know that in many ways value-added methods provide a much-needed antidote to the problem of teachers who don't belong in a classroom.  Yes, they can help us show "cause" to get rid of the obvious bad apples.

I'm just not sure that the good outweighs the bad.  I don't like this classroom of the future one bit.  Not one. Little. Bit.

Here are a few more excerpts for you to gnaw on while I head to bed.  They come back to the law of diminishing returns, which I think I've mentioned before on this blog (and if not, I will certainly discuss it in the coming days); to what to do when your kids are already at the top; and to what to do when the evaluation method fails.  Enjoy.

Because the data predicts a student’s future performance based on past results, teachers have better insight into how their students should be scoring in class. If a student isn’t on track, the teacher can offer remediation. It also allows teachers to work with their colleagues to address weaknesses revealed by the data. For example, if one fourth-grade teacher isn’t “adding enough value” in math, and another fourth-grade teacher’s students aren’t scoring well in language arts, the two might swap classes for one subject.

...

But in some areas — including classes of highly advanced students and students moving from Spanish to English — a teacher’s true impact isn’t showing up in the test data, she said.

“We do have some principals who say, ‘My best teachers in these certain areas are not being identified as good teachers, but we know they are,’” Stevens said. “We’re trying to run the numbers, investigate and see where this is a problem, and where it is a perceived issue but not a real issue.”

Darilyn Krieger, a physics teacher at Carnegie Vanguard High, a Houston school for gifted and talented students considered one of the best in the nation, said she is proof value-added data can be misleading.

Krieger said last year she earned almost $6,000 because of her student-growth scores. But this year, the data showed her students didn’t learn a year’s worth of information, even though 100 percent are passing and doing college-level coursework.

That’s because the district uses a test that doesn’t measure what her students learned in the current year. Instead, it’s an exit exam designed to measure what they have learned in science over 11 years. Her chief concern is the value-added data’s lack of transparency — it isn’t specific enough to show where her students did not meet expectations.

“I am being measured and told I didn’t do my job, but they can’t tell me what I did wrong,” Krieger said. “I don’t care what my scores were. I don’t care if I get a dollar on this stupid award. I want my kids to do well because they know the material.”

...

Supporters of using student test data to rate and pay teachers almost universally agree that it can’t be the only factor used in decision-making. In Georgia, 50 percent of the evaluation of teachers in “core” subjects — those covered by standardized tests — will be based on the growth scores; the other half will come from classroom observations, lesson plans and student surveys. Core teachers, such as those in math and writing, make up about 30 percent of the educator workforce.


Note:  this article is entitled "In a Texas Classroom, Big Lessons for Georgia."  It was written by Jaime Sarrio and appeared as the front-page story in the AJC on April 3, 2011.  Here's a link, though I am not sure it is a permalink:  http://www.ajc.com/news/georgia-politics-elections/in-a-texas-classroom-895675.html

1 comment:

  1. We have, in my large high school, been driven to madness by the forced march towards data-driven curriculum. This is far worse than the "teach to the test" method that preceded it and worlds away from a problem-solving approach. Now we are left to sacrifice instructional days for the sake of generating data that tells us we don't cover enough curriculum. See the paradox?

    Well, I also have a small collection of math texts from the late 1800s/early 1900s that I draw upon for "interesting" word problems. One day I asked the kids (seniors taking PreCalc) if they felt "prepared" for the next level. They informed me that these interesting problems prepared them more than the "contrived crap" (their words). They were clearly frustrated with our general approach.

    When did we stop attempting to prepare our students for being practical problem solvers and start preparing our students to become robotic test takers?
