The Science of Teacher Evaluation Manipulation
Hopefully, another semester has come to a close for you and you’re catching up on some much-needed research/sleep. After I’ve doled out grades for my students, I usually get a nice big stack of evaluations of my teaching abilities, filled out by those very same students who squeaked by with a “C-” in my class. At my previous university, it was the ONLY way my teaching was evaluated; for better or worse, no senior faculty or peers ever evaluated my teaching content, style, or skills in the classroom. A whopping 40% of my annual evaluation came from what my students recorded on bubble-sheets and, occasionally, their written comments.
As a social scientist, I have had some general questions about the validity and the reliability of the whole process. Do students really know a good teacher when they see one? Isn’t this a little bit like letting the inmates evaluate the prison warden? I was glad to know that there has been a ton written on the topic, some of which has been summarized as implying that student evaluations of instructors are “highly reliable” and “at least moderately valid.” Others, however, disagree or call for more research.
As a political scientist, my understanding of power – especially power through manipulation – has also led me to have some additional questions about the evaluation process. Can’t a smart instructor manipulate the system? Fooling a whole bunch of undergrads into thinking you are a better instructor than you are can’t be that difficult, can it? Evaluation-manipulation is widely practiced and discussed, both around the halls of academia and in the extant literature. Colleagues have told me to hand out evaluations on rainy days, to provide midterm evaluations (and not even read them), and to only hand out evaluations after providing extra credit points; some colleagues have even suggested providing donuts on evaluation day. Social science research has shown that these manipulations can work, especially, perhaps, the most controversial manipulation of all: actually boosting grades and “dumbing down” a course to increase your evaluations. As one summary article put it:
students tend to give higher ratings when they expect higher grades in the course. This correlation is well-established, and is of comparable magnitude, perhaps larger, to the magnitude of the correlation between student ratings and student learning (as measured by tests) …Thus, [evaluations] seem to be as much a measure of an instructor’s leniency in grading as they are of teaching effectiveness (Huemer 2005, np).
I’m not sure what to make of this, other than the fact that I now question my good evaluation semesters more (Did I really challenge the students enough? Was I just lenient as a grader?). This brings me to my question for you: is there a better way to evaluate a professor’s ability in the classroom? I’m not sure, but I am definitely interested in exploring options.
Following a practice I saw on Branislav Slantchev’s website a couple of years ago, I try to post my evaluations on my website. The written comments of everyone’s evaluations (mine included) are pretty funny after a glass of scotch. And, if you do read mine, I swear that I didn’t ever wear a midriff shirt to a lecture in 2009.
The evaluation process hasn’t been bad for me. Perhaps because of my understanding of power/motivations, I actually won a college-wide teaching award. That doesn’t mean I can’t question the process as a sole criterion of teaching, however.
I have been known to invite my 150+ student lecture class over for Thanksgiving and then hand out teacher evaluations in the same class period – a slight bump for my evaluations, and no student actually showed up for Thanksgiving. Epic win.