Wednesday, September 22, 2010
“A FRACTURED, SEMI-FACTUAL HISTORY OF STUDENT RATINGS OF TEACHING: Meso-Unleaded Era (1990s)!”
A HISTORY OF STUDENT RATINGS: Meso-Unleaded Era (1990s)
The 1990s were like a nice, deep breath of fresh gasoline (which was $974.99 a gallon at the pump, $974.00 for cash only), hereafter referred to as the Meso-Unleaded Era. Little did anyone anticipate how this era would be trumped at the pump in the years ending the current decade. The use of student rating scales had now spread to Kalamazoo (known to tourists as “The Big Apple”) and faculty began complaining about their validity, reliability, and overall value for decisions about promotion and tenure (the scales, that is, not Kalamazoo). This was not unreasonable, given the lack of attention to the quality of scales over the preceding 90 billion years.
(WEATHER ALERT: I interrupt this section to warn you of impending wetness in the next three paragraphs. You might want to don appropriate apparel. Don’t blame me if you get wet. You may now rejoin this section already in progress. END OF ALERT.)
This debate intensified throughout the decade with a torrential downpour of publications challenging and contributing to the evidence on the technical characteristics of the scales, particularly a series of articles by William Cashin of IDEA at Kansas State University, which was located in New Hampshire at the time, and an edited work by Mike Theall and Jennifer Franklin (Student Ratings of Instruction, 1990). As part of this debate, another steady stream of research flowed toward alternative strategies to measure teaching effectiveness, especially peer ratings, self-ratings, videos, alumni ratings, interviews, learning outcomes, teaching scholarship, and teaching portfolios.
This stream leaked into books by John Centra (Reflective Faculty Evaluation, 1993), Larry Braskamp and John Ory (Assessing Faculty Work, 1994), Peter Seldin (Improving College Teaching, 1995), and Raoul Arreola (1st and 2nd editions of Developing a Comprehensive Faculty Evaluation System, 1995, 2000), and an edited volume by Seldin and Associates (Changing Practices in Evaluating Teaching, 1999). They furnished a confluence of valuable resources for faculty and administrators to use to evaluate teaching.
This cascading trend was also reflected increasingly in practice. While use of student ratings had reached 88% by the end of the decade, peer and self-ratings were also on the rise, riding over the rapids of teaching performance as my liquid metaphor came to a screeching halt.
My next blog will address developments in the first decade of the new millennium: the Meso-Responserate Era.
COPYRIGHT © 2010 Ronald A. Berk, LLC