My blogs reflect my research interests and reflections on issues in teaching, PowerPoint, social media, faculty evaluation, student assessment, time management, and humor in teaching/training and in the workplace. Occasional top 10 lists may also appear on timely topics. They are intended for your professional use and entertainment. If they are seen by family members or pets, I am not responsible for the consequences. If they're not meaningful to you, let me know. ENJOY!
Friday, September 24, 2010
“A FRACTURED, SEMI-FACTUAL HISTORY OF STUDENT RATINGS OF TEACHING: Meso-Responserate Era (2000–2010)!”
The first decade of the new millennium rode on the search engines of the previous decade. A bunch of publications kicked it off with half a dozen edited volumes on student ratings and faculty evaluation published by Jossey-Bass in their Teaching and Learning series (Ryan, 2000, #83; Lewis, 2001, #87; Knapper & Cranton, 2002, #88; Sorenson & Johnson, 2004, #96) and Institutional Research series (Theall, Abrami, & Mets, 2001, #109; Colbeck, 2002, #114).
There were no planet-shattering technical developments, although there were several software options created specifically for online administration and reporting. Only a trickle (OOPS! Sorry. This is just residual fluid from the previous metaphor.) of articles on midterm formative ratings and other topics appeared. Finally, three books hit the Amazon pages in the last half of the decade: Arreola’s (2007) 14th edition of his popular work, Seldin’s (2006) 128th edited volume, and our hero’s (Moi, 2006) psychometric-humorous attempt (see References in next blog).
Most of the activity and discourse on student ratings concentrated on practical issues. There were several trends that continued from the previous decade:
(1) student ratings data were being supplemented with other data, particularly peer review of teaching and course materials and letters of recommendation by each professor’s mommy or daddy, for decisions about teaching effectiveness;
(2) institutions reviewed the quality of their tools and considered either adopting a commercial package, such as THOUGHT, or developing their own “homegrown” scales with online reporting by Academic Management Systems or other support;
(3) the technical quality of many “homegrown” scales prompted me to coin the new psychometric term "putrid"; and
(4) the debate over paper-based vs. online administration grew with cost and student response rates as the deal-breakers. Online packages were available everywhere. The importance of response rates to the validity of student ratings became so critical to the adoption of an online system that this era was named the "Meso-Responserate Era."
We’ve come to the end of this series. I’ve run out of jokes. The last blog will be an epilogue of a few final thoughts and jokes.
COPYRIGHT © 2010 Ronald A. Berk, LLC