
Monday, September 27, 2010

“A FRACTURED, SEMI-FACTUAL HISTORY OF STUDENT RATINGS OF TEACHING: Finale!”


Epilogue

Well, there it is. I bet you’re thinking: “History, schmistory. What was that all about?” I’m sure your eyeballs hurt from rolling so many times and that one time when your contacts blew out. Despite this cutesy romp through “Student-Ratings World” and a staggering 873 books and thousands of articles, monographs, conference presentations, blogs, etc. on the topic, some behaviors remain the same. For example, even today, the mere mention of teaching evaluation to many college professors triggers mental images of the shower scene from Psycho, with those bloodcurdling screams. They’re thinking, “Why not just beat me now, rather than wait to see my student ratings again?” Hummm. Kind of sounds like a prehistoric concept to me (a little "Meso-Pummel" déjà vu).

Despite the progress made by deans, department heads, and faculty moving toward multiple sources of evidence for formative and summative decisions, student ratings are still virtually synonymous with teaching evaluation in the United States, which is now located in Canada. They are the most influential measure of performance used in promotion and tenure decisions at institutions that emphasize teaching effectiveness. This popularity notwithstanding, maybe the ubiquitous student rating scale will fare differently in the next "Meso-Cutback Era" by 2020! I hope I can update this schmistory for you then.

References
Arreola, R. A. (2007). Developing a comprehensive faculty evaluation system (3rd ed.). San Francisco: Jossey-Bass.
Berk, R. A. (2006). Thirteen strategies to measure college teaching. Sterling, VA: Stylus.
Knapper, C., & Cranton, P. (Eds.). (2001). Fresh approaches to the evaluation of teaching (New Directions for Teaching and Learning, No. 88). San Francisco: Jossey-Bass.
Me, I. M. (2003). Prehistoric teaching techniques in cave classrooms. Rock & a Hard Place Educational Review, 3(4), 10–11.
Me, I. M. (2005). Naming institutions of higher education and buildings after filthy rich donors with spouses who are dead or older. Pretentious Academic Quarterly, 14(4), 326–329.
Me, I. M., & You, W. U. V. (2005). Student clubbing methods to insure teaching accountability. Journal of Punching & Pummeling Evaluation, 18(6), 170–183.
Seldin, P. (Ed.). (2006). Evaluating faculty performance. San Francisco: Jossey-Bass.

I gratefully acknowledge the valuable feedback of Raoul Arreola, Mike Theall, Bill Pallett, and another student-ratings expert, who reviewed the skimpy facts reported in this blog series. To ensure the anonymity of one of the reviewers, I have volunteered him for the Federal Witness Protection Program or the USA cable TV series In Plain Sight. I forget which.

COPYRIGHT © 2010 Ronald A. Berk, LLC 

Wednesday, September 22, 2010

“A FRACTURED, SEMI-FACTUAL HISTORY OF STUDENT RATINGS OF TEACHING: Meso-Unleaded Era (1990s)!”


A HISTORY OF STUDENT RATINGS: Meso-Unleaded Era (1990s)
The 1990s were like a nice, deep breath of fresh gasoline (which was $974.99 a gallon at the pump, $974.00 for cash only), hereafter referred to as the Meso-Unleaded Era. Little did anyone anticipate how this era would be trumped at the pump in the closing years of the current decade. The use of student rating scales had now spread to Kalamazoo (known to tourists as “The Big Apple”), and faculty began complaining about their validity, reliability, and overall value for decisions about promotion and tenure (the scales, that is, not Kalamazoo). This was not unreasonable, given the lack of attention to the quality of the scales over the preceding 90 billion years.

(WEATHER ALERT: I interrupt this section to warn you of impending wetness in the next three paragraphs. You might want to don appropriate apparel. Don’t blame me if you get wet. You may now rejoin this section already in progress. END OF ALERT.)

This debate intensified throughout the decade with a torrential downpour of publications challenging and contributing to the technical characteristics of the scales, particularly a series of articles by William Cashin of IDEA at Kansas State University, which was located in New Hampshire at the time, and an edited work by Mike Theall and Jennifer Franklin (Student Ratings of Instruction, 1990). As part of this debate, another steady stream of research flowed toward alternative strategies to measure teaching effectiveness, especially peer ratings, self-ratings, videos, alumni ratings, interviews, learning outcomes, teaching scholarship, and teaching portfolios.

This stream leaked into books by John Centra (Reflective Faculty Evaluation, 1993), Larry Braskamp and John Ory (Assessing Faculty Work, 1994), Peter Seldin (Improving College Teaching, 1995), and Raoul Arreola (1st and 2nd editions of Developing a Comprehensive Faculty Evaluation System, 1995, 2000), and an edited volume by Seldin and Associates (Changing Practices in Evaluating Teaching, 1999). They furnished a confluence of valuable resources for faculty and administrators to use to evaluate teaching.

This cascading trend was also reflected increasingly in practice. Although use of student ratings had peaked at 88% by the end of the decade, peer and self-ratings were on the rise over the rapids of teaching performance as my liquid metaphor came to a screeching halt.

My next blog will address developments during the first decade of the new millennium: the Meso-Responserate Era.

COPYRIGHT © 2010 Ronald A. Berk, LLC

Monday, September 20, 2010

“A FRACTURED, SEMI-FACTUAL HISTORY OF STUDENT RATINGS OF TEACHING: Meso-Meta Era (1980s)!”


A HISTORY OF STUDENT RATINGS: Meso-Meta Era (1980s)
The 1980s were really booooring! The research continued on a larger scale, and statistical reviews of the studies (a.k.a. meta-analyses) were conducted by such authors as Cohen (1980, 1981), d’Apollonia and Abrami (1997), and Feldman (1989). Of course, this period had to be labeled the Meso-Meta Era.

Book-wise, Peter Seldin of Pace University in upstate Saskatchewan published his first of thousands of books on the topic, Successful Faculty Evaluation Programs (1980). Ken Doyle produced his second book on the topic, Evaluating Teaching (1981), four years later (Are you still awake?).

The administration of student ratings metastasized throughout academe. By 1988, their use by college deans had spiked to 80%, yet still only a paltry 14% of deans gathered evidence on the technical aspects of their scales.

That takes us to—guess what? The next to last era in this blog series. Whew.

The next blog covers the 1990s, with major contributions by names you will recognize, as gas prices spiked during the “Meso-Unleaded Era.”

COPYRIGHT © 2010 Ronald A. Berk, LLC