
Tuesday, September 7, 2010

“A FRACTURED, SEMI-FACTUAL HISTORY OF STUDENT RATINGS OF TEACHING: The State of the Art!”


State-of-the-Art of Student Ratings
There is more research on student ratings than on any other topic in higher education. More than 2,500 publications and presentations have been cited over the past 90 years. For the past five decades, those ratings have dominated as the primary and, frequently, only measure of teaching effectiveness at colleges and universities. In fact, the evaluation of teaching has been in a metaphorical cul-de-sac with student ratings as the universal barometer of teaching performance. And, if you’ve ever been in a cul-de-sac or a metaphor, you know what that’s like. OMGosh, it can be Stephen King-ish terrifying.

Surveys over the past decade have found that 86% of U.S. liberal arts college deans and 97% of department chairs use student ratings for summative decisions about faculty. Only recently has there been a trend toward augmenting those ratings with other sources of evidence and better metaphors (Arreola, 2007; Berk, 2006; Knapper & Cranton, 2001; Seldin, 2006).

So how in the ivory tower did we get to this point? Let’s trace the major historical events. Hold on to your online administration response rates. Here we go.

A History of Student Ratings
This history covers a timeline of approximately 100 billion years, give or take a day or two, ranging from the age of dinosaurs to the age of Conan O’Brien’s new cable TV show. Obviously, it’s impossible to squish every event that occurred during that period into this series. Instead, that span is partitioned into six major eras within which salient student-ratings activities are highlighted. A blog will be devoted to each of those eras.

References

Arreola, R. A. (2007). Developing a comprehensive faculty evaluation system (3rd ed.). San Francisco: Jossey-Bass.
Berk, R. A. (2006). Thirteen strategies to measure college teaching. Sterling, VA: Stylus.
Knapper, C., & Cranton, P. (Eds.). (2001). Fresh approaches to the evaluation of teaching (New Directions for Teaching and Learning, No. 88). San Francisco: Jossey-Bass.
Seldin, P. (Ed.). (2006). Evaluating faculty performance. San Francisco: Jossey-Bass.

My 1st era blog will tackle prehistoric student ratings of the “Meso-Pummel Era.” How did cave men and women measure teaching performance? Their methods were a bit crude, but effective.

COPYRIGHT © 2010 Ronald A. Berk, LLC

Monday, September 6, 2010

“A FRACTURED, SEMI-FACTUAL HISTORY OF STUDENT RATINGS OF TEACHING: A Parody!”


WARNING: This new blog series contains buckets of humor, which may not be suitable for all readers. If you have the sense of humor of an avocado or, even worse, a cumquat, this parody is not for you, despite its trailblazing, earth-shattering, Pulitzer Prize-caliber contribution to the teaching evaluation literature. If you fit this description, Buhbye!

ANOTHER BORING HISTORY?
Usually the history of any serious topic triggers the gag reflex in most nonhistorians. However, this topic is different from most. Over the past year, there have been incendiary debates over student rating forms, online administration procedures, and their use and interpretation on several professional listservs, LinkedIn groups, professional blogs, and other electronic and walkie-talkie communications. The reverberations of these debates have been felt on college campuses as far away as Pandora University, where the topic has provoked the verbal equivalent of an Avatar-scale firefight. Rather than fan the flames of this combustible metaphor, I thought this blog series might provide a time-out, a refreshing break from this contentious topic, while also shedding some energy-saving light on how this situation evolved.

FRACTURED, BUT FACTUAL TOO!
Do any of you remember “Fractured Fairy Tales” on The Rocky and Bullwinkle Show? “NO!” What about Boris and Natasha? What were you doing? Oh well, it doesn’t matter, youngin’ academic readers. These blogs are written in the same spirit as that cult, politically satirical cartoon, just without the cult, politics, and cartoon. It is a parody with the bonus of actual events in the history of student ratings. You’ll get a few morsels of content within a humor context. (FACT ALERT: Most of the names, dates, book and scale titles, and survey statistics are correct.)

TWO READER OUTCOMES:
There are two primary outcomes of this series for you:

1. to get a handle on the significant academic activities, research, and major players in the unfolding of the student ratings debate, and
2. to elicit a chuckle or two, maybe a guffaw, in that process.

If you laugh at any time during this series, I hope you will experience one of these physical signs:

a. burst your guts,
b. rupture key internal organs,
c. wet yourself, or
d. spurt your latté or green tea through your nostrils all over your keyboard.

Anything less will be disappointing. (NOTE: Most of the references along my historical path have been omitted to permit more space for jokes. Please refer to my Thirteen Strategies… book for those references and lots more jokes.)

My next blog will begin our historical journey with an overview of the state of the art of student ratings. Hold onto your guts.

COPYRIGHT © 2010 Ronald A. Berk, LLC