Sunday, November 1, 2009

What’s Wrong with Low Response Rates from Online Student Evaluations?


There have been cries, yelps, screams, shrieks, screeches, howls, and other sounds reverberating throughout the academic hallways and byways about low response rates from online administrations of student rating scales. Traditionally, the rates for in-class, paper-based administrations following appropriate standardized procedures have been 80%+.

SAMPLING BIAS
First, let’s be “crystal” clear, Jack Nicholson in A Few Good Men style, about the problem with low response rates. When ratings dip below 80%, you’re in deeeep trouble! WAIT! That’s not the ending to that sentence. Where did it go? Oh, here it is: sampling bias increases, so that the summarized ratings present an unrepresentative, biased picture of teaching performance in a given course. Such ratings could be inflated or deflated by the bias. This is evil, especially when student ratings constitute the only source of evidence on teaching performance, which is the case at many institutions. The results may be useless for either formative or summative decisions.
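
Since the direction of the bias depends on who bothers to respond, a tiny simulation may make the danger concrete. Here is a minimal sketch in Python with entirely hypothetical numbers: a 100-student class with a known “true” mean, and an assumption (for illustration only) that dissatisfied students are three times as likely as satisfied ones to complete an online rating.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical class of 100 students with "true" ratings on a 1-5 scale
# (mean = 3.85). An 80%+ in-class administration approximates this mean.
true_ratings = [5] * 35 + [4] * 35 + [3] * 15 + [2] * 10 + [1] * 5
full_mean = sum(true_ratings) / len(true_ratings)

# Self-selection assumption (made up for this sketch): dissatisfied
# students (ratings 1-2) are three times as likely to respond online.
def responds(rating):
    return random.random() < (0.6 if rating <= 2 else 0.2)

sample = [r for r in true_ratings if responds(r)]
sample_mean = sum(sample) / len(sample)

print(f"Response rate: {len(sample)} of {len(true_ratings)}")
print(f"Full-class mean:    {full_mean:.2f}")
print(f"Self-selected mean: {sample_mean:.2f}")  # deflated by the bias
```

Flip the assumption so that the enthusiastic students respond more often, and the same mechanics inflate the mean instead. Nothing in the summarized ratings tells you which way the distortion ran.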

VALIDITY PROBLEM
The intractable problem is that there is no way to detect the direction or degree of the bias. This low degree of rating score validity makes it extremely difficult to interpret ratings based on the limited sample of students who self-selected to complete the scales. Keep in mind that these inaccurate ratings may still yield a high degree of internal consistency reliability, which can be misleading: reliability gauges only the consistency of the responses, not their accuracy as a picture of the whole class.
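
To see how that can happen, here is a minimal sketch using Python’s standard statistics module and entirely hypothetical ratings from eight self-selected respondents on a four-item scale. Because each respondent answers consistently across the items, Cronbach’s alpha (the usual internal consistency coefficient) comes out high even though the group’s mean is unrepresentative of the full class.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(statistics.pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var / statistics.pvariance(totals))

# Hypothetical ratings from 8 self-selected respondents on a 4-item scale.
# Each student answers consistently across items, but the group skews low.
respondents = [
    [2, 2, 1, 2],
    [1, 2, 1, 1],
    [2, 1, 2, 2],
    [5, 5, 4, 5],
    [1, 1, 2, 1],
    [2, 2, 2, 1],
    [4, 5, 5, 4],
    [1, 1, 1, 2],
]
items = [list(col) for col in zip(*respondents)]  # transpose to per-item lists

print(f"alpha = {cronbach_alpha(items):.2f}")                  # ~0.97: "reliable"
print(f"sample mean = {sum(map(sum, respondents)) / 32:.2f}")  # ~2.3: skewed low
```

On these made-up numbers, the coefficient lands near .97 while the mean sits far below the full-class mean from the earlier sketch. A “reliable” scale score can still be a badly biased one.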


REASONS FOR NONRESPONSE
The response rate for online administrations can be half that of paper-based administrations, or lower. This is a frequent objection to online ratings reported in faculty surveys, and the fear of low response rates has deterred some institutions from adopting an online system. The research on this topic indicates the following possible reasons for the nonresponses: student apathy, perceived lack of anonymity, inconvenience, inaccessibility, technical problems, time for completion, and perceived lack of importance (Ballantyne, 2000; Dommeyer, Baum, & Hanna, 2002; Sorenson & Reiner, 2003).


That’s the problem. Now how do we fix it? Several institutions have tested a variety of strategies, suggested by faculty AND students, to increase response rates by addressing the reasons above. My next blog post will present the Top 10 most effective strategies. Stick around as the plot thickens.


COPYRIGHT © 2009 Ronald A. Berk, LLC
