
Friday, March 5, 2010

WHAT CAN EDUCATORS LEARN ABOUT SCORING FROM THE 2010 OLYMPIC GAMES?


Based on your Olympic scoring observations and yesterday’s blog, what does any of that have to do with education? NOTHING! Kidding. Here are a few thoughts.

EDUCATIONAL IMPLICATIONS
The process of improving the objectivity of scoring and grading in classroom assessment, the evaluation of teaching performance, and program evaluation has been fraught with many of the same challenges: maybe not a full cauldron of controversy, but certainly a large bucket.

Student Assessment. The methods have ranged from multiple-choice tests to essay tests to performance tests to student portfolios (to OSCEs with standardized patients in medicine and nursing). The increased precision of judgmental scoring systems with explicit rubrics has reduced bias and improved the validity and reliability of the scores used to measure learning outcomes.

Faculty Evaluation. Among 14 potential sources of evidence for evaluating faculty teaching, nearly all are based on the judgments of students and “informed” professionals. More than 10 types of bias can affect those judgments. However, triangulating multiple sources can compensate, to some degree, for the fallibility of each individual measure. This approach can also be generalized to promotion and tenure reviews based on the faculty portfolio.
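That triangulation idea is easy to make concrete. Here's a minimal sketch in Python; the evidence sources, their weights, and the scores are all invented for illustration and aren't drawn from any actual evaluation system:

```python
# Hypothetical illustration of triangulating multiple evidence sources
# into one composite teaching-evaluation score. Source names, weights,
# and scores below are invented for the example.

def triangulate(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average over whichever evidence sources are available.

    Each source is scored 0-100; weights are renormalized over the
    sources actually present, so a missing source doesn't drag the
    composite toward zero.
    """
    total_weight = sum(weights[s] for s in scores)
    return sum(scores[s] * weights[s] for s in scores) / total_weight

weights = {"student_ratings": 0.4, "peer_review": 0.3,
           "self_assessment": 0.1, "teaching_portfolio": 0.2}

# One source (self-assessment) happens to be unavailable this cycle.
evidence = {"student_ratings": 82.0, "peer_review": 90.0,
            "teaching_portfolio": 88.0}

composite = triangulate(evidence, weights)
print(round(composite, 1))  # → 86.0
```

Renormalizing over only the sources actually collected keeps one missing measure from silently sinking the composite; whether that is the right policy is, of course, itself a judgment call.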

Program Evaluation. Any of the preceding measures plus a collection of other quantitative and qualitative sources are used to determine program effectiveness based on specific program outcomes. Tests, scales, questionnaires, and interview schedules provide complementary evidence on impact.

STATE-OF-THE-ART
Unfortunately, let's face it: complete measurement objectivity has eluded educators since Gaul was divided into 4.5 parts. It's an intractable problem in evaluating faculty, students, and programs. Psychometrically, the state of the art in behavioral and social assessment is just as fallible as Olympic scoring, but it's the best that we have.

If we could just balance these judgmental scoring systems with a time-based approach (speed-teaching, speed-learning, or speed-mentoring), we could come closer to what we observed in the Olympic events. There might also be a lower risk of injuries and wipe-outs in the classroom.

What do you think? Ideas on any of the above are welcome. If you come up with the solution, you could receive a medal! I'm not sure what the scoring rubric for that medal will be, but I know it will be fair.

COPYRIGHT © 2010 Ronald A. Berk, LLC

Thursday, March 4, 2010

WHAT DID YOU LEARN ABOUT SCORING PERFORMANCE FROM THE 2010 OLYMPIC GAMES?


Beyond the incredible, inspirational moments of these Olympic Games, I've learned sooo much from these athletes, and they've reinforced thoughts and feelings I already had. I thought I'd share a few observations on scoring today.

MEASUREMENT OF ATHLETIC PERFORMANCE
Measurement-wise, we observed 2 basic approaches to the evaluation of athletic performance to award 3 medals: (1) the 3 best times and (2) the 3 highest scores by a panel of judges.

TIME-BASED
The time-based event is the objective “gold standard,” with which no one can quibble, even the 4th-place skier who finished 0.02 seconds behind the bronze medalist. Unless the stopwatch or clock is miscalibrated or starts at the wrong time, it's the most accurate measure available.

JUDGMENT-BASED
In contrast to the clock, the judgment-based approach in Olympic sports such as freestyle aerial skiing and figure skating has been a percolating cauldron of controversy for about as long as teachers have been scoring essay exams, which is to say since ancient Greece. It seems that no matter how carefully the Educational Testing Service-style scoring rubric is structured, for figure skating in particular, there will still be a deluge of criticism from those who consider the system flawed and unfair, even with a really manly quad jump.

In fact, the NEW figure-skating scoring system has benefited those skaters (don't forget to credit their coaches) who can wring every point out of the rubric. Gold medalists Evan Lysacek and Kim Yu-Na played the system with panache; the skaters still struggling to master it weren't as successful. It's a kind of skating to the answer key: assessment-driven skating. Hummm. Does any of this ring a bell in our world? As the commentators noted, however, something is missing from the new system. The program falls short on artistic grounds; it's all about the technical elements, as silver medalist Evgeni Plushenko would vehemently argue.

Despite the illusion of objectivity in the scoring process, it's all about human judgment. The interpretation of each jump, spin, and footwork sequence, and the corresponding assignment of points, is judgmental, even with video replay. Can bias creep into the panel's scoring? You bet, but the impact is less pronounced and less evident than in previous scoring methods. The current system is probably the best to date.
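One reason panel bias is less pronounced now is how the scores are aggregated: judged events commonly use a trimmed mean, discarding the extreme scores before averaging so that a single overly generous or overly harsh judge can't swing the result. Here's a small Python sketch of that idea; the panel size and score values are made up for illustration:

```python
# Illustration of trimmed-mean aggregation of a judging panel's scores.
# The panel and its scores are invented for the example.

def trimmed_mean(judge_scores: list[float]) -> float:
    """Drop the single highest and single lowest score, average the rest."""
    if len(judge_scores) < 3:
        raise ValueError("need at least 3 judges to trim both extremes")
    ordered = sorted(judge_scores)
    trimmed = ordered[1:-1]  # discard lowest and highest
    return sum(trimmed) / len(trimmed)

# One judge scores unusually high (8.75) and one unusually low (6.00);
# the trimmed mean ignores both outliers.
panel = [7.25, 7.50, 7.00, 8.75, 7.50, 6.00, 7.25]
print(trimmed_mean(panel))  # → 7.3
```

Compared with a plain average (which would be pulled up and down by the two outliers), the trimmed mean reflects the panel's consensus, which is exactly the kind of structural safeguard that makes judgmental scoring feel more objective without ever removing the judgment.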

Can we learn anything from these scoring procedures in the Olympics? How do they relate to education and the testing issues with which we struggle? Those educational implications will be examined tomorrow.

COPYRIGHT © 2010 Ronald A. Berk, LLC