Thursday, December 13, 2007

ITC Best Course Awards

For the second year in a row I volunteered to be on the committee that chooses the Best Online Course and Best Blended Course awards for the annual ITC conference - e-Learning 2008 to be held in St. Pete Beach in February. Yesterday we made our selections and chose one winner in each category. I can't tell you who the winners are just yet, but I can talk a little bit about the process. (Past winners here)

We used a scoring rubric that was new and improved this year. A few of the other board members worked hard on revising the award rubrics, and I found the two new rubrics much better and more helpful in selecting winning courses than the old ones. Each rubric contains a total of 20 items, each worth a maximum of 5 points, for a total possible score of 100.

I find it interesting how different people using the same rubric can come to very different conclusions about whether what they're seeing satisfies a rubric requirement or expectation. First, let me say this committee was composed of people who came to agreement very easily on which were the best courses, with very little disagreement and a very strong overall consensus that we made the best choices in both categories. What I find interesting is how we arrived at those same conclusions even though the paths we took were quite different, and each of us values different things more or less than the next person.

For example, my rubric scores for the good courses (IMO) ranged from 70 to 80. The not-so-good courses scored right around 50 points by my calculations. Some of the other judges thought that the good courses scored in the mid-90s out of 100 points. Last year there was an even larger disparity, with one person scoring most of the courses in the 30s and 40s while others scored most of the courses in the 80s and 90s. I was always known as a hard grader; I guess this exercise just proves that some things never change.

Another interesting thing is how certain items tend to take on more importance than their point values would indicate. I think that the judges, myself included, tend to place more importance (than 1/20th of the total score) on things such as accessibility and navigation. For example, giving a score of zero out of five on ADA compliance doesn't seem like a large enough penalty for a course with serious accessibility issues. That course could still score 95 points and be an award winner. However, I think the judges make other adjustments to make sure that doesn't happen.

Another example is when the judges can't find the syllabus for the course. If we can't find it, chances are good the students can't find it, and chances are also good that it isn't there at all. As you can imagine, chances are not good that we will pick that course as an award winner. Strictly by the rubric, the lack of a syllabus should only cost the course 5 to 10 points. In practice, I think it is a death knell, a deal breaker if you will.

This year we actually had fewer nominations in the Best Online Course category, and a few more than last year in the Best Blended Course category. Even though we had fewer online courses to review, in my opinion we had significantly higher quality to choose from this year. I feel that we have a truly outstanding online course award winner, and there are four more worthy of a strong honorable mention, even though we don't officially recognize those that just missed being chosen.