Tuesday, December 18, 2007

E-Learning the Big Loser at FSU

There are somewhere in the vicinity of 25 football players, at least one tutor, and one academic advisor who are losers in the cheating scandal at Florida State University (FSU). Chances are also good that the FSU football team will lose to Kentucky in their lower-tier bowl game coming up at the end of the month. However, I think the biggest loser in all of this is going to be e-Learning in general. (CC Flickr photo by portorikan)

Of course they just had to have cheated on tests in an online course. In our one-size-fits-all world, that means everyone will be talking about how easy it is to cheat in online courses, as if cheating is somehow unique to online courses or more rampant there than in other forms of higher education. I don't believe that, but many people do, and they will now have more ammunition as they talk negatively about e-Learning.

As I have read the somewhat sketchy (not very detailed) stories on the Internet about this scandal, I am struck by how none of what I'm reading appears to be specifically related to the fact that these courses were taken over the Internet. A tutor had apparently (allegedly) been helping the athletes cheat on exams, doing homework for them, and writing papers for them. Gee, does that sound anything like Jan Gangelhoff at the U of Minn several years ago? Those allegations (truths, actually) were very similar, but had nothing to do with Internet-based education. Did people call for the end of all tutoring? Did people call for the end of all term papers? Did people call for the end of all athletic programs? No, no, and no! But this time it will be different. This time there will be a huge cry for the end of Internet-based courses and programs.

There are some interesting comments (almost 1,000 total comments as I write this, but only a few of them are interesting) at the end of a story about this on ESPN.com. Here's a good example from a source of unknown character and reliability (cbusch17): "I'm not excusing this, but for those of you who haven't been in school in the past 5-10 years or so and don't know anything about them, these internet courses are RIDICULOUS. I graduated from college in 2000, but I'm back a second time right now (at another major Florida university). When I went through the first time, we didn't have these classes. This time through, I took a couple of internet courses, but I quit taking them because EVERYONE CHEATS. It is RIDICULOUS. I have a 4.0 in my major right now, but I had to work harder in those internet classes than any other class I have had. Everyone uses their books or works together to take online quizzes/tests (unsupervised testing!?!? WHAT A JOKE!), and for the very few of us that didn't want to cheat, everyone else's overinflated grades made it almost impossible to do well in the class. I was tearing my hair out in these classes. Like I said, that doesn't excuse anyone, but when you put something like this in front of 18 year-old kids, especially ones who are as busy as student athletes, and under extreme pressure to keep their grades up, what do we expect to happen??? Internet courses are a joke, and another example of U.S. school systems cutting performance to save a buck. P-A-T-H-E-T-I-C." (link)

Another commenter (robert_ingalls) says the following: "Who gives online classes and then expects integrity?? Instances such as these are rampant on every campus and not limited to athletes. Stop giving online tests and no one will cheat on them. In a perfect world everyone would hold themselves to a high standard of integrity, but until then..." (same link as above).

A third one joins in but with a different take (cbn00034): " While I agree that Internet classes can be a bit of a joke it really depends on how the classses are administered. In reality the classes could be some of the better classes w/ a greater level of depth. Those administering the classes need to understand and EXPECT that people will huddle up, use notes, etc. So now it is up to them to create a class that requires people to use their brains in a different manner. Perhaps even have them come in to take their test on a computer bank that is proctered, on a scantron sheet, or a bluebook to thwart cheating if the group testing is not desired. It's all up to the schools and administrators. Wow - accountability for both the school and the students! There's a novel concept!"

The final one that I'll post right now is another post from the first one above (mister P-A-T-H-E-T-I-C.) Now he is agreeing with cbn00034: "cbn - Agree, the internet class system has got to change. Proctored exams would be a start, but right now it is so easy for the schools to just post everything online and wash their hands of it...why would they want to change that by creating more work for themselves? Maybe FSU will start the change, now that they obviously have a reason to..."

First of all, these quotes come from individuals whose credibility we have no way to judge, so we can't really know whether we should care about what they have to say. Having said that, I do think they are indicative of the kind of rhetoric that we can expect to swirl around this issue for quite some time now. Some of it will be pro, but most of it will be con.

My prediction: online learning will be the real loser here. The pundits will not blame the cheating student-athletes and the tutor, at least not nearly as much as they blame online learning as the cause of this scandal.

One last note before I close: it is quite ironic that I have been working on another one of my e-Learning Mythbuster questions, and that question deals with whether cheating is running rampant within e-Learning. Stay tuned for that.

Monday, December 17, 2007

E-Learning Mythbusters #6


Myth or Reality?


By using the Quality Matters™ (or similar) rubric and a rigorous quality review process, we have sufficiently answered the persistent questions about the quality of online learning.

The embedded SlideCast below takes a little over 9 minutes to explain my take on the answer to this question (click the green play button at the bottom of the slideshow window). I posed this question as part of the e-Learning Mythbusters presentation because I very often hear Quality Matters™ being offered as the solution to the persistent questions about whether we are attending to the quality concerns about e-Learning.

Lastly, as I state during the SlideCast, we have used an adaptation of Quality Matters at Lake Superior College for the past three-plus years now, and it has been an extremely positive experience overall. See the LSC Peer Review blog for more info.

Thursday, December 13, 2007

ITC Best Course Awards

For the second year in a row I volunteered to be on the committee that chooses the Best Online Course and Best Blended Course awards for the annual ITC conference, e-Learning 2008, to be held in St. Pete Beach in February. Yesterday we made our selections and chose one winner in each category. I can't tell you who the winners are just yet, but I can talk a little bit about the process. (Past winners here)

We used a scoring rubric that was new and improved this year. A few of the other board members worked hard on revising the award rubrics, and I found these two rubrics to be much better and more helpful in selecting winning courses than those used in the past. The rubric contains a total of 20 items, each worth a maximum of 5 points, for a total possible score of 100.

I find it interesting how different people using the same rubric can come to very different conclusions about whether what they're seeing satisfies a rubric requirement or expectation. First of all, let me say this committee was made up of people who very easily came to agreement on which were the best courses, with very little disagreement and a very strong overall consensus that we made the best choices in both categories. What I find interesting is how we could reach those same conclusions even though the paths we took were quite different and we each tend to value different things more or less than the next person.

For example, my rubric scores for the good courses (IMO) ranged from 70 to 80. The not-so-good courses scored right around 50 points by my calculations. Some of the other judges thought that the good courses scored in the mid-90s out of 100 points. Last year there was an even larger disparity, with one person scoring most of the courses in the 30s and 40s while others scored most of the courses in the 80s and 90s. I was always known as a hard grader; I guess this exercise just proves that some things never change.

Another interesting thing is how certain items tend to take on more importance than their point values would indicate. I think that the judges, myself included, tend to place more importance (than 1/20th of the total score) on things such as accessibility and navigation. For example, giving a score of zero out of five on ADA compliance doesn't seem like a large enough penalty for a course with serious accessibility issues. That course could still score 95 points and be an award winner. However, I think the judges make other adjustments to make sure that doesn't happen.

Another example is when the judges can't find the syllabus for the course. If we can't find it, chances are good the students can't find it, and chances are also good that it isn't there at all. As you can imagine, chances are not good that we will pick that course as an award winner. However, the lack of a syllabus itself should only cost the course 5 to 10 points in the scoring. In practice, I think it is a death knell, a deal breaker if you will.
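To put some numbers on that, here is a minimal sketch of how a 20-item, 5-point rubric could treat deal-breaker items such as ADA compliance or a missing syllabus as gates rather than as just another 5 points. This is purely illustrative: the item names, the 70-point threshold, and the gating rule are my own assumptions, not the actual ITC rubric or the judges' process.

```python
from typing import Dict

# Hypothetical deal-breaker items; the real rubric items are not listed in this post.
CRITICAL_ITEMS = {"ada_compliance", "syllabus_present"}

def rubric_score(scores: Dict[str, int]) -> int:
    """Sum 20 items, each scored 0-5, for a maximum of 100 points."""
    assert len(scores) == 20, "rubric has exactly 20 items"
    assert all(0 <= s <= 5 for s in scores.values()), "each item is worth 0-5 points"
    return sum(scores.values())

def award_eligible(scores: Dict[str, int], threshold: int = 70) -> bool:
    """Require both a passing total AND no zero on a deal-breaker item."""
    if any(scores.get(item, 0) == 0 for item in CRITICAL_ITEMS):
        return False  # a zero on a critical item disqualifies the course, whatever the total
    return rubric_score(scores) >= threshold

# A course that aces everything except ADA compliance still totals 95 points...
example = {f"item_{i}": 5 for i in range(1, 19)}   # 18 generic items at 5 points each
example["ada_compliance"] = 0
example["syllabus_present"] = 5
print(rubric_score(example))    # 95
print(award_eligible(example))  # False -- the gate keeps it from winning
```

Something like this is essentially what the judges end up doing informally: the rubric produces the number, but the real decision includes a few pass/fail checks that the point values alone don't capture.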

This year we actually had fewer nominations in the Best Online Course category, and a few more than last year in the Best Blended Course category. Even though we had fewer online courses to review, it is my opinion that we had courses of significantly higher quality to choose from this year. I feel that we have a truly outstanding online course award winner, and there are four more that are worthy of a strong honorable mention, even though we don't officially recognize those that just missed being chosen.

Monday, December 10, 2007

Presentation Proposal Comments

A while back I submitted a proposal to a conference related to teaching with technology. This is the presentation title and abstract (limited to 75 words) that I submitted (some of you may recognize it as one of my standard presentations):

Web 2.0 Whirlwind--Free Web Tools

There are many new Web applications that are free and easy to use. Many of these services have specific applications in higher education. The presenter will demonstrate these free applications currently being used by students, faculty, and staff. Applications related to digital photos and video, digital music tools, one-to-one and one-to-many communications, web office, and other services are demonstrated. A presentation wiki containing all resources is shared for use after the conference.

This week I received an email that started with the following: "Congratulations! Your session has been accepted for (blah-blah-blah)."

Normally that would be a pretty good email. However, by the end of it I was more than just a little bit offended. Through a pretty good use of technology, the conference organizers give you access to a password-protected site where you find out what four anonymous reviewers thought about your presentation proposal.

  • 1st Reviewer said nothing.
  • 2nd Reviewer said: "I want to attend this session! :-) "
  • 3rd Reviewer said: "I would like to see the Presentation Abstract expound just a bit more on the types of tools attendees would see or use."
  • 4th Reviewer said: "Better title it assumes to much/doesn't say enough. "Web 2.0 Whirlwind" ?? and "Free Web Tools" is what the presenter will demonstrate; "those free applications currently being used by students, faculty, and staff." To do so Web 2.0 is a given. How will this lead to a discussion and use in the 3.0 - now and in coming future - is also something" (and was apparently cut off for exceeding the word limit)
Don't you just love anonymous reviews? How exactly do you know whether that person has any credibility? Should I care about the opinion of a person who uses "to" when he means "too" as part of a run-on sentence? Maybe I should, but how do I know?

Below is a copy and paste from the email I sent to the conference organizers:
**********************************

I just wanted to provide a little bit of feedback regarding the feedback I've received from the reviewers of my session proposal.

I'm actually feeling a bit insulted by a couple of the comments. So much so that right now I am inclined to no longer submit proposals for (your conference) in the future.

This is a presentation that I have given many times in many different settings. Twice it has been rated as the best concurrent session at national conferences. After several of these presentations I have been invited to give similar presentations at various schools and organizations. Funny how none of these attendees felt the need to change what my presentation is about as (the 4th reviewer) would like to do.

Additionally, I see little value in the comment from (the 3rd Reviewer) who "would like to see the Presentation Abstract expound just a bit more on the types of tools attendees would see or use." Does this reviewer know that there is a word limit on the abstract? How exactly can someone expound more while remaining within the word limit?

Maybe I'm the only person out here who doesn't appreciate being talked down to by an anonymous reviewer. If I am, then you have nothing to worry about. If there are others who feel the way that I do, then you might want to re-think your system of reviewer comments.

Respectfully submitted, Barry Dahl

(end of email) ******************

Was I making too much of this? Should I just let it slide? Is it just me?

****************************
I was ready to post the item above when I did hear back from the conference organizers. They replied to my email shown above and were very kind; apparently they have a thicker skin than yours truly when it comes to receiving feedback that is less than glowing. Although, keep in mind, the feedback they received clearly came from me and not from some anonymous source.

One thing that was very important in their reply was that this was a double-blind review process. In other words, the potential presenter does not know who the reviewers are, and the reviewers do not know who the presenter is. This is a little different from what I assumed to be true, but I'm not sure how much it changes things. On the one hand, I definitely do not appreciate anonymous reviews when they are only single-blind, as is usually the case. But I'm still not quite sure what I think about the double-blind review. For example, someone might write a boffo presentation description but have a track record of being absolutely dreadful when actually making a presentation. In fact, I think I see that all the time. I would want to know that it is Joe Blow who is making the proposal, because I know that Joe Blow mainly blows smoke and we really don't need to hear from him again - or we'll blow our brains out (that's just a figure of speech, of course).

So now I've had a couple of days to cool down from all of this, but I'm still not quite sure what to think about the whole thing. One thing that I do know is that the conference organizers responded very quickly and professionally to my concerns, and I appreciate that. One more thing is certain - I don't particularly like receiving anonymous reviews where there is no chance for a rebuttal and no chance of assessing the credibility of the source. End of rant. Life goes on.

CC photo by Violator3

Sunday, December 02, 2007

E-Learning Mythbusters #5

This one is sure to tick off a few people. That's really not my intention, but I guess it goes with the territory.

Sure do wish I had a nickel for every time I've heard someone say how much harder online teachers work than those old-fashioned classroom teachers. This is the question I asked during my keynote at the ETOM conference in October. I didn't give the audience the opportunity to be on the fence; they couldn't say "well, some of them work harder," or choose any other weasel options. They had to pick a side with their hand-held clickers. True or False?

Online Faculty Work Harder

Below you see the results of the voting: 60% said yes, it's true.


Of course it's true that some online faculty work harder than the off-line faculty members. It's also true that some of the women work harder than the men, that some of the old teachers work harder than the young ones, that some of the short people work harder than the tall ones, and that some of the attractive faculty members work harder than the homely ones.

In other words, how hard someone works doesn't necessarily have anything to do with the delivery method.
  • For every one of the really hard working online faculty members I can point out one who looks at online teaching as a break from actually having to do something significant.
  • For every one of the online faculty members who creates and facilitates a highly interactive online course, there is one who does nothing more than create an electronic correspondence course.
  • For every one of the online faculty members who has a great "presence" in their online course, there's another one whose students question whether the person actually exists.
It's absolutely true that some of the very best online instructors work extremely hard at teaching their online classes. It's also true that there are many people who don't fit that description.

So, here's my take: Highly motivated, highly interactive, and highly engaged faculty work very hard – regardless of the delivery method.

It's also been my experience that the people who work very hard at teaching their online classes also work very hard at the other things they do in life and at work. That's just the way they are, and there's nothing surprising about that.

One closing thought: as I think back on my many years as a student, there was only a handful of faculty members who were really good in the classroom. There were many who were just okay, and there were some who stunk out loud (gee, a bell curve comes to mind). That small group of outstanding educators consists of the kind of people that you would want to continue learning from, year after year. Those people are few and far between. That "reality" doesn't change, and it isn't dependent upon the delivery method.