This new column, titled "Beyond 'Comparability,'" begins:
“Comparability”: an institutional strategy for assuring quality in online and hybrid courses by insisting that the content and, sometimes, the assessments be “comparable” to courses already offered on campus.
As a standard, “comparability” sounds reasonable enough. After all, this sameness makes it possible to compare the quality of learning outcomes without regard to delivery method. So long as the distant learners get test scores comparable to those of students on campus, all is well and no further thinking or oversight is needed.
In a similar vein, Richard Clark argued in a classic article that the quality of learning is unrelated to the technology used for teaching. For example, meta-analyses of huge numbers of studies of ‘presentation’ (i.e., making information from a single source available to many students) had shown that students taught by presentation learn just about as much no matter what the medium of presentation: live lecture, videotape, streaming video over the Internet, or textbook. Clark also pointed out that the activity of self-paced instruction (SPI) produces substantial learning gains over the activity of presentation, but that SPI implemented on paper produces almost as much learning as SPI implemented on computers. For Clark, using technology for teaching is like a truck delivering groceries to your home: the quality of the milk is the same whether the truck is made by GM or Ford. Technology and quality, he argued, are completely unrelated. (Clark, 1983: 445)
But Clark’s analogy is misleading, and his conclusion is the problem with the standard of comparability as well. It’s true that any teaching/learning activity can be implemented with a variety of technologies or facilities. However, for any particular teaching/learning activity, some facilities or technologies are a better fit than others. For example, SPI can be done far more easily and inexpensively with digital technology than with paper; that’s why computer-based tutorials have spread as computers have, and why paper versions of SPI have nearly disappeared. The process approach to writing spread as a pedagogy once computers became common, because rewriting is easier with computers. Course activities involving analysis of video (e.g., video recordings of science experiments in action; film clips) became more common when individual manipulation of video became inexpensive and easy.
Once the medium or tools of learning change, it also becomes easier to change who is involved in the course. Obviously distance learning makes it possible to involve not only more students but also students with specific kinds of backgrounds or needs. Equally important, the institution can make different choices about whom to use as instructors or as assessors of student work when those activities can be done online.
Changes in learning spaces and tools can also enable improvements in assessment: self-grading assessments can be administered more readily online, for example.
And the dominos keep falling. When changes in learning spaces and tools enable improvements in the activities, assessment, and people, the content and goals of the course, or course of study, can be improved, too. In the early 1980s, for example, Prof. Marvin Marcus of the University of California, Santa Barbara used a new computer lab in mathematics to begin offering the math department’s first minor in applied mathematics, consisting of several on-campus courses and an off-campus internship program in which students applied their skills to solving problems faced by community agencies. A more recent example: the Internet and cheaper international communications have made research abroad so easy, inexpensive, and common that applied research abroad has become a signature activity of Worcester Polytechnic Institute.
When universities change technologies and/or facilities (e.g., from campus-bound to hybrid), faculty ought to take a fresh look at learning goals, content, teaching/learning activities, and assessment. The change of facilities will make some goals harder to pursue than before, others easier; some teaching/learning activities easier, others harder; and so on. The problem with “comparability” as a standard is that it discourages faculty from thinking about how they might take advantage of new learning spaces and tools in order to offer more valuable hybrid or online courses of study.
Remember the old tale about the tiger that had been caged since birth. It roamed its cage ceaselessly. One morning it awoke to find the bars removed. But for a long time, the pacing tiger didn’t notice; it continued to pace within the boundaries of its vanished prison." ....
The column goes on to summarize nine different ways in which online or hybrid formats create the potential for courses that are more valuable and more effective than campus-bound course formats allow.
The column ends, in part: "If “comparability” should not be used to provide a quick and easy method of quality assurance, what should we do instead? Our answer is simple: we should evaluate online and hybrid offerings in the same way we ought to assess campus-bound offerings:
- Are we doing the right thing? Use internal and external points of reference to discuss whether the goals are valuable. This will almost always involve comparing ‘apples and oranges’ so it’s important to think carefully about what points of reference to use.
- Are we doing the thing right? Ask whether there is a good alignment between those goals, the teaching/learning activities proposed, and the facilities and technologies to be used to support those activities.
Do you think that we should abandon ‘comparability’ as a standard for quality assurance for online and hybrid programs?