Grazing - a personal blog from Steve Ehrmann

Steve Ehrmann is an author, speaker, and consultant.

Sunday, November 17, 2013

Life experience and academic learning: how should they relate?

George Washington University attracts undergraduates who are exceptionally interested in:
  1. Internships
  2. Service learning
  3. Study abroad
  4. Undergraduate research
  5. Student organizations
  6. Athletics
Should such activities be seen as competing with their academic programs? Or as fuel for academic learning?  

To answer that question, let's start with the goals of an undergraduate education, goals that should span all majors.  Below is one way to categorize the essential learning outcomes of a college education, assembled by the Association of American Colleges and Universities (AAC&U) through work with hundreds of institutions over the last decade.  (By the way, these are all interlocked; you can't teach much about 'inquiry and analysis' without applying it in one field (e.g., study of science; the arts) or another.)

I. Knowledge of Human Cultures and the Physical and Natural World
• Through study in the sciences and mathematics, social sciences, humanities, histories, languages, and the arts
Focused by engagement with big questions, both contemporary and enduring

II. Intellectual and Practical Skills, including
• Inquiry and analysis
• Critical and creative thinking
• Written and oral communication
• Quantitative literacy
• Information literacy
• Teamwork and problem solving
• Ethical reasoning and action
• Foundations and skills for lifelong learning
Practiced extensively, across the curriculum, in the context of progressively more challenging problems, projects, and standards for performance

III. Personal and Social Responsibility, including
• Civic knowledge and engagement—local and global 
• Intercultural knowledge and competence
Anchored through active involvement with 
diverse communities and real-world challenges

IV. Integrative and Applied Learning, including
• Synthesis and advanced accomplishment across general and specialized studies
Demonstrated through the application of knowledge, skills, and responsibilities 
to new settings and complex problems 

Couldn't activities such as undergraduate research, service learning, and the like help strengthen 'intellectual and practical skills,' 'personal and social responsibility,' and 'integrative and applied learning'? And perhaps engage students with some big questions about human cultures and the world?

Sounds good, but, in reality, there's a huge gap between formal learning and practical applications. 

Example: Imagine you're looking at your reflection in a bathroom mirror. You see your reflection from the waist up.  

Now imagine yourself backing away from the mirror. (It's a big room). 

As you move further away, do you see more and more of your body in the mirror until you can see your shoes? The same amount of you? Or less and less as you back up until, finally, the top of your head disappears?

When I was first asked that question, I answered incorrectly, a rather alarming outcome considering that:
  • I've walked by mirrors about 100,000 times in my long life. So that's 100,000 times when I should have noticed what happens to my reflection when I move toward or away from a mirror.
  • It's worse than that. When I took high school and college physics, I was taught the law that describes how light reflects from a mirror.  If I'd truly understood that law, I'd have known what happens when backing away from a mirror, even if I'd never seen one.
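Since the post says the law of reflection would have settled the question, it may help to spell that argument out. The derivation below is my addition, not the author's; it is the standard geometric-optics argument for a vertical plane mirror.

```latex
% Eye at height e, feet at height 0, top of head at height h,
% standing a horizontal distance d from a vertical plane mirror.
A ray from the feet that reaches the eye reflects at mirror height $m$,
where equal angles of incidence and reflection require
\[
  \frac{m}{d} = \frac{e - m}{d} \quad\Longrightarrow\quad m = \frac{e}{2},
\]
and, by the same reasoning, a ray from the top of the head reflects at
height $\tfrac{h+e}{2}$. The strip of mirror you actually use therefore
spans
\[
  \frac{h+e}{2} - \frac{e}{2} = \frac{h}{2},
\]
an expression with no $d$ in it: moving toward or away from the mirror
changes nothing, and you always see the same amount of yourself.
```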
Moral of the story:  Academic learning and real world experience, separately, can be weak teachers.   Working with people born in other countries while doing an internship or an undergraduate research project with a faculty member, for example, doesn't guarantee that the student will gain valuable academic insights into how teams work, or into other cultures.

This raises two questions for discussion:
1. How is our institution currently helping students harvest academic value from those six kinds of experience? What's working?
2. What more might we do? 

What do you think? (I'll suggest a few answers of my own in a future post)

Wednesday, October 30, 2013

One way for faculty to save time while improving student learning

My friend Ann Ferren  told me this story about a science faculty member who was continually frustrated with how badly his students wrote lab reports.  Each week a new crop of poor writing would be handed in, and he would spend way too much time writing his comments in red ink all over them. But the writing failed to improve.

Inspiration struck.

He then tried the following:
a) He told them he really cared about their writing
b) He distributed the criteria he was using in grading (this is sometimes called a 'rubric')
c) He told them that as soon as their writing earned an "A", they would no longer be required to turn in lab reports.

The results were wonderful.

One quarter of the students got "A"s on their first report. Another quarter of his students got an "A" on their second report. Pretty soon he was saving time and only coaching students who genuinely had problems with writing.

P.S.  Quite a few students continued to write the reports without handing them in; by writing a genuinely good report, those students had seen how writing a good report could help them learn from the lab work.

Friday, October 18, 2013

Does it Take More Time to Teach Distant Learners? (No, really...)

Does teaching online take more time than teaching on campus?

For many faculty, the answer seems obvious.  But is the obvious answer correct?

I've personally come across only one research study that went beyond personal impressions.  Years ago Christine Geith and Michele Cometa did a pilot study that worked closely with nine faculty who were each experienced in teaching both campus students and also distant students.

In order to compare 'apples with apples,' each faculty member interviewed was experienced in teaching both distant learners and campus learners in comparable courses.

They began by asking each faculty member which type of teaching was more time-consuming (per student): teaching distant learners or learners on campus. Everyone agreed that teaching distant learners was more time-consuming.

But when these nine faculty each thought about it some more, most decided they'd been mistaken.

Each spent about 45 minutes estimating how much time they'd spent in each of the different activities that make up "teaching a course."

  • One third concluded they'd spent more time (per student) teaching distant learners (as they'd expected);
  • One third concluded they'd spent more time teaching campus learners; and
  • One third decided it was about the same.

From this pilot study, it appears that teaching distant learners isn't necessarily more or less time-consuming than teaching students on campus. 

How can that be?

Experienced faculty can adjust elements of a course over a period of years. Eventually, time invested per student is more a result of time available.  (The same thing is true of university budgets, for example: the cost of teaching a student is determined by how much money has been available over time to teach them.  I've written more about this surprising result and its implications in Ehrmann (2010).)  When trying a completely new mode of teaching, a faculty member may feel compelled to invest an uncomfortable amount of time per student. But after a while the workload can be tweaked until it's tolerable.

The big difference in time invested, surprisingly, was from one faculty member to another.  These nine faculty varied enormously in how much time they spent per student.  

In other words, personal preferences and circumstances have a major effect on how faculty invest their time in each student.  The mode of teaching apparently doesn't.  

Ehrmann, S.C. (2010) “Improving Higher Learning by Taking the Long View: Ten Recommendations about Time, Money, Technology and Learning.”  Change Magazine, Taylor & Francis, September/October, pp. 16-22. (This article was reprinted in the January 2011 issue of Planning for Higher Education, published by the Society for College and University Planning, pp. 34-40.)

Saturday, September 28, 2013

Why are Employers Dissatisfied With Universities?

Employers want more from recent college graduates than they're getting.  Take a look at this table summarizing the abilities in which employers think recent US college and university graduates should be stronger.  (I'd guess they'd say the same of graduates with master's degrees and Ph.D.s.)

Obviously, institutions have general education programs that are supposed to teach students how to use knowledge to think in these ways.  So why do these gaps persist?  All universities would like bigger budgets and brighter incoming students. But that doesn't mean that we can't do better with our current students and our current budgets. I think three mistaken notions are interfering with our ability to graduate more capable students.

Mistaken Notion #1: In order to learn to solve unfamiliar complex problems, students must spend more than 90% of their class time and homework time studying what experts have accomplished or discovered in the past.  
Reality: Suppose you wanted to become physically stronger.  What fraction of your workout time should you spend watching exercise videos?  It does comfort many students and faculty when students spend most of their time learning from experts. But to become capable, a student must work on a startlingly large number of problems, and get good coaching as they do so.
Pick a degree program in which you teach.  How's the current balance between learning from experts and personally doing the work of the field? (One of our Teaching Day speakers, Anders Ericsson, will address this point this coming Friday, October 4.)

Mistaken Notion #2: To develop a capability such as writing, the student should take (only) one course on that capability. 
Reality: Suppose you needed to be able to lift a 100 pound weight tomorrow.  Would it have been sufficient preparation to lift weights intensively in 2010, and then lift nothing until you needed that skill in 2013? Neural science research demonstrates that mental capabilities decay with disuse, just as physical strength does.
That's why some academic programs at institutions like Stanford are now using a 'spiral' approach to designing courses of study.  Crucial capabilities are periodically developed, a bit at a time, in course after course. Each time the student returns to a concept or way of thinking, it's developed further, applied in a new context, and at a greater level of sophistication.

Mistaken Notion #3: To succeed in X after graduation, study only X.  (For example, if you want to become a successful chemist, it's sufficient to take only chemistry courses, plus direct prerequisites such as physics and mathematics.)  
Reality:   Notice how many of those gaps about which employers are grumbling are at least partly outside the fields where faculty in a department do their research and publishing. For example:
  • 89% of employers thought that recent college graduates are too weak in written and oral communications.   (To test whether your students are getting enough practice, coaching and instruction in communication, ask some of your employer alumni to judge a random sample of written and oral communications by students who are soon to graduate.  Our Assessment Office can help you with this.  Do your judges see communications capability as a strength of your students?)
  • 81% of employers believed that recent college graduates are too weak in critical thinking and analytic reasoning;  79% were dissatisfied with their ability to apply what they've learned to real world problems; 75% asserted that recent graduates are too weak in complex problem-solving. Real world tasks and jobs usually require knowledge from their major used wisely in combination with other kinds of knowledge.  For example, graduates need to be able to organize in teams, and deal with dysfunctional teams.  How many courses in each degree program offer students practice, instruction and coaching in that area? How would a panel of your alumni rate the skills of current graduates in that area?
  • 75% of employers wanted more from recent graduates in ethics. How many courses in your program contribute to a student's ability to reason and act ethically? 
The Association of American Colleges and Universities' LEAP program has been collecting examples of programs that take a different approach to developing these essential learning outcomes (capabilities). And accreditors (and perhaps the Feds) seem to be gradually moving toward paying attention to how we teach and judge (assess) students' development in these areas; the Lumina Foundation's Degree Qualification Profile is intended to be a tool for such assessment and perhaps regulation.

I suspect that university graduates, their employers, and accreditors will soon be paying more attention to what universities like GW are doing to help graduates do well in all these dimensions of a liberal education, no matter what major or graduate program they're in.

Do you agree? Would the changes advocated by AAC&U and others be a step forward for your program, or a really bad idea?

Sunday, September 22, 2013

What Kinds of Teaching Improvement are Most Important?

Which kinds of teaching improvement are most important for the Teaching & Learning Collaborative (TLC) to support over the coming years?  To advance that discussion, I'll suggest three interdependent goals now, and a couple more in an upcoming post:

1. Support learning-centered teaching.  There's a lovely cartoon showing a boy with his dog. The boy is talking to his friend, and boasting that he's taught Spot to whistle.  The friend responds dubiously, "I don't hear him whistling."  The first boy retorts, "I said I taught him. I didn't say he learned it!"

I hope we can help an increasing number of faculty teach in a way that's guided by the actual learning that occurs in their academic programs.  Faculty who understand teaching in this way ('the learning paradigm') believe that virtually every student is capable of excellence, if teaching can guide and stimulate them appropriately.  Because it's not obvious at the start what needs to be done to help students achieve excellence, there's a certain amount of trial and error every term: the instructor tries something and, if it works for some students, does more of it; if it doesn't work for some students, tries something else.

  • The alternative view of teaching, sometimes called the 'instruction paradigm', is reflected in the comments of Spot's owner.  In this view, teaching and learning are independent activities.  If you organize and present the content clearly, then you've taught it.  If the students learn it, that's entirely to their credit. If they don't, that's on them, too.  
Of course, it's not simple for faculty (or students) to see whether and how the student is learning, which leads to a second goal for TLC and units like it.

2. Support evidence-guided teaching. If teaching is to be guided by actual learning, faculty need to gather evidence of how the learning process is going.

More easily said than done.  For example, if the faculty member assesses the wrong thing --  testing whether the student can remember an expert's analysis when the real goal is to help students learn to analyze for themselves -- then scores or grades will mislead both the instructor and the student.

Testing what students have learned is necessary but not sufficient.  Analogy: people learning to bat in the game of baseball. Measuring their batting average, no matter how precisely, won't help them learn to hit better.  A very different kind of feedback is needed for that, e.g., slow motion video of how they swing at a pitch.  Faculty need to learn how to provide two very different kinds of feedback, for themselves and for their students: evidence of what the student has learned and evidence of how the student is learning.  Few faculty have received any training in the many ways these things have been done.  We can help with that.

3. Support faculty collaboration to improve learning.   I mean "faculty collaboration" in two different ways.

The first is illustrated by a research finding about composition courses.  In many evaluations of learning in composition, students are asked to write a composition at the beginning of the term and a second composition on a similar topic at the end of the term.  External graders then assess each paper without knowing when it was written.  The papers written at the beginning and at the end of the writing course often get similar grades. This does not mean that the students learned nothing about writing: over two or more courses, the essays do measurably improve.  The important lesson: the kinds of capabilities useful in life are often so complex and personal that one course can't create improvements large enough to measure.  As a senior once told me about something he'd learned, "I don't think I could have learned how to think that way in any one course, but, over the years, it gradually sank in."

That's why capstone courses and major projects in upper division courses can be so important: as sources of insight for faculty teaching lower division courses on where they're succeeding and where they need to improve.  

The second important kind of collaboration is between faculty and others, e.g.,  instructional designers, assessment experts, publishers and other materials developers outside the university.  I'll choose just one example: instructional designers in a number of departments at GW are currently testing ePortfolio products that faculty can use, individually and as teams, to gather and analyze evidence of student learning.

So those are my first three suggestions for where we should focus our support: learning-centered teaching, evidence-guided teaching, and collaboration to improve learning.

Over the next 5-10 years, what (additional) kinds of teaching improvement do you think that units like TLC should support?

Monday, August 19, 2013

Your eyes and your class; ProfHacker

There are some good blogs and web sites about teaching, and I'll discuss one of them from time to time in Grazing.

ProfHacker is produced by a team of faculty from a number of different colleges for the Chronicle of Higher Education.  The August 19 entry offers useful tips for how to look at your class, based on which of your eyes is dominant (are you more likely to notice what's going on to your right, or to your left? And, once you discover which eye is dominant, how might you adjust?)

Many ProfHacker contributions have to do with practical suggestions for using technology tools in your research (e.g., managing bibliography, visualization) and also in your teaching.

I use a web site called Feedly so that I can see new ProfHacker entries, and other useful blog posts, as they appear. (I learned about Feedly from ProfHacker.)

Monday, July 29, 2013

Two answers to "What is Teaching?" and their implications for debating MOOCs

Years ago, I had a vigorous debate with a colleague about what good lectures could accomplish.  Frustrated, we paused and exchanged our definitions of 'lecture.'  It turned out we had entirely different things in mind. Once we understood our disagreement about that definition, it turned out we agreed on everything else.

Scholarly observation and empirical research agree: in many disciplines, faculty are split between two opposing views of what 'teaching' and 'learning' mean.
  • Instruction-centric: Folks holding this view believe that knowledge exists independent of students or teachers. They think it's the teacher's job to organize and transmit it (and test it). The student's job is quite different: to learn it.  If a faculty member teaches and 10% of the students learn the material, that shows the faculty member was teaching successfully (otherwise no one would have learned).
  • Learning-centric: Other faculty believe that learning is more like developing a physical capability (like learning to ride a bike) than it is like storing blocks of knowledge in a mental trunk.   For these folks, teaching means doing whatever it takes to encourage and help a student become mentally stronger in their field.  If a student doesn't learn, that means the faculty member didn't successfully teach.   
In my experience, faculty in each group believe the other group's definition is not in the best interests of most students. And they instinctively reject any 'evidence' advanced by the other group: experience shows that the other group is wrong, so their evidence must be flawed.  (I'm not above that fight: I believe that decades of rigorous academic research demonstrate that learning-centric teaching produces far more capable students than instruction-centric teaching does.)

We have no statistics about how common each point of view is, but most faculty guesstimate that at least half their colleagues teach in an instruction-centric way.

MOOCs are built on the fault line between these two tectonic plates.

Imagine what each group of folks thinks about MOOCs that are essentially a sequence of video lectures, readings, some quiz questions, and an online opportunity for students to talk with each other about the course.

For instruction-centric folks, that MOOC design captures much of what's important about a college class -- the faculty member's inspiring, clarifying presentation -- and even improves upon the campus norm in at least one respect: the student can watch key parts of the lecture as many times as they need to.

In contrast, for learning-centric folks, that MOOC design ignores most of what could help large numbers of students become mentally skilled (e.g., carefully designed homework assignments that challenge and help students develop higher-order thinking skills in their fields; coaching and discussion that power further improvements in what students are able to produce).

Moral of the story: If two or more people are going to discuss MOOCs, each person ought to start by describing what great teaching looks like, and how it guides student learning.  Perhaps everyone will agree. More likely some participants will have strikingly different views than others. This quick exchange won't change anyone's mind about good teaching, of course.  But it should give each participant essential perspective for understanding other folks' assertions about MOOCs.   

Sunday, March 24, 2013

Should Students Discuss How to Learn and Study?

As I look back on my own days in college, I'm embarrassed to remember how many things I believed about learning that got in my way!

For example, in my freshman advanced calculus course, I assumed that, so long as I could follow what the professor said in class, I was doing fine.

And I never considered going to a prof's office hours. If I wasn't doing all that well in the course, I was hoping he wouldn't notice until I could somehow catch up!

And, really, I didn't think much about learning (that is, improving what I could do, what I could see, and who I was).  Instead, I was more focused on grades and credits.  And even more on my extra-curricular activities (writing a book, serving in student government),  where I could see that I was getting things done, and that I was getting stronger.

I wonder how much more I would have learned if I'd had different beliefs about learning and how to learn...  (Yes, I did go on to get my MIT Ph.D. but I don't seem to remember much from my undergraduate courses.)

That's why I've been assembling a list of common student beliefs about learning.  And I turned some of them into a (first draft) questionnaire that an instructor might administer to students at the start of the semester.

After students answer all the questions, the instructor would start with item #1 and ask for a show of hands (or use clickers and ask them to click). Who strongly agrees or agrees?  Who strongly disagrees or disagrees?

If the class is split on a belief, students would gather in small groups, each of which has at least one student on each side of the question. They'd each summarize their own reasoning or experience.  Then poll the students again to see if there's been any change in their opinions.

Go through the cycle again for each item on the form.  (My first draft has 10 questions, but I can imagine that a faculty member might prune it to, say, 5 items before giving it to students.)

If the course features online discussion, the instructor might administer the questions online and post the results.  Each week the students might discuss one or two of the items online where there had been significant disagreement.

What do you think?
  • Does it make sense to talk with your students about learning and studying at the start of the semester?  
  • I know some instructors just tell students how to study; what do you think of asking them to debate the questions among themselves? 
  • If you like the general idea, what do you think of these particular questions?
If you're at GW and you want to talk, or to connect with other instructors who might also be interested in trying this in the summer or fall, please let me know.

Monday, February 18, 2013

GW Launches 2nd Gen Online Degree Programs

George Washington University has recently launched a second generation suite of online master's programs called the George Washington Digital Community.  I call them  'second generation online programs' because they take advantage of online media to be, in some significant ways, better than a campus-bound program could be.  Patty Dinneen and I wrote "Beyond Comparability" last year to describe a dozen ways in which online and hybrid programs could be qualitatively different from, and superior to, campus-bound programs.  (We didn't mean superior in all ways; simultaneously there will be ways in which those same campus-bound programs would be superior to the online programs.)  Even though the online program would have somewhat different goals and content from a campus-bound counterpart, it's possible to assess their quality, as we suggested.

The first 2Gen online MBA program I noticed, years ago, was "OneMBA" - jointly designed and offered by five institutions on four continents, it's an Executive MBA in Global Management that teaches courses in which students work in international teams.  And twice a year, students and faculty meet at different spots around the globe to do research (never a spot where one of the campuses is, by the way).  So it's really a hybrid master's program - so much the better.

Now GW's Digital Community has been launched.  It will soon include master's degree programs in four areas: a Master of Business Administration, a Master of Science in Information Systems and Technology, a Master of Science in Project Management, and a Master of Tourism Administration. The new GW online MBA began teaching its first cohort of students last month.  It takes advantage of online and multimedia to transform how students learn, when compared either with on-campus teaching or with the linear, text-based, asynchronous world of the 1Gen online program:
  • The spine of each course is organized around a sequence of high production value, brief videos; these videos seem to strike a great balance midway between the crude talking heads that characterize films of instructors talking to their classes and the expensive video that might be produced for television broadcast.  The camera is positioned to make the learner feel part of a small seminar discussion.  The resolution is crisp.  I know the faculty I saw, and the process really brought out their personalities. These were no wooden talking heads. 
  • At a variety of points within each brief video segment, students can branch off to other resources and tracks, from narrated slideshows and animations that go deeper into a topic, to readings, to other videos, to practice problems, and other web sites. This video clip, after some initial generalities, paints a pretty good picture of this articulate approach.
  • Students can also pop into an asynchronous discussion of an idea or question raised in the video, and then pop back and continue the video.
  • These online courses are paced by extensive use of scheduled real-time online discussions among students and instructor.
  • Videos of recent alumni are being filmed; the alumni explain why students should work hard to learn a particular idea or technique, based on experiences the alumni have had using it since graduation.
  • The architecture of the system will enable faculty to gradually add more options and tracks to courses, if they wish, so that students with differing goals or needs can get just the instruction or assignments they need, within the same course.
  • The courses are set into a larger digital community that should help students make lasting friendships and intellectual relationships with one another, within and across courses.  Students can also participate in the Digital Community's co-curricular program, which includes (1) First Sundays, a variety-show monthly webinar, featuring guest speakers and reports from various student groups and organizations; (2) Digital Roundtables, periodic small-group facilitated discussions with business and policy leaders from around the world; and (3) 1+1 Mentor Program, matching students with GW alumni mentors with common professional interests and background.
If this works as planned, it may well have a long-term and perhaps unexpected benefit.  I suspect that building community in this way may also lead to deeper bonding among students, and with faculty and the institution.  The cohorts of students may feel a bit less like GW customers, and more like (generous) GW alumni.  Give us a decade or two, and we'll see.

Wednesday, February 6, 2013

What does "MOOC" mean? No, really, what does it mean?

Did you ever get into a frustrating argument before realizing that you and your opponent were each using the same word but with unspoken, conflicting definitions?

Some years back, for example, an argument about the educational outcomes of lecturing came to an abrupt halt when my colleague and I realized we were each using a different definition of "lecture." I meant an unbroken stream of faculty talking, and students listening. My colleague meant 'everything that a faculty member does in a classroom.'

I've called these terms confusors. Some years ago, I developed a web site where I listed confusors that can derail discussions of higher education, words like "technology," "classroom," "assessment," and "teaching," each of which has widely-held, conflicting definitions.  Remember: confusor definitions are rarely right or wrong, even though different people may each deeply believe that their definition is the only correct one.  Confusors are like icebergs - the conflicting definitions are invisible and can rip the bottom out of a conversation.

The newest confusor in higher education dialogue is "MOOC." MOOC is an acronym for four words - Massive Open Online Course - and all four words are confusors.

"Massive" can mean that hundreds, perhaps tens of thousands of learners are registered.  But "Massive" can also mean that the course has been designed to handle huge numbers of users (just as any book can potentially be a best seller).

  • Another confusor, hidden deeper, is "registered."  Some people sign up for 30-40 MOOCs at a time, but never visit any of them. Others may attempt a first lesson, realize that this isn't something they want to do, and depart. It's hard to consider either group as "dropouts" from the MOOC.  But, on the other hand, the cited number of registrations may vastly overestimate the actual usage of the MOOC.

In discussing MOOCs, most people define "Open" as "free." But for others, "open" means inexpensive: students may choose to pay a price for textbooks, assessment, and coaching. And for still other folks, "open" means that the materials are not only free but also can be freely used and adapted by anyone: open source.

"Online": for some people MOOCs are something new, a tiny and very specific type of online learning. Others use MOOC as a virtual synonym for "online learning."

"Course" for some people means that the MOOC's content is comparable to a campus course taught by the same instructor. But for others, the MOOC's length and content may be quite different from anything offered on campus, far shorter than a college course.

People also silently make dramatically different assumptions about learning in MOOCs. Some folks assume that MOOCs combine videos of all the lectures from a campus course with mostly the same instructional materials used on campus. These same folks may assume that dropout rates will be high, and learning relatively shallow. Meanwhile, other folks assume that MOOCs should become engaging and effective; they may assume a combination of online tutors that respond to differences in student needs and performance; a wide range of online source materials; illuminating videos that look like public television; self-grading quizzes on complex ideas; and carefully designed procedures that give learners valid feedback on sophisticated tasks. These folks may expect high-quality learning from MOOCs but also expect that considerable time and money will be required to achieve that goal.

Threading these conflicting assumptions together:

  • Some people are leaping into MOOC development because they silently assume that MOOCs are so much like campus practice that they will be inexpensive and noncontroversial. Tens of thousands of learners will use the materials because the MOOCs are online and free, they assume. These folks may also assume that copyright of published materials will be maintained.  Others, who see MOOCs in that same way, are deeply worried; they see MOOCs as a massive step backward in pedagogy, educationally ineffective for all but a handful of exceptionally talented and self-motivated learners and a potential blot on their institution's reputation. 
  • Other folks silently assume that MOOCs will quickly evolve to be quite different from traditional campus courses: shorter, fostering rich peer-to-peer collaborative learning, and providing sophisticated feedback from sources other than the instructor. Such MOOCs may require extensive R&D, and will be expensive to develop. Some of these folks also see MOOCs feeding into an emerging, open source universe of interconnected, interoperable instructional resources. Or they may see "MOOCs" becoming not free but still dramatically less expensive than conventional education.

Have you encountered other ways in which the word "MOOC" has become a confusor?

Sunday, January 6, 2013

Disruptive Innovations and MOOCs

We've been bombarded with news stories about the disruptive threat of Massive Open Online Courses (MOOCs).

While MOOCs can be useful, they don't seem likely to be disruptive.  To understand why, let's take a closer look at the definition of a MOOC and then at what we know about disruptive innovations.

What's a "MOOC?"

There are at least five definitions of "MOOC" in use:
  1. A single course that can attract thousands of learners ("Massive"), is completely free ("Open"), and is online ("Online Course"). This was the initial definition of MOOC and is the one I'll be using in this blog post. 
  2. A course that is designed for massive numbers of students, is free and online but doesn't actually attract more than a few dozen students. If an institution offers twelve MOOCs to the world, and only one has more than 20 students, how many MOOCs was that? 
  3. A course that is comparatively inexpensive (the student must buy a book and pay to be assessed or certified), is online, and for which massive enrollments are hoped.  (Personally, I call these MIOCs - Massive Inexpensive Online Courses).
  4. The materials for massive courses (e.g., their presentation materials, texts, tutorials, online discussion tools, auto-graded tests, etc.) which are then adapted by different colleges in order to offer their own, tuition-based, small enrollment courses. (I'll be writing a future blog post about the possibilities and hazards of this model, which resembles the telecourse strategy that was popular a couple decades ago.)
  5. Any course offered with a learning management system that was originally developed to support MOOCs (definition #1) even when the course itself is a conventional online course: small enrollment, not at all free.
That said, let's just focus on the original definition: #1.  

What kind of education does a MOOC offer?  

Think of a self-help book or a set of instructional videos and you'll get an idea of what instruction without teachers can look like. Or imagine inviting anyone interested in learning to be a nurse to come to a football stadium to watch instructional video about physiology on the Jumbotron. A MOOC is more than just instructional material, so imagine that your learners also have machine-graded tests and the option to talk with other students. And imagine that learners are also given guidelines to use if they choose to assess one another's work.

What happens when people try such an educational experience, in the football stadium or online?

Not surprisingly, most of those who sign in take a peek and then walk away. Only a tiny fraction are motivated enough and prepared enough to work their way through all the materials. 

Is it possible that this experience does more than teach them to cram for a test? Will that experience help them act competently and wisely in the world? In part, that depends on the subject.   

  • Could someone who studied Algebra 1 that way be adequately prepared to take Algebra 2?
  • Could someone who took enough French MOOCs think and speak in French sufficiently well to live in France?  
  • Would someone who learned medicine that way be trustworthy as a nurse in an operating room?
It seems to me that, like self-help books and instructional videos, MOOCs have a role in helping people learn. But it's likely to be a different role than instructional forms that involve more intensive coaching and interaction with experts and peers.  Self-study has its limits.

MOOCs are attracting so much attention because so many people see the tens of thousands of learners signing up, and see a tidal wave, a disruptive innovation that can displace or destroy more expensive forms of learning.  Online music once seemed a crude curiosity, but eventually online music destroyed most record stores.  Are MOOCs the first step in that kind of disruptive innovation?

What Makes an Innovation Disruptive?

The term “disruptive innovation” was popularized by Clayton Christensen. Christensen demonstrated that, while only a tiny fraction of innovations have the potential to destroy whole industries, such disruptive innovations often have similar life histories.

Here's a summary of that common narrative thread.  
  1. The story begins with a dominant product and the industry creating it. Then an innovation appears which, compared with the dominant product, is much less expensive but so bad that only someone who can't afford the dominant product might want to use it. People producing or using the dominant product don't see the innovation as a threat. (Those of us with long memories might think of the computer industry of 1980 and the early microcomputer kits, of records vs. CDs, or CDs versus online MIDI music.)
  2. The innovation does sell, and as it improves and appeals to more people who aren't using the dominant product, income increases. The income funds still more product improvement. Now the product begins to appeal to marginal customers of the dominant product. The quality is still far lower, and the marginal customers aren't going to be missed. (Imagine if a company came up with an inexpensive way to teach Slavic languages and other less commonly taught languages, good enough that many people became accustomed to learning languages that way instead of taking college courses. Most universities wouldn't consider that a threat, even though they were losing some credits they might otherwise have sold.)
  3. Often at a relatively early stage, people in the dominant industry notice the growing innovation and try incorporating it. But the requirements of the innovation are so different that it fails to thrive inside the dominant organizations, even while it is growing outside. (With MOOCs, for example, almost all the 'teaching' effort comes before the students arrive and it requires a team; once students begin, the MOOC is expected to run itself. But universities don't even assign faculty a teaching assistant until the term begins.)
  4. The innovation's expanding stream of revenue and customer acceptance enables continuing improvement and wider markets. This evolving innovation now appeals to some of the customers in the dominant part of the industry (imagine that the grandchildren of MOOCs could somehow provide general education more effectively than survey courses taught by mathematics and physics departments). Those new customers help finance even more improvements in the new product.
  5. The innovation continues to improve and capture new markets. The "disruption" happens when the dominant industry has lost so much revenue that it can no longer support itself. Tower Records goes bankrupt.
The story of disruptive innovation might repeat itself, with the conqueror eventually conquered by a still newer innovation. 

Are MOOCs Likely to Disrupt Anything?

MOOCs don't follow Christensen's narrative: they're free and have few options for generating revenue: advertisements sold for exposure to students, and student data used by advertisers. That revenue doesn't seem likely to build to levels that would pay for keeping the first generation of MOOCs up to date, improving the state of the art, and paying for development of a second, larger generation of MOOCs.  

Several years ago, everyone was talking about Second Life as an educational innovation of huge importance. Hundreds (thousands?) of educational institutions, including GW, invested in creating 'campuses' in this virtual world. That predicted disruption is one of dozens from just the last few decades that never happened. There have certainly been some transformative uses of technology that, even now, are subtly reshaping higher education teaching and economics. But none of them fit Christensen's model of disruption.

Can MOOCs be Useful?

Yes, in at least three ways.
1. Sharing it forward. Universities are the beneficiaries of charity from countless other people: alumni giving gifts, our own teachers, our colleagues, our neighbors... MOOCs are a way of paying it forward to learners we might not otherwise ever see, some of whom may be very talented and very motivated - perhaps even more talented and motivated than most students we teach every day. It's the same motivation that might prompt a faculty member to work with his or her undergraduates to write, or improve, a free online open textbook like this one. There are already hundreds of such instructional resources and the number is growing rapidly.

2. Marketing a university's strengths to a new population of learners. Suppose GW wanted to begin offering a hybrid degree program in bio-engineering to students in Brazil. But many students and educators in Brazil might not know about GW's expertise in this field. So, to spread the word, we might also try creating the world's best Portuguese MOOC in bioengineering and take care to market it in Brazil. In our MOOC's materials, we would interleave information about our tuition-based program in Brazil, the one that involves face-to-face interaction, coaching, assessment, and certification from real experts. By striking that contrast, we'd reduce the chance that Brazilian learners would assume that GW offers them only online instruction without teachers.

3. Testing innovative educational materials or methods in a big way. I suspect this use of MOOCs is being oversold, though it has real potential. Imagine testing materials for a freshman course by first trying them in a best-selling self-help book on the same subject. That would certainly help you see if the materials are exceptionally engaging and easy to use. And it would yield lots more student data than a campus course enrolling only a few dozen students. But materials that could have worked quite well on campus might fail in a MOOC, where learners have less coaching and are likely to give up far more quickly. If the goal is to use a MOOC to perfect materials for campus use, the hazard is what researchers call a 'false negative.' To put it another way, materials that might require, say, 10 hours of effort before they perform adequately on campus might require 50 hours of development work in order to perform equally well in a MOOC. (The Annenberg/CPB Project routinely invested $2 million in a course in order to produce materials that would be adequate for distant learners.)


So MOOCs (free online courses) aren't by themselves the harbinger of disruption to the norms of higher education? What about Massive Inexpensive Online Courses whose materials can be used by individual colleges to teach small classes with less faculty labor? Is that a good idea? Potentially disruptive? And what about more conventional online education? How disruptive might that become? I'll address those questions in future posts.

In the meantime, do you agree with the picture I've painted of MOOCs? Do you have different notions of what 'disruption' means?