Grazing - a personal blog from Steve Ehrmann

Steve Ehrmann is an author, speaker, and consultant.

Monday, November 14, 2011

Evaluating a pilot test of PBL in a virtual world

I'm an advisor to Bernard Frischer's fascinating, NSF-funded project to create a virtual version of Hadrian's Villa and test a problem-based approach to organizing undergraduate and graduate courses in that world. Students would familiarize themselves with this recreation of what the vast complex of buildings probably looked like in Hadrian's time, and do research to suggest how people might have used the complex.

In our latest advisory committee meeting, we considered how to evaluate the pilot tests.

It seems to me that, in such tests, a key stakeholder group is faculty who are mildly interested in the possibilities and who want information from the evaluation to decide whether or not to invest the time, and take the risk, of trying it themselves. In this case, these faculty would be, for example, classicists and archaeologists, not educational psychologists. Their question wouldn't be "Did these course activities improve critical thinking?" And it's hard to imagine for which elements of history they might wonder, "Did students learn this in a deeper and more lasting way than if I'd taught my class as I usually do, spending an hour on Hadrian's Villa?"

Instead they might have questions such as these:

      1. Does the environment provide a huge number of creative possibilities for assignments and class activities, or are there only a few assignments and activities that would work in this space?
      2. When one considers students with different characteristics (e.g., committed to a major in classics or archaeology? background in multi-player gaming? interest in or aversion to computers? gender?), do many kinds of students find these assignments and activities engaging? Or just a small fraction of students?
      3. When external referees look at student projects, do they think most students learned valid insights about history and culture? Or were many students deluded into imagining that whatever they created, no matter how far-fetched, was real?
      4. Were students attracted by their assignments and activities to engage in sustained, cumulative learning over the term? Or did most of them ignore it until a frenzied cramming session near the deadline?
      5. Did the student work contribute to the instructor’s research and insight? Is that a likely outcome for future instructors?
      6. Was learning to use the system a burden for many students? Was using it annoying to many students? Or was the interface sufficiently transparent?
      7. Were any satisfactory rubrics or procedures developed to grade student projects?

However, these are questions I imagined. Better yet, I suggested, find faculty who are mildly interested (not those who'd leap at trying this, no matter what), show them a draft list of questions such as those above, and say, "Now, really, what information from the pilot test could persuade you either not to try this, or to try it, depending on what the evidence showed? What are the real questions that would influence your decision to alter your course, or develop a new course?"
