February 5, 2007
Simulations, part 2: a how-to guide
Among the Carnegie study's many recommendations, perhaps the most sweeping and difficult to implement is that legal education “integrate” rather than “add” elements of skill and values. (Tacking ethics courses onto the MBA curriculum, the authors note, is an example of a failed “additive” strategy.) It’s a smart way to bypass the common refrain that law schools aren’t “trade schools”: when properly integrated with traditional law school goals, skills-oriented learning needn’t lack for intellectual rigor or substantive relevance.
Simulations offer an ideal method of integrating skills- and values-oriented learning with doctrinal courses: they operate from a rigorous intellectual framework, they offer consistent and relatively predictable learning goals, and they provide objective evaluation criteria. After the jump I'll describe my own experiences developing training programs for practicing lawyers, broken down into the four steps of research, establishing a framework, creating the experience itself, and developing evaluation criteria.
[see also Simulations, part 1]
Step 1: Research
Intellectual rigor begins, of course, with research – not at the library, but in the setting to be emulated. Ideally, this might involve an ethnography (especially when the practice is not well-understood) or an “expert-novice” comparison.
In a course I helped develop a few years ago targeting new legal aid attorneys, we sat down with about five well-respected supervising attorneys and asked them to identify key skills that their new lawyers lacked. One such skill turned out to be conducting the initial client interview. Unable to undertake field research (limited time and ethical barriers often make that hard to do), we relied instead on our informants' collective wisdom. Through discussion and reflection, we arrived at the components of interviewing that were particularly troublesome for novice practitioners. One of them was structuring the interview to spot the viable legal issues present.
Step 2: Establish the framework / learning goals
We then set out to identify exactly what proficient attorneys do differently than novices during the initial interview. (Again, we did this through discussion with our expert panel, not field research, which is considerably less resource-intensive but also potentially misleading or incomplete. In this case it turned out fine, I think.) It turns out that, towards the latter stages of the interview, experts use a technique we dubbed “funneling”: asking a series of questions that narrow from open-ended to yes/no with the goal of confirming or rejecting possible avenues of legal action. Novices, by contrast, tend to “sieve”: ask disconnected questions without a strategy in place, often going in circles.
Step 3: Create the experience
This is what most people would consider the fun part of the design. Here the designers create and assemble the scenario itself: facts, rules, characters, motivations, and all of the other elements of story that give the simulation realism and vitality. Probably the easiest way to accomplish this is to base the scenario on a real-life story; most of the work then becomes a combination of gathering facts and removing distracting elements. The key criteria for whether an element belongs are whether it advances the learning objectives (substantive and skills) and whether it adds to the credibility and fun.
It's hard to figure out how much detail should go into a scenario. Unsurprisingly, we found that students who lacked substantive knowledge (e.g. recent college graduates with no legal background) struggled with the simulation as a whole when it assumed familiarity with certain laws, however generic and abstracted. (Students unfamiliar with the law didn't know what information to seek or why.) At the same time, a colleague who creates curricula for Harvard's Program on Negotiation let me in on a trick: when teaching pure skills, too much substantive familiarity can lead participants to reject the “reality” of the simulation. Thus, running real estate takeover scenarios with our poverty lawyers helped them focus on the skill rather than nitpick inaccuracies in the scenario. (They did complain that the alien nature of the scenario made the specific skills less transferable, which is something we'll have to investigate.)
Returning to the idea of “integrating” skills with knowledge, I'd like to emphasize that the intellectual rigor of a simulation does not turn on this stage alone. I have seen some complex simulations that deploy sophisticated facts, characters, etc., but lack the framework that's developed during steps 1 and 2. Without that framework in place, students will not know what skills they should be learning, and instructors will not know how to provide consistent feedback to help get them there. At best, a rigorously constructed scenario lacking a skills framework serves as a rather expensive, albeit fun, fact pattern.
Step 4: Develop evaluation criteria
Objective evaluation standards are the final proof of a rigorously developed learning experience. In simulations centered on the substantive topic, one area of evaluation might involve, for example, measuring the outcome against some “optimum” and then reviewing the contributions or mistakes each participant made in getting to that outcome. (In many of HPON's simulations, for example, the debriefing guidelines list various options that players might come up with, providing a measure of the total value that the negotiators might have created in the process. New learners of the HPON framework are often surprised by how much value they leave on the table, showing them the way to more advanced negotiation skills.)
When it comes to evaluating the skill demonstrated by each participant, the framework developed in steps 1 and 2 returns again. A well-defined contrast between “novice” and “expert” provides a natural yardstick against which the instructor can mark the student's progress. Often these measures are qualitative, using a rubric to gauge skill attainment. For example, one rubric might examine how well the student asks follow-up questions: an “expert” rating might correspond to “Questions follow a defined strategy while also accommodating new information provided by the client,” while a “proficient” rating might correspond to “Questions generally follow a defined strategy, but the attorney loses control when unexpected information arises.”
Conclusion: This is hard work
I hope the preceding description of how one goes about developing a simulation that truly integrates legal skills with substantive knowledge helps dispel the notion that focusing on skills in the law school setting will degrade the curriculum's intellectual content. Taking a research- and fact-based approach to simulations, and specifically learning which skills expert lawyers actually deploy in their practice, would infuse the endeavor with the credibility needed to pass muster with a rightfully suspicious law faculty.
In my next post I'll touch a bit more on how simulations might fit into law schools on a systemic level.
-- Gene Koo