I am developing a new microsimulation module to help EM clerkship students gain a more realistic exposure to high-acuity patients. Emergent conditions, such as ectopic pregnancy, acute tricyclic overdose, and ST elevation MI, are usually cared for by senior residents and attendings. Rarely are students primarily involved in these cases.
That’s why we at CDEM are developing a microsimulation module called Digital Instruction in Emergency Medicine (DIEM). The DIEM cases are essentially online “choose-your-own-adventure” games that allow learners to manage sick patients. To be more true to life, these cases are non-linear in format: you can order an EKG, then ask a few history questions, have a nurse place an IV, give a sublingual nitroglycerin tablet, and then examine the abdomen. These cases are timed and have critical-action timepoints, meaning you may see the phrase “the attending comes in the room and orders …” if you haven’t performed a particular critical action by a certain timepoint.
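Under the hood, the timed critical-action behavior can be thought of as a simple deadline check. Here's a minimal sketch of that idea; the action names, timepoints, and function names are my own illustration, not the actual DIEM code:

```python
# Hypothetical deadline table: critical action -> deadline in simulated minutes.
# These entries are illustrative, not from the real DIEM module.
CRITICAL_ACTIONS = {
    "obtain EKG": 10,
    "give aspirin": 15,
}

def overdue_actions(performed, elapsed_minutes):
    """Return critical actions not yet performed whose deadline has passed."""
    return [action for action, deadline in CRITICAL_ACTIONS.items()
            if action not in performed and elapsed_minutes >= deadline]

def attending_steps_in(performed, elapsed_minutes):
    """Emit the 'attending comes in' prompt for each missed critical action."""
    for action in overdue_actions(performed, elapsed_minutes):
        print(f'The attending comes in the room and orders "{action}".')
```

For example, a learner who has obtained the EKG but not given aspirin by minute 20 would trigger the attending prompt for aspirin.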
Compared to the standard multiple-choice question exams commonly employed by national licensing organizations, virtual patients are a superior learning assessment tool. They are better at evaluating medical knowledge, skills, application, and future clinical performance. For example, such Assessment Virtual Patients (AVPs) are used in the form of a computer-based case simulation in Step 3 of the USMLE board exam.
AVPs are difficult to create and implement, but they provide a useful assessment tool. This article was useful because it reminded me of what comprises an ideal assessment tool: validity, reliability, and feasibility. Here are some definitions (which I always have to look up because I can never remember!):
- Content validity – Does the assessment tool test the intended subject?
- Concurrent validity – Does the assessment tool yield results comparably well with an already-established tool?
- Predictive validity – Does the assessment tool predict future learner performance?
- Construct validity – Does the assessment tool accurately test abstract concepts (e.g. empathy)?
- Face validity – Does the assessment tool appear to test its intended subject?
- Reliability – Does the assessment tool consistently give the same result?
- Feasibility – Can the tool be implemented without excessive costs and resources?
This review article discusses how AVPs may be an ideal assessment tool, especially compared to standard multiple-choice questions. AVPs come in 3 levels of complexity.
Level 1 AVP
A series of multiple-choice questions is presented to the learner, each selected based on his/her previous answer. The assessment tool is still functionally linear, but it is tailored to the learner's knowledge. The figure is hard to read, but you get the sense that, starting on the left, you progress down one of the arms to further questions based on your previous answer.
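The left-to-right branching described above is essentially a fixed question tree keyed by the learner's answers. A minimal sketch, with entirely made-up question IDs and content:

```python
# Illustrative Level 1 AVP question tree (my own example, not from the article).
# Each node holds a question and maps each answer to the next node (or None at a leaf).
LEVEL1_TREE = {
    "q1":  {"question": "Chest pain patient: first test?",
            "answers": {"EKG": "q2a", "CT head": "q2b"}},
    "q2a": {"question": "EKG shows ST elevation: next step?",
            "answers": {"Activate cath lab": None, "Discharge home": None}},
    "q2b": {"question": "CT head is normal: what now?",
            "answers": {"Order EKG": "q2a"}},
}

def next_node(node_id, answer):
    """Follow the learner's answer down one branch of the fixed tree."""
    return LEVEL1_TREE[node_id]["answers"][answer]
```

The tree is static, so every learner who gives the same answers sees the same path, which is what makes Level 1 still "functionally linear."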
Level 2 AVP
As in Level 1 AVP testing, different multiple-choice questions are presented to the learner based on his/her previous answer. The difference is adaptive testing: the questions become progressively more difficult if the learner is answering correctly. This helps distinguish the poor, average, and stellar students.
Level 3 AVP
This is the most difficult AVP to design because of the complexity of options. Instead of changing the multiple-choice questions, Level 3 AVPs actually change the patient's condition based on your previous actions. Although the figure is hard to read, you can definitely see the degree of complexity in the branching logic. Step 3 of the USMLE boards and my DIEM cases use Level 3 AVPs.
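One way to picture a Level 3 AVP is as a patient-state update loop: each learner action mutates the patient's condition, and the new condition determines which branch opens next. A deliberately simplified sketch, with made-up vitals and effects:

```python
# Illustrative Level 3 AVP state update (not the actual DIEM engine):
# the learner's action changes the patient, and the patient's new state
# drives the next branch of the case.

def apply_action(state, action):
    """Return a new patient state after the learner's action."""
    state = dict(state)  # copy so the original state is untouched
    if action == "give nitroglycerin":
        state["systolic_bp"] -= 20   # vasodilation drops the blood pressure
    elif action == "give IV fluids":
        state["systolic_bp"] += 10
    # A systolic BP under 90 pushes the case into a hypotension pathway.
    state["deteriorating"] = state["systolic_bp"] < 90
    return state

patient = {"systolic_bp": 100, "deteriorating": False}
patient = apply_action(patient, "give nitroglycerin")
```

After the nitroglycerin, this toy patient's systolic pressure falls to 80 and the case branches into deterioration, which is the kind of action-dependent divergence that makes Level 3 branch logic so complex to author.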
Round J, Conradi E, Poulton T. Improving assessment with virtual patients. Medical Teacher. 2009; 31: 759–63.