In an upcoming issue of the Academic Emergency Medicine journal, there is a glowing review of a collaborative project that I was involved in. If you are a medical student about to do an EM rotation, or you serve as a faculty advisor for an EM medical student, feel free to distribute this EM Clerkship Primer (FREE book!) for them to read. [Update 11/21/13: New link for free download PDF] This was the first official project to come out of the Clerkship Directors in Emergency Medicine (CDEM). It was written by 22 established medical educators in EM, led by our fearless leader and editor-in-chief, Dave Wald. Go, Dave!
Thanks to Dr. Rob Rogers' podcast on the EM-RAP Educator's Edition series, I learned of one of the funniest publications EVER in a medical journal. It was published on April 1, 2009 in JAMA. The article focuses on teaching medical students an essential skill set: how to survive "pimping".
Pimping traditionally occurs when an attending physician poses a difficult question to a learner in a public forum, such as board rounds or in the operating room. As a student or resident, you know that this will happen during your training, and you should be prepared. If you think of pimping as a form of battle, you will need a good defense, and you should mix it up to be successful.
I am developing a new microsimulation module to help EM clerkship students gain a more realistic exposure to high-acuity patients. Emergent conditions, such as ectopic pregnancy, acute tricyclic overdose, and ST elevation MI, are usually cared for by senior residents and attendings. Rarely are students primarily involved in these cases.
The authors in this study developed and used an instrument to help measure the methodological quality of quantitative studies in medical education. This instrument, the Medical Education Research Study Quality Instrument (MERSQI), was used to show that scores were predictive of manuscript acceptance into the 2008 Journal of General Internal Medicine (JGIM) special issue on medical education.
What is the MERSQI instrument?
The 10-item MERSQI instrument can be divided into 6 domains, each with a maximum score of 3 points. Numbers within parentheses denote the number of different items assessed in that domain. The maximum score is 18.
- Study design
- Sampling (2)
- Type of data
- Validity of evaluation instrument (3)
- Data analysis (2)
- Outcomes
The specific scoring criteria are shown in the table below, from an earlier publication on the MERSQI instrument (Reed DA et al. Association between funding and quality of published medical education research. JAMA. 2007;298(9):1002-1009). That earlier publication showed that a high MERSQI score correlated with successful funding of the study.
- A randomized controlled trial conducted at over 2 institutions
- A response rate of >75%
- Data assessment was done using an objective measure (not self-assessment by study participants)
- When using an evaluation instrument, internal structure validity, content validity, and criterion validity are reported.
- The data is analyzed appropriately and goes beyond just descriptive analysis.
- The outcome measure goes beyond simple survey measures and focuses instead on patient/health care outcomes.
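The scoring arithmetic of the instrument is simple: six domain scores, each capped at 3 points, sum to a maximum of 18. A minimal sketch in Python (the example domain scores below are purely illustrative, not the actual MERSQI item values):

```python
MAX_PER_DOMAIN = 3  # each of the 6 MERSQI domains is worth up to 3 points
NUM_DOMAINS = 6

def mersqi_total(domain_scores):
    """Sum the six MERSQI domain scores (each 0 to 3); max total is 18."""
    if len(domain_scores) != NUM_DOMAINS:
        raise ValueError("MERSQI has exactly 6 domains")
    for s in domain_scores:
        if not 0 <= s <= MAX_PER_DOMAIN:
            raise ValueError("each domain score must be between 0 and 3")
    return sum(domain_scores)

# Illustrative only: a single-site, cross-sectional survey study
# would earn low marks across several domains.
print(mersqi_total([1, 1.5, 1, 1, 2, 1.5]))  # 8.0
print(mersqi_total([3, 3, 3, 3, 3, 3]))      # 18 (a perfect score)
```

This also makes it easy to see why perfect scores are rare: a study must max out every one of the six domains simultaneously.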
Getting a perfect score of 18 is extremely difficult, as evidenced in this article. Among the 100 manuscripts submitted to JGIM, the mean MERSQI score was 9.6 (range 5-15.5). Most manuscripts were single-group cross-sectional studies (54%) conducted at a single institution (78%). Only 36% reported validity evidence for their evaluation instruments.
The mean total MERSQI score of accepted manuscripts was significantly higher than rejected manuscripts (p=0.003).
- Mean score for ACCEPTED manuscript = 10.7
- Mean score for REJECTED manuscript = 9.0
I have found the MERSQI scoring instrument extremely helpful when designing my own educational research studies. Methodologic rigor is almost always the Achilles heel of rejected educational research submissions.
Reed DA, Beckman TJ, Wright SM, et al. Predictive validity evidence for medical education research study quality instrument scores: quality of submissions to JGIM’s Medical Education Special Issue. J Gen Intern Med. 2008 Jul;23(7):903-7.
Bedside teaching is a unique educational skill that academic faculty are often assumed to simply know how to do. In the ED, it is especially difficult to do well: crowding and unexpected, time-sensitive clinical issues create distractions and general chaos. In my experience, these unpredictable clinical issues negatively impact bedside teaching. Faculty should therefore be flexible and knowledgeable about the basic tenets of bedside teaching.
One shared experience among all emergency physicians is the "handoff" or "signout" of patients at the end of a shift to the oncoming physician. A recent article in Annals of Emergency Medicine explores and explains how this process can often lead to delays and errors in patient management. Just envision ED handoffs as a high-stakes version of the game of Telephone you played as a child.
U.S. medical students traditionally spend the first 3 years of training in a pre-determined curriculum. In their 4th year, however, students have significant flexibility in how they tailor their time. For this last year before residency, they shift from a learner-centered curriculum to a patient-centered curriculum. There is a shift in mentality from “I am here to learn as much as I can about medicine” to more of a “How do I best prepare myself for working in a hospital in my chosen specialty?”