Work in progress: Poster on blogging

I need your help with a project!

My poster on blogging was accepted to the annual UCSF Academy of Medical Educator’s Education Day. Feelings of joy and validation were quickly followed by terror and inadequacy.

In order to get my poster costs reimbursed, I have to get feedback from my co-authors and incorporate it into the poster. As you can see from the poster title on top, I have no co-authors! Since you are all my virtual co-authors, I thought I’d solicit your comments and suggestions.


2017-03-05T14:18:34-08:00

Article review: ED crowding and education


“The effect of ED crowding on education”

My heart almost stopped when I read this article title in the Am J Emerg Med. This was the premise of my recently completed study, which used a prospective, time-motion methodology. I’m in the process of writing the manuscript. Did I get scooped by my friends at U Penn?

Whew. Fortunately, no. Different methodology.

This study was a cross-sectional study looking at learner assessment of education, using a validated tool called the ER (Emergency Rotation) Score. The results are interesting.

The problem

We know that ED crowding negatively impacts clinical care. How does it impact our teaching of medical students and residents? The ED is traditionally known as a great place for learning how to resuscitate high-acuity patients, to manage and risk-stratify undifferentiated cases, and to perform procedures. Experientially, I feel like I teach less when it gets extremely crowded.

Methodology

Over a 5-week period, 43 residents and 3 medical students prospectively assessed 34 attendings using a simple ER Score tool. There were 352 separate encounters. This validated tool assessed the attending based on 4 domains (teaching, clinical care, approachability, helpfulness) with each domain assessed on a 5-point scale. The scores were correlated with crowding measures (waiting room number, occupancy rate, number of admitted patients, and patient-hours).

ER Score tool
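To make the analysis concrete, here is a toy sketch (made-up data and my own code, not the study’s) of how an ER Score total could be formed from its 4 domain ratings and correlated with one crowding measure:

```python
# Toy sketch: ER Score total from four 5-point domain ratings,
# correlated with a crowding measure (here, occupancy rate).

def er_score(teaching, clinical, approachability, helpfulness):
    """Sum of four 5-point domain ratings; range 4-20."""
    ratings = (teaching, clinical, approachability, helpfulness)
    assert all(1 <= r <= 5 for r in ratings)
    return sum(ratings)

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical encounters: (domain ratings, ED occupancy rate in %)
encounters = [
    ((5, 4, 4, 3), 85),
    ((4, 4, 5, 4), 120),
    ((3, 4, 3, 3), 150),
    ((5, 5, 4, 5), 95),
    ((4, 3, 4, 4), 140),
]
scores = [er_score(*ratings) for ratings, _ in encounters]
occupancy = [occ for _, occ in encounters]
print(round(pearson_r(scores, occupancy), 2))
```

The study found no such association; this sketch only shows the mechanics of the correlation the authors computed.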

What was their enrollment scheme?

Upon arrival, the research assistant selected the patient with the most recent admission order where the learner-attending pair was still present in the ED. The learner was asked to fill out the ER Score tool. For each admitted patient case, the research assistant also enrolled a non-admitted patient with a similar triage intake time. The learner for this non-admitted case was also asked to fill out the ER Score tool. The study group intentionally structured this methodology to oversample admitted patients, which they assumed impacted education more than non-admitted patients.
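The paired-enrollment scheme can be sketched as follows; the data model and field names are my own illustration, not the study’s software:

```python
# Illustrative sketch of the paired-enrollment scheme: for each sampled
# admitted patient, enroll the non-admitted patient with the closest
# triage time. (Assumed data model; not from the study.)
from dataclasses import dataclass

@dataclass
class Patient:
    triage_hour: float   # hours since midnight
    admitted: bool

def match_control(index_pt, candidates):
    """Pick the non-admitted patient with the closest triage time."""
    controls = [p for p in candidates if not p.admitted]
    return min(controls, key=lambda p: abs(p.triage_hour - index_pt.triage_hour))

ed = [Patient(9.0, True), Patient(9.5, False),
      Patient(11.0, False), Patient(9.2, False)]
admitted = next(p for p in ed if p.admitted)
control = match_control(admitted, ed)
print(control.triage_hour)  # → 9.2, the closest non-admitted triage time
```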

Results

The median score was 16 of 20. ED crowding levels were NOT associated with ER scores or their individual domains.

How fascinating that learners still felt that the quality of teaching and learning in the ED was maintained despite the ED being overwhelmed beyond capacity.

The next step is to follow Kirkpatrick’s model for conducting educational research. In this model, satisfaction/reaction-based studies occupy the first (lowest) tier. Such studies are inherently limited by response bias, recall bias, and the halo effect. A follow-up study should use more objective measures to assess the impact of crowding on education. Hmm, I had better get going on my manuscript.

Kirkpatrick’s 4-tiered model to evaluate training and education

Reference
Pines JM, Prabhu A, McCusker CM, Hollander JE. The effect of ED crowding on education. Am J Emerg Med. 2010;28:217–220.

 

2016-11-11T19:01:31-08:00

Article review: SAEM Tests

This is a great look back at how the SAEM Tests were developed and are now used by EM clerkships across the country. Because EM does not have a National Board of Medical Examiners shelf exam, the authors made a tremendous effort to create a set of validated questions for clerkship directors to use.

Specifically, point-biserial correlation coefficients (range -1 to +1) were calculated for each question. A high coefficient indicates a strong correlation between performance on an individual test question and performance on the overall test. After 25% of the test questions were rewritten because of poor correlation coefficients, all current test questions have a point-biserial correlation coefficient >0.2.
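The discrimination statistic described above, the point-biserial correlation between a single item and the overall test score, can be computed directly. A minimal sketch with made-up item responses and total scores (not SAEM data):

```python
# Point-biserial correlation between one item (1 = correct, 0 = incorrect)
# and the overall test score, using the standard formula
# r_pb = (M1 - M0) / SD * sqrt(p * q). Data below are invented.

def point_biserial(item, totals):
    n = len(item)
    mean_all = sum(totals) / n
    sd = (sum((t - mean_all) ** 2 for t in totals) / n) ** 0.5
    correct = [t for i, t in zip(item, totals) if i == 1]
    wrong = [t for i, t in zip(item, totals) if i == 0]
    p = len(correct) / n          # proportion answering correctly
    q = 1 - p
    m1 = sum(correct) / len(correct)   # mean total score, item correct
    m0 = sum(wrong) / len(wrong)       # mean total score, item incorrect
    return (m1 - m0) / sd * (p * q) ** 0.5

item = [1, 1, 0, 1, 0, 1, 0, 1]
totals = [88, 92, 61, 75, 70, 85, 58, 90]
print(round(point_biserial(item, totals), 2))  # → 0.89, well above 0.2
```

An item with a coefficient near or below 0.2 fails to discriminate between strong and weak examinees, which is why such questions were rewritten.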

2016-11-11T18:43:23-08:00

Hot off the press: Free EM Practice articles

Thanks to EB Medicine, “Emergency Medicine Practice” articles from 2007 and earlier are now free! This series is a well-written, practical, evidence-based review resource for EM physicians. It’s a great place to start reading about bread-and-butter EM content, especially for medical students and junior residents. There hasn’t been much change on many of these topics in the past 3 years.


2016-11-11T19:01:34-08:00

Article review: Glidescope success in difficult airway simulation

Since our department got a Glidescope, it has rapidly become a go-to adjunct for difficult airways when intubating patients in the ED. Note: I have no financial ties to Glidescope.

This education article in Sim Healthcare is a head-to-head comparison of video laryngoscopy (VL) versus direct laryngoscopy (DL) in a difficult airway simulation model. In this prospective, convenience sample of EM attendings and residents, all of whom were novice VL operators, subjects were asked to intubate 3 types of mannequin scenarios using a Macintosh curved laryngoscope for DL and a Glidescope for VL.


2016-11-11T19:01:37-08:00

Article review: Feedback in the Emergency Department

Feedback is important in teaching and learning.

I am constantly surprised by medical student and resident comments that they rarely receive feedback. In contrast, seemingly on every shift, I hear faculty giving little nuggets of feedback – during the oral presentation, during the resuscitation, after a difficult interaction, etc. There must be some disconnect.

This multi-institutional, survey-based, observational study at 17 EM residency programs asked attending physicians and residents about feedback in the ED. The primary outcome measure was overall satisfaction with feedback.

Results

The response rate was 71% for attendings (373/525) and 60% for residents (356/596). Side note: Survey studies with low response rates are generally considered inconclusive.

There was a statistically significant difference between the feedback satisfaction scores (on a scale of 1-10, with 10 being the highest satisfaction).

  • Attending physicians: 5.97
  • Resident physicians: 5.29

Furthermore, when rating the quality of different aspects of feedback delivery, attendings were more satisfied than residents across the board, with statistically significant differences:

  • Quality of positive feedback (50% attendings, 36% residents)
  • Quality of constructive feedback (29% attendings, 22% residents)
  • Quality of feedback on procedural skills (48% attendings, 34% residents)
  • Quality of feedback on documentation (36% attendings, 28% residents)
  • Quality of feedback on ED flow management (29% attendings, 21% residents)
  • Quality of feedback on evidence-based decision making (28% attendings, 18% residents)
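For illustration, here is a two-proportion z-test on the first row above, with counts reconstructed from the reported percentages and response numbers. This is my own sketch, not the authors’ analysis:

```python
# Two-proportion z-test: positive-feedback satisfaction, attendings
# (~50% of 373) vs residents (~36% of 356). Counts are reconstructed
# from the reported percentages, so this is approximate.
import math

def two_prop_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(round(0.50 * 373), 373, round(0.36 * 356), 356)
print(round(z, 1))  # z ≈ 3.8; |z| > 1.96 implies p < 0.05
```

The reconstructed z is comfortably past the conventional significance threshold, consistent with the reported result.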

What is more interesting to me is the discrepancy between what the attendings and residents perceived in frequency of feedback. Specifically, 42% of attendings stated that feedback delivery was being done on every shift. Contrast this to only 7% of residents who felt the same. Why the disconnect? Is it purely misperception?

In re-reading this article, I wonder how this question was phrased though. Was it indeed perception or fact?

Let’s say there are usually 5 residents per attending shift, and the attending gives feedback to at least 1 person every shift. When surveyed, the attending would answer, “Yes, I give daily feedback.” In contrast, because there are multiple learners, an individual resident may not have received daily feedback. By the law of averages, each resident would have received feedback only once every 5 shifts.

The data showing that 42% of attendings and 7% of residents were involved in feedback delivery every shift may actually be true (rather than pure perception). This illustrates the trickiness of designing and writing surveys.
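A quick simulation of this scenario (assumed numbers: 5 residents per shift, one feedback recipient chosen per shift) bears out the arithmetic:

```python
# Back-of-envelope simulation of the scenario above: the attending
# gives feedback to exactly 1 of 5 residents per shift, chosen at random.
import random

random.seed(0)
SHIFTS, RESIDENTS = 10_000, 5
received = [0] * RESIDENTS
for _ in range(SHIFTS):
    # From the attending's perspective, feedback happens every shift.
    received[random.randrange(RESIDENTS)] += 1

# From each resident's perspective, feedback arrives on only ~1 in 5 shifts.
rates = [r / SHIFTS for r in received]
print([round(r, 2) for r in rates])
```

Both survey answers can be simultaneously true: the attending really does give feedback every shift, while each resident really does go most shifts without any.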

Bottom Line

We should be working to improve positive and constructive feedback delivery in the Emergency Department, despite the various obstacles.

Reference
Yarris L, et al. Attending and resident satisfaction with feedback in the emergency department. Acad Emerg Med. 2009; 16:S76–S8.

Also see previous post on Failing at Feedback in Medical Education.

 

2016-11-25T15:43:29-08:00