Our final case of this season, The Case of the Competency Conundrum, outlined a scenario of residency competency committee members who are divided in their approach to a superstar R4 resident, Josh, who has already completed the requirements of his training program. They struggle with competing opinions surrounding competency-based medical education (CBME) early advancement principles and the importance of continued exposure/service.

This month, the MEdIC team (Drs. Tamara McColl, Teresa Chan, Sarah Luckett-Gatopoulos, Eve Purdy, John Eicken, Alkarim Velji, and Brent Thoma) hosted an online discussion around this case with insights from the ALiEM community. We are proud to present to you the curated community commentary and our expert opinions. Thank you to all participants for contributing to the very rich discussions surrounding this case!

This follow-up post includes:

  • Responses from our solicited experts:
    • Dr. Brent Thoma (@Brent_Thoma) is an emergency physician and trauma team leader in Saskatoon, Saskatchewan. He is the interim Director of Simulation and Program Director of the Royal College emergency medicine residency program. Brent investigates technological innovations that enhance learning with the goal of helping good people to provide exceptional healthcare. His work is online at CanadiEM.org, Debrief2Learn.org, and METRIQstudy.org.
    • Dr. Teresa Chan (@TChanMD) is an emergency physician, clinician educator, and assistant professor at McMaster University, where she is also the Competence Committee chair for the Royal College Emergency Medicine training program. She is the assistant program director of the Clinician Educator Area of Focused Competence program. In her spare time she volunteers with a number of influential FOAM outlets including ALiEM.com, CanadiEM.org, EMSimcases.com, FemInEM.org, the International Clinician Educator blog, and Emergency Medicine Cases.
  • A summary of insights from the ALiEM community derived from the Twitter and blog discussions
  • Freely downloadable PDF versions of the case and expert responses for use in continuing medical education activities

Expert Response 1: The Stellar Resident: A Good Problem to Have (Dr. Brent Thoma MD MA MSc FRCPC)

Historically, medical trainees were promoted after a pre-determined length of residency and satisfactory performance on a small number of high-stakes summative assessments. It has since been widely acknowledged that this traditional, ‘one size fits all’ style of training program is not learner-centred and does not ensure that trainees have achieved competency in their specialties (1).

The goal of competency-based medical education (CBME) is to shift away from the time-based paradigm and ensure that every medical trainee is prepared for independent practice (2). CBME promises promotion based on a large number of low-stakes formative assessments (3). Promotion decisions are made by a Clinical Competence Committee (CCC) comprising a diverse group of stakeholders who use all available assessment data to ensure that robust decisions are made (4,5). The application of this model in the case of a struggling resident is relatively straightforward. Little has been published to guide the work of a CCC faced with a stellar resident.

The problem of early promotion

Our historical assessment model makes superstar residents like Josh easy to have around. Despite their advanced skillset, they work through residency at the same pace as everyone else and can be counted on to fill shifts and call schedules, present at their quota of teaching rounds, and excel in extracurricular activities.

Early promotion of a resident presents new challenges. As alluded to by Kevin and Karen, programs depend on resident service to meet clinical care and teaching obligations. Quite aside from scheduling concerns, early promotion and graduation of residents also runs the risk of fostering a culture that rewards laser-like focus on collecting the required evaluations to the detriment of scholarly activities, mentorship, and the development of specialized interests. Promotion after achieving the minimal standards could lead to a cohort of graduates who are competent but do not excel.

How should the CCC approach the resident who exceeds expectations?

The responses of the CCC members in this case suggest that this committee has not yet developed guidelines to address residents who are excelling, an issue that should be addressed urgently. The roll-out of a competency-based assessment system without a plan to address the developmental needs of residents who are ahead of the curve can appear disingenuous and may lead to resentment from the star resident and the residency group. Ideally, such guidelines would be developed collaboratively between the resident and institutional leadership to ensure that they balance resident and institutional needs.

Hauer and colleagues have described a developmental model of deliberation that could assist the CCC in deciding how to proceed with Josh’s promotion (4). As opposed to a problem identification model aimed at identifying struggling residents, the developmental model views the educational program as a stairway to mastery. For a resident like Josh with no identifiable deficits, the focus would change from meeting the traditional milestones of residency to the development of mastery in an advanced area consistent with his career goals. This shift would recognize his competence and provide him with something to work toward.

Ultimately, the shift of CCCs to a developmental mindset may allow CBME to deliver upon the promise of all residents achieving competence while fostering excellence in those capable of achieving it.


  1. Frank JR, Snell LS, ten Cate O, Holmboe ES, Carraccio C, Swing SR, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638–45.
  2. Iobst WF, Sherbino J, ten Cate O, Richardson DL, Dath D, Swing SR, et al. Competency-based medical education in postgraduate medical education. Med Teach. 2010;32(8):651–6.
  3. Carraccio C, Englander R, Gilhooly J, Mink R, Hofkosh D, Barone MA, et al. Building a Framework of Entrustable Professional Activities, Supported by Competencies and Milestones, to Bridge the Educational Continuum. Acad Med. 2017;92(3):324–30.
  4. Hauer KE, Chesluk B, Iobst W, Holmboe E, Baron RB, Boscardin CK, et al. Reviewing Residents’ Competence: A Qualitative Study of the Role of Clinical Competency Committees in Performance Assessment. Acad Med. 2015;90(8):1084–92.
  5. Hauer KE, ten Cate O, Holmboe E, Boscardin C, Iobst W, Chesluk B, et al. Ensuring resident competence: a narrative review of the literature on group decision-making to inform the work of Clinical Competency Committees. J Grad Med Educ. 2016;8(2):156–64.

Expert Response 2: The Complexities of Clinical Competency Committees (Dr. Teresa Chan, MD FRCPC, MHPE)

Clinical Competency Committees (CCCs) were first widely adopted by the ACGME with the implementation of the Next Accreditation System (NAS) in 2013. Despite the years that have passed since then, there remains wide heterogeneity in the implementation and operation of these committees (1). Canada is now undertaking a similar change with its Competence by Design model.

What are these committees supposed to do?

The intent behind CCCs is to facilitate the decision-making processes around resident competence. The Competency Based Medical Education (CBME) model relies on programmatic data collection throughout residency that results in a mosaic representation of individual resident performance (2–4). The CCC’s job is to analyze this data, spot trends, and make decisions about promotion (1). Contrast this with historical decisions about resident competency, which were time-based (i.e. 3–5 years), rubber-stamped by a single person (usually the program director), and predicated on a decision checkpoint (i.e. board exam scores) (5). Ideally, CCC members function as “meta-raters” of the information gathered by the residency program; they aggregate assessments rendered by others (e.g. in-training exam scores, workplace assessments) and analyze this data to make decisions.

What is the theory behind these committees?

In a recent review of the literature, Chahine and colleagues present a theoretical framework that describes decision-making in small groups (6). The authors describe how small groups process trainee data according to orientation, and how this process is moderated by guidelines, time pressures, and leadership (6). The authors suggest that a small group might have a mix of the following three orientations:

  1. Schematic, whereby the CCC creates algorithms that provide an underlying structure for its process of integrating new information
  2. Constructivist, whereby the group creates a shared mental model based on its members’ ideas
  3. Social influence, whereby social pressures inform decision-making

Any given CCC probably draws on all three of these orientations during decision-making. In our case, for instance, you can see how the social influences of “service” or “excellence” enter into the discussion around Josh’s promotion. The CCC in our case may lack an appropriate schematic orientation for the promotion of residents. The robust conversation about this superstar resident, however, shows that this group has a strong constructivist orientation. Their willingness to listen to one another’s perspectives will likely help them arrive at a good decision.

How do these committees actually work?

Pragmatically, this case gives us pause to think about how we educators can help shape our systems by changing the focus of a policy or process. A recent study of CCCs in California showed that committees had different paradigms from which they examined data (1). These paradigms are different from the previously mentioned schemas in that they are not algorithms, but attitudinal leanings that inform the CCC role. Some committees use a problem identification model (i.e. spotting red flags and using informal information shared within the program) while others use a developmental model (i.e. analyzing larger swaths of real-time data to compare to benchmarks) (1).

Each of the paradigms described above carries with it advantages and disadvantages. A problem-identification framework, for instance, restricts how a committee might act when faced with a rapidly progressing resident, such as Josh. If your system is built on identifying the resident-at-risk and acting as a “floor” for ensuring basic competence, your CCC will struggle with the excelling resident. On the other hand, if your system uses a developmental framework in hopes of identifying the individual learning needs of each resident, then it is less jarring for the members of the committee to conceptualize their role in nurturing the resident who exceeds expectations.

Kevin, one of the senior faculty members in our case, reminds us of the advantages of a developmental framework when he says, “I don’t think this is a discussion of purely pumping out competent residents. We want to train the best!” Since Josh has achieved high marks on all established benchmarks, shifting the committee’s focus toward what new developmental steps this excelling resident might be challenged to accomplish could be useful in this situation. If the committee wishes, indeed, to “train the best”, they might ask: “How can we support Josh to ensure he continues to excel? Are we adequately challenging him? Could we ask Josh to take on additional duties, such as mentorship or building his research portfolio?” Working with Josh to understand his goals would facilitate this process.

How could these committees work?

As one of the inaugural CCC chairs at McMaster University, I have seen that reality often competes with theory and best practices. We have had several residents request that we review their file for early advancement. We have always found a way to accommodate these requests. Sometimes, there may be a discrepancy between a resident’s self-perception and his or her recorded performance. In these instances, the CCC has the potential to provide a helpful bridge between self- and other-assessments and may facilitate self-reflection and improvement.

A recent paper by Hauer and colleagues looked to the literature from education and occupational fields to provide recommendations that we might apply in renovating or starting up a CCC (7). A careful reading of this review suggests that the group processes of the CCC in our present case would benefit from critical examination. Kevin, the CCC chair, may need to be wary of how his dominant style undermines the voices of those more junior to him. As we can read, Allison is uneasy at the end of this case and doesn’t feel she has had the opportunity to be heard. Due to the power differential, it may be useful for Kevin to strategize with Allison on how best to share her knowledge of how CBME is supposed to work, something that she has clearly taken the time to explore in her clinician educator role. For instance, he might want to provide her with some floor time to present a short talk about the theory behind CBME to spur a better conversation between the committee members. He could also charge her with a small task of fact-gathering from other institutions to determine how others have tackled logistical concerns around service provision. Revisiting the CCC’s information sharing processes might be useful; it has been shown time and again in the business literature that groups that share information readily perform best and make the best decisions (8,9).

Kevin may also need to take a step back and re-examine the terms of reference for his CCC. Have they built in schemas to handle rapidly-advancing residents? Is the human resource problem that might be created by Josh’s early graduation a moderator that may prevent implementation of an early advancement plan? Has Kevin himself ensured that he is utilizing best practices in running his committee so that he can make sure they have good information-sharing processes?


In the era of CBME, the competency committee is a phenomenon that is likely here to stay. Reviewing the literature both inside and outside of medical education can inform how we run these committees and how we talent-manage our residents.


  1. Hauer KE, Chesluk B, Iobst W, Holmboe E, Baron RB, Boscardin CK, et al. Reviewing Residents’ Competence: A Qualitative Study of the Role of Clinical Competency Committees in Performance Assessment. Acad Med. 2015;90(8):1084–92.
  2. van der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve tips for programmatic assessment. Med Teach. 2015;37(7):641–6.
  3. Schuwirth LWT, van der Vleuten CPM. Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011;33(6):478–85.
  4. van der Vleuten CPM, Schuwirth LWT, Driessen EW, Dijkstra J, Tigelaar D, Baartman LKJ, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34(3):205–14.
  5. Snell LS, Frank JR. Competencies, the tea bag model, and the end of time. Med Teach. 2010;32(8):629–30.
  6. Chahine S, Cristancho S, Padgett J, Lingard L. How do small groups make decisions? Perspect Med Educ [Internet]. 2017;192–8.
  7. Hauer KE, ten Cate O, Holmboe E, Boscardin C, Iobst W, Chesluk B, et al. Ensuring resident competence: a narrative review of the literature on group decision-making to inform the work of Clinical Competency Committees. J Grad Med Educ. 2016;8(2):156–64.
  8. Klocke U. How to improve decision making in small groups. Small Group Res. 2007;38(3):437–68.
  9. van Ginkel W, Tindale RS, van Knippenberg D. Team reflexivity, development of shared task representations, and the use of distributed information in group decision making. Group Dyn Theory Res Pract. 2009;13(4):265–80.


Curated from the Community (Dr. Alkarim Velji, BSc BEd MD FRCPC candidate)

For our last case of the season we explored questions and opinions around advancement principles in competency-based medical education (CBME). Our case revolved around Allison’s experiences as a member of her local competency committee as the group debated whether an exceptional fourth year resident, Josh, should advance earlier than his peers. The committee agreed that Josh was functioning well above the level of his colleagues, had met the requirements of the program and, for all intents and purposes, was already practicing like an attending physician. Allison had suggested that in light of Josh’s achievements he be granted advancement (completion of his residency). The views of the other members of the competency committee, however, didn’t align with hers as they debated several competing agendas. Firstly, despite being a theoretical tenet of the CBME model, early advancement was currently unprecedented within their program. Furthermore, members of the committee argued that despite having achieved his milestones early, Josh would benefit from the further exposure and clinical experience that comes with the final year of residency. The final argument the committee raised centered around the service component of residency. Who would cover his shifts if he were to complete his training earlier than expected? As the committee was unable to reach a unanimous decision, the discussion was tabled until their next meeting in 2 months’ time.

The case generated a great discussion on the blog and on Twitter. General themes revolved around the principles of CBME, the challenges of early advancement, and the benefits of further clinical exposure.

Dr. Alexandra Thompson tweeted that in a true competency based system, Josh has been found to be competent and should advance. Otherwise, CBME functions entirely as a ‘stick’ rather than both a ‘stick’ and a ‘carrot’. Several other commentators argued that early advancement could become a slippery slope. Drs. Tanya Selek and Amy Pearson tweeted that without clear, well-defined criteria, early advancement could potentially lead to allegations of favoritism or bias. Furthermore, Dr. Pearson added that a model of early advancement could also increase the stress on residents to perform at a higher level, as progressing “naturally” could be perceived as mediocre or even performing poorly. This could then lead to greater levels of resident burnout.

Dr. Loice Swisher argued that early advancement could theoretically lead to even larger issues. She commented that early advancement could leave a fracture in the department, as shifts would be left uncovered, and could also impact the dynamic of the residency cohort as one peer advances while the others are left behind. Additionally, institutional issues could arise when residents graduate early. As Dr. Damian Roland pointed out, recruitment and education are inherently linked. Institutional planning becomes more difficult when a program cannot predict the number of full-time positions it needs to hire and the number of new residents it needs to match if the graduation time frame is inconsistent and unpredictable. Dr. Victoria Brazil added that advancement in Australia occurs at a variety of different times throughout the year, but managing the advancement of residents who can complete their training at any point in the year is an onerous and challenging task. Therefore, early advancement would need well-defined criteria as well as buy-in from institutional stakeholders.

Our readers generally agreed that a fine balance between CBME advancement principles and the service component of residency needs to be established. Drs. Swisher, Brazil, and Woods stressed that the experience and clinical exposure gained with an extra year of training have great benefit even if the learner has achieved all of their EPAs. They added that graduating residents should not just be “good enough” to practice and that “even great residents are guaranteed to make mistakes as staff”, thus furthering the argument for added training time and exposure. Dr. Stuart Hastings shared his experiences transitioning to practice. He spent the first year after graduation navigating the various aspects of staff life for which residency had not fully prepared him. Particularly in the current culture of medicine, where new attending physicians have little support and feedback, having an extra year of clinical exposure with the added lifeline of a staff supervisor on shift will only help residents improve further. Denying Josh this year of added feedback and support could be a disservice to him.

Other readers argued that once a resident has met the expectations of their training, they should be allowed to progress ahead of schedule. “To hold them back would be like a prison sentence”, argues Dr. Minh Le Cong. Dr. Simon Fleming mentioned a model that would help foster excellence instead of stifling it. He suggested that Josh could take on an ‘acting’ or ‘pseudo’ attending role. This role could theoretically have a built-in level of supervision and review, as suggested by Dr. Minh Le Cong. People learn at different rates, and the point of CBME is to move us away from an assembly-line model of resident advancement. What’s more, as Dr. BJ Piper stated, despite his mastery of the fundamental skills and knowledge, an extra year of time and experience will help Josh become a more mature and well-rounded physician. Dr. Swisher added that “[she has] come to see the competency committees to be a floor which one should not have a resident go below rather than a ceiling that they have broken through.”


A special thank you to all our readers who participated on our discussion thread:

  • Victoria Brazil
  • Teresa Chan
  • Damian Roland
  • Loice Swisher
  • Rob Woods

And to those who contributed to the discussion via Twitter:


Case and Responses for Download

Click here to download the case and responses as a PDF (321 kb).

Tamara McColl, MD FRCPC

Associate Editor, ALiEM MEdIC Series
Emergency Physician, St. Boniface Hospital, WRHA
Academic Lead, Educational Scholarship
Department of Emergency Medicine
University of Manitoba