Discussing Annals EM article: Social Media and Physician Learning

I was delighted to see the News and Perspectives piece in this month’s Annals of Emergency Medicine about “Social Media and Physician Learning” (free PDF). I had totally forgotten that Jan Greene, the author, had called to talk with me several months ago. In the piece, she discusses many of the issues with which I struggle:

  • Is peer review good or bad?
  • What is the role of blog and podcast sites in the future of medical education?
  • Given how easily anyone can be “published” on a blog, how can one judge the trustworthiness of open educational resources such as FOAM?
  • Can or should social media education practices be held up to the rigorous scientific standards of original research?

Here are some noteworthy quotes:

Do online collaboration and FOAM-like teachings make a difference?

There aren’t a lot of data to support the idea that physicians learning through online collaboration is any better than the traditional ways. A recent review of the topic found 14 useable studies, only 1 of which was randomized, and concluded only that the technique merited further investigation.

Should online teaching methods of vetting medical ideas be held to rigorous scientific methods?

And although some of that work is ongoing, particularly in such venues as the Journal of Medical Internet Research, they [FOAM boosters] acknowledge the field is so new there’s not a lot to go on so far. “That’s the golden ring of social media,” said Bryan Vartabedian, MD, a longtime medical blogger (33 Charts blog) who develops programs in digital literacy at Baylor College of Medicine. “It’s difficult to achieve, and reliable outcomes have yet to emerge.”

Should critical appraisal of the literature be pre- or post-publication? Where should this appraisal take place?

Dr. Vartabedian supports the idea of the democratization of scientific review online, particularly compared with the traditional debate in journal letters to the editor, which are limited. “I don’t need the New England Journal of Medicine or British Medical Journal as a platform for being critical of any study,” Dr. Vartabedian says. “There are other venues for dialogue besides journals. This idea that we need to be having these conversations within the confines of some traditional construct of a journal. We’re on top of the most remarkable shift in modern medical history, and every physician has the capacity to offer their views.”

… Dr. Mesko agrees that traditional peer review doesn’t have to be challenged by the rise of social media. “These processes should be totally separated to make sure content that is academic must keep its academic nature,” he said by e-mail. “But when we need information and don’t know who might have the answer for our questions, curated social media channels can be unbelievably useful.”

Learning in the social media arena is going to happen whether the education community wants it to or not. How do we deal with this moving forward?

Older professors may find themselves challenged by a student wielding a Twitter post relevant to a clinical choice being made, and they should welcome that discussion, argues Lin. “We have senior faculty who are amazing clinicians and read journals, but then we have this whole group of residents who are immersed in social media, and they do bring an amazing amount of interesting content and different ways of thinking about the literature.”


Aren’t we talking about two different things: Research and Knowledge Translation?

Personally, I believe that we may be talking about two different issues as Dr. Mesko alludes to above. There are original research and scientific discoveries, which are published in peer-reviewed journals. And then there is “knowledge translation” — the interpretation and implementation from journal publication to bedside practice. They have a complementary, yin-yang relationship.

Blogs and podcasts are playing an increasingly important role in the latter, accelerating this knowledge translation process. There should be a scholarly approach to how we assess the quality of educational blogs and podcasts, but it should differ from the standards and approaches used for original research. So I find it confusing when we lump original research together with FOAM-like educational endeavors. One is trying to find scientific truths; the other is trying to teach those truths to frontline providers to improve patient care. Maybe we should be comparing blogs and podcasts to textbooks rather than journal publications?

That being said, I think the FOAM and social media community has a lot to gain by learning what has and hasn’t worked in the traditional publication world (and maybe vice versa):

  • Is peer review perfect? No.
  • Can it be valuable if used appropriately? Absolutely.
  • Is there value in crowdsourced peer review in the form of blog comments? Sure, if there is a critical mass of high-quality and thoughtful comments.


Expert Peer Review Experiment

This is partly why we on the ALiEM blog are experimenting with an expert peer review system for selected blog posts. We believe we are the first to formally pioneer this process for educational blogs to help vet the accuracy and quality of our content. Because we typically do not receive many comments on the blog, we don’t consistently have crowdsourced feedback. So we instead target experts, such as authors of cited publications or nationally recognized speakers on the topic. Many have been gracious in participating. Their comments are included with the original blog post on a pre- or post-publication basis.

See examples of expert peer review comments on clinical content.

Thoughts and comments?

Let’s keep up the momentum on these hotly debated topics. Would love your thoughts.

Expert Peer Review

A Brief Historical Perspective

Although we all probably think of peer review as the traditional process of a couple of experts poring over a manuscript for a journal, it’s worth pointing out that we should look a little further than our own field of medicine (which, by the way, is rather poor at learning relevant things that other disciplines could teach us). In the fields of physics and mathematics, peer review has for decades followed a model of massive pre-print review; that is, the new paper is shared in full with much of the scientific community (not just 2 or 3 reviewers). They read it, debate it back and forth, and the author participates in that debate and revises the paper, until eventually agreement is high and it is published in final form. There is much less post-publication review (because it has already been done), and given the relatively small size of their audience, there doesn’t need to be, since many readers have already seen the paper or dug directly into the data (authors typically share databases as well, so it’s harder to obfuscate).

Regarding Peer Review

The original 18th-century concept of scientific publishing was that after some initial prepublication discussion at regular meetings of a society of experts (most of whom knew each other well), the piece would be published. (For perspective, this was back before the word “scientist” had been invented.) Contemporary experts have repeatedly made the point that in our present system, “real” peer review happens after publication, not before, and we should expect that. There may not be as much correspondence as in physics or math, but every subsequent paper on the topic is in a fashion a review and critique, and hopefully an improvement on the prior paper. Of course, in that ideal system there would also always be replication of the original study, which almost never happens.

Recent research shows that even in the world’s top journals, basic science research by the world’s elite researchers can be replicated less than 50% of the time. Think about that sobering number for a minute (and it represents the best-case scenario; the figure for run-of-the-mill journals is probably dramatically worse). Nobody in academia or at the NIH revealed that crucial omission; the only reason we know this depressing fact is that venture capitalists got tired of wasting their money investing in “breakthroughs” reported in journals like Nature and Science, which turned out to be dead ends when pursued for product development, so they now attempt to replicate them before they invest. And the majority fail the test.

The Ideal

So I would say the ideal review process would start with public presentation and critique (as at scientific meetings, though hopefully in modified formats), plus the tedious pre-print review of our current journal model, but ideally by many more than 2 or 3 reviewers. (In other fields, reliable and reproducible judgement requires up to a dozen reviewers, which is more like the number on research grant review panels.) That’s not practical in terms of resources. But then there should additionally be vigorous and ample post-publication debate and critique, taking advantage of the other smart and thoughtful people in the field, which would be summed up and incorporated into the paper in some way. (It used to be a tradition in many surgical journals to publish, at the end of a manuscript, several pages of commentary/critique by a few surgeons expert in that area.)

Quality Content Takes Time and Ongoing Discussion

The more I think about it, I don’t see social media as providing anything truly new in terms of approach or content. It is simply a much-improved technological mechanism for providing a prompt public forum for the thoughts of many who did not get to participate in the process before, and who previously could not overcome the barriers of slow correspondence and publication. Increasing their participation would be a good thing. As Dr. Lin points out, smart clinicians and young readers have always been out there, but now they are heard more often. A number of their ideas will be good, some will not, and discussion should help sort that out. The downside is that you can’t arrive at really well-thought-out and trustworthy conclusions in a matter of minutes, even if you were at the top of your class, so good conclusions about new approaches will always take time.

This new medium also makes it much easier for a charismatic and articulate person to disseminate their mistaken views as quickly and widely as their good ones. That might not matter too much except that social media is doing a different kind of review. Traditional peer review, as tedious and laborious as it is, spends a lot of time prying out the details of methods, design, and so on, looking for the flaws that are so frequent (in fact, fatal flaws affect the majority of submissions). At Annals, virtually no articles, even very convincing ones, are accepted without additional queries about such “sausage factory” questions, which then lead to additional detail and revisions in the final published paper. These issues are often not immediately obvious even with the full traditional manuscript in front of you, and they cannot be assessed at all if you are only going from an abstract or a media report of a meeting.

Michael Callaham, MD
Founding Chair and Professor of Emergency Medicine, UC San Francisco; Editor in Chief of Annals of Emergency Medicine

Michelle Lin, MD
ALiEM Editor-in-Chief
Academy Endowed Chair of EM Education
Professor of Clinical Emergency Medicine
University of California, San Francisco

  • Stella Yiu

    Hi Michelle,

    I agree that separating research and knowledge translation is important. It is harder to distill vigorous critical appraisal of original research (the process Dr. Callaham alluded to) into tweets or even blogs. That process takes time.

    For knowledge translation, Social Media is the perfect medium. It ranges from ‘Hey look what I found!’ to being in the audience of a great expert at a virtual conference. I am alerted to articles daily in this personal learning space. I still look up the root literature to decide on what I would do – but at least I know they now exist.

    The peer review process would work for Social Media when there is a critical mass of comments. My struggle is how to drive that engagement.

    • Michelle

      YES! The key to success in knowledge translation using social media is having appropriately skeptical life-long learners who do what you do — you still need to put in the work to look up the root literature and decide for yourself. Learning isn’t supposed to be easy.

      As for engagement, I think this is partly an issue of a generational divide between digital natives and digital immigrants. Another issue is that the rules and definitions of social media professionalism for physicians are constantly evolving (hence the many institution-specific SoMe guidelines). Engagement can only occur when one feels safe that one isn’t stepping outside the bounds of what external standards consider acceptable.

      That’s why we need better role modeling of best practices on HOW to engage and establish your digital footprint by faculty members such as yourself. For me, I live by the Mayo 12-word social media policy:

      Don’t Lie, Don’t Pry
      Don’t Cheat, Can’t Delete
      Don’t Steal, Don’t Reveal

      I’d add:
      Be Nice, Give Credit (ok, it doesn’t rhyme but I’m sticking to it.)

  • Timothy C Peck, MD

    The core of this issue I see is a matter of who’s in the reviewing group and who’s out.

    Peer-reviewed journal articles are subject to the opinions of a number of niche groups before they are published – those seeing the data presented at academic conferences and article reviewers. Open Access resources are NOT subjected to the opinions of ‘everyone.’ I’m pretty sure my mom isn’t going to read this post. I’m also pretty sure that there won’t be any nephrologists reading this post either. This is an EM blog (with some crossover to medical educators in general), and so there are loose borders around the group who get to do the reviewing (commenting).

    I don’t know if we should kick med students out of the reviewing process; I don’t know if we should kick residents out of the reviewing process. I do know, however, that we need to experiment with moving the borders around to find out who gets to be the ‘peers’ in ‘peer-reviewed’ open access material in order to produce the best and most reliable content. The reverse way of thinking about this is, “Who gets to be in the ‘crowd’ in the crowdsourcing of peer-reviewed print journal articles?” The two processes aren’t too far off from one another – they just define the borders of who is an expert differently.

    It seemed to me an impossible task to control these borders. But a few years ago, I helped organize a group of people to tackle this problem and formed iClickEM.com. We are not quite ready to release our platform to the public, but I will let you know when we are. I hope to achieve a lot of good for the EM community including using the power of crowdsourcing.

    • Timothy C Peck, MD

      I would also point you to Brent Thoma’s site http://boringem.org/foam-sm-index/ which has made significant progress in starting to tackle this issue. He has devised something called the Social Media Index (SM-i), which assigns a numeric value to FOAMed sources based on their popularity on Twitter/FB/Alexa/etc. The ‘borders’ of this method include everyone visiting the site (Alexa), but it also takes into account the borders of communities like those with Twitter accounts, those with Facebook, etc. The possibilities are endless here, and I think he’s on to something.
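A popularity index of this general shape can be sketched in a few lines. To be clear, the metric names, the max-normalization, and the equal weighting below are illustrative assumptions, not the published SM-i formula:

```python
# Illustrative sketch of a popularity-based index in the spirit of the
# SM-i: normalize each platform metric so the top site scores 1.0, then
# average across platforms. Metrics and weights are assumed, not the
# actual published formula.

def normalize(column):
    """Scale one platform's raw counts so the largest value becomes 1.0."""
    top = max(column)
    return [value / top for value in column]

def site_index(metrics_by_site):
    """metrics_by_site maps site -> [twitter, facebook, web_traffic] counts."""
    sites = list(metrics_by_site)
    columns = [normalize(list(col)) for col in zip(*metrics_by_site.values())]
    return {site: sum(col[i] for col in columns) / len(columns)
            for i, site in enumerate(sites)}

scores = site_index({
    "blogA": [10_000, 5_000, 200_000],
    "blogB": [2_000, 8_000, 50_000],
})
print(scores)  # blogA averages 0.875: top on two metrics, 0.625 on Facebook
```

The normalization step matters: without it, raw web traffic would swamp follower counts, so each community’s ‘border’ contributes equally to the final score.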

      • Michelle

        I agree – Brent is onto something. Would love to see it maintained and revised over time. More importantly, I hope Brent will publish about this soon (hint, hint Brent).

        I had previously thought about creating a digital impact factor modeled on the traditional journal impact factor, or JIF (calculated as the average number of citations received per paper published in that journal during the two preceding years). The idea ultimately didn’t work, since it’s difficult to define what “citations” might mean for blogs and podcasts, and manually calculating the metric on an ongoing basis would be challenging. Plus, like the JIF, a digital impact factor would be criticized for the ability to “game the system”.
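For reference, the two-year JIF arithmetic just described is simple enough to show directly; all the counts below are invented for illustration:

```python
# Journal Impact Factor for year Y, per the definition above:
# citations received in year Y to papers published in Y-1 and Y-2,
# divided by the number of citable items published in Y-1 and Y-2.

def impact_factor(citations_to_prior_two_years, items_in_prior_two_years):
    """Average citations per paper over the two preceding years."""
    return citations_to_prior_two_years / items_in_prior_two_years

# A journal whose 2012-2013 papers drew 1,200 citations in 2014,
# having published 300 citable items in those two years (invented figures):
print(impact_factor(1200, 300))  # 4.0
```

The formula itself is trivial; the hard part for blogs and podcasts, as noted above, is deciding what counts as a “citation” in the numerator.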

        • Brent Thoma

          Thanks for the shout-out Timothy and Michelle! I look forward to transitioning the Index to be hosted on ALiEM within the next month and getting additional feedback before making a run on a publication.

          And this is a super interesting discussion. I look forward to seeing it continue to evolve as SM/online medical education goes mainstream.

    • Michelle

      Insightful comment, Tim. It indeed boils down to: who should be the “peer” in peer-reviewed publications, and who should be in the “crowd” in the crowdsourcing of peer-reviewed publications? A related question is what comprises a critical mass of crowd members to be considered a valid representation of the community. There are also different types of crowdsourcing metrics; two come to mind: crowd members individually providing a prose review of the content (as in this blog comment thread), and click-based web traffic analytics (such as Google Analytics). I find that I too often lump both of these definitions together when discussing the pros/cons of crowdsourcing.

      • Megan Ranney

        This discussion is terrific, and overdue. I’m so proud of Annals for publishing this News & Perspective piece. And honored to have Dr. Callaham commenting here!

        My thoughts are mostly about the peer review process. Although I am a firm believer in digital health/FOAMed, I defer to those of you with more experience in the realm. I do, however, feel qualified to comment on research and the ways in which technology will influence it. So, in no particular order of importance:

        1. I agree that original research is different from dissemination of research. And that the role of “peer review” should therefore differ for these two entities. However, many of our current “guidelines” have less-than-clear evidence one way or another (e.g., tPA). There is a lot of leeway for interpretation, both in the original research and in the interpretation/dissemination thereof. FOAMed/SoMe can allow for a nuanced discussion of the various interpretations. But it can also allow (as both Michelle & Mike mention) for domination by particularly charismatic voices. Is this better or worse than domination by a well-funded voice?? I am not sure.

        2. As journals experiment more with #SoMe/online access, I think that there may be more blurring of lines of traditional peer review. I don’t know that our existing model (an article being sent to Mike, then to me, then to a few reviewers who may/may not accept, then having me synthesize reviews, then going back to the author) will be sustainable in the long term. As current digital natives begin to explore research, they will want more crowdsourcing and more immediate responses to their manuscripts.

        3. At the same time, there is a lot of value in using “experts” for peer review. People don’t get labeled this for nothing. Granted, our experts all have their own individual foibles, pet issues, and favorite soapboxes. But they also have tremendous experience, education, and perspective. From a personal perspective, I’m not sure that I could have done as good of a job as a peer reviewer when I was an MS-1 as I can now. (I also expect to continue to improve!)

        (….On the other hand, wasn’t that the argument against Senators being elected by popular vote? … that the “people” wouldn’t make the right choices? Hmm.)

        I am very interested to hear others’ viewpoints, and will be back to read more comments!

        Michelle, thank you as always for leading the way. Appreciate your post and your curation.

        • Michelle

          Megan: Thanks for sharing your experiences as a journal peer reviewer. GREAT questions for which I have no answers. I think we all agree that there are faults in both traditional peer review processes and engaged knowledge translation using social media. It’s only with more honest and open-minded discussions amongst those with differing opinions and perspectives that we can evolve as a profession and academic scholars. Thanks again, Mike and Megan for sharing your thoughts!

  • Salim R. Rezaie

    Sorry I am late to the game, but before I talk about the topics at hand, I just wanted to say I am honored to have Michelle as a mentor and to be blogging on her site. When the term trailblazer gets put out there, she is certainly at the front lines of it, always thinking outside the box. It is just amazing to sit back and watch her work.

    So I am new to FOAM. I was asked to join Twitter in February 2013. I must say I was reluctant and skeptical at first. I thought this was just a silly fad, the cool thing to do, but then after being on Twitter for 2 weeks I met Michelle. She contacted me, and we started talking shop at CORD 2013. I had been following her since I was a med student and had heard her speak, but never met her in person. Wow, The Michelle Lin asking to meet me? Well, the rest is history. I am hooked.

    So now to the topics at hand: social media, FOAM, peer-review, and knowledge translation.

    1. The moment I was truly sold and realized that FOAM works was at ACEP 2013. I went to a lecture called the best articles of 2013. We discussed about 10 to 12 articles, most of which were not even 6 months old. There was not a single article discussed that I had not already discussed, read about on a blog, heard on a podcast, or written about myself. Every single thing the speaker was saying, I already knew. What would have taken me probably another 6 months of perusing journals for the game-changer articles had already been done in real time.

    2. Original research is different from translating research for the greater population. But how behind the times are guidelines? How often do guidelines get updated… every three years or so? How many good studies come out in that time that refute the guidelines? This is a real chance to keep current, practice EBM, and translate that information in a quicker fashion.

    3. Social media is quickly becoming an educational tool. A tool is only as good as the person who is using it, just as a peer-reviewed study is only as good as the question being asked or the reviewer who is reviewing it. I don’t know, you tell me… do you want a peer reviewer funded by a drug company with a vested interest in a medication or product, or someone searching for the ultimate truth with no vested interest? It was already noted above that less than half of published research can be replicated. Maybe this is because the system is flawed.

    Ultimately, we need good, original research, but how we peer review it could be more like FOAM & social media…….all I can say is welcome to the conversation!!!!

    • TChanMD

      You’re 100% onto something here… Improvements and changes to the peer review system would be key.

      What is awesome is that PubMed is going to be enhancing its comments section. I think this is amazing.

      This means that we could be doing something like this for every paper we see published… not just for blogs!! 😀

      BUT this article mentions that true *engagement* will be the challenge.

      I think ALiEM is proof of concept of a good engagement strategy, led by people dedicated to it… who aren’t afraid of trying new things and CQI-ing them until they work. If social media presence is done right (accompanied by cultural change!), then I think we can do something.

      Peer pressure and peer review are both required forces in implementing evidence based change. If you think about it… We’re blurring the lines between education-advertising-leadership to change behaviour.

      Something to ponder.

      • Michelle

        Wow, how do you KNOW this hot-off-the-press news? Post-publication comments on PubMed Commons. Genius.

        The amazing job you and Brent do facilitating comments on the MEdICS cases is a prime example of how, while comments are great, a thoughtfully facilitated discussion among commenters builds a higher order of engagement and cognitive participation. Great points!

        • TChanMD

          Awwwww shucks Michelle: Thanks!
          So much to learn though.

          I am still puzzled by MEdICS… What is the draw? Who are the people? Why do some people sign their name and others not?

          I think I need to talk to some online sociologists to help us do an online ethnography…. 😀

    • Michelle

      Great points about how FOAM and social media really seem to be a CATALYST in hastening the knowledge translation of scientific discoveries. Guidelines are a perfect example of where social media can help get the news out more quickly to frontline providers and learners.

      Also, I agree that high-quality social media products and journal publications both depend on the quality and unbiased nature of the writer or reviewer. Both give mere mortals SUPER POWER ABILITIES. “With great power comes great responsibility…”

      • G’day everyone,

        I’ve published a post on this on LITFL here (it includes links not copied across into this comment):

        The article itself features nothing that hasn’t already been discussed over and over on countless FOAM blogs and Twitter conversations, but it is always nice to see these issues being packaged for mainstream dissemination.

        Firstly, yes, we all know that FOAM and social media lack an ‘evidence base’ — just like much of medicine and almost all of education, there are no RCTs or similarly high-level studies demonstrating benefit. This is partly because the use of social media in medical education is relatively new, partly because it is difficult to assess, and partly because those who find it useful just want to get on with it. Attempts to define the utility of social media and FOAM in medical education are to be welcomed.

        There is a discussion of peer review, and whether it should be pre- or post-publication. I believe the traditional peer review process used for medical scientific publications is flawed (see Time to Publish Then Filter?, The Wisdom of Crowd Review and Peer review: a flawed process at the heart of science and journals). Yet, until now, in the context of scientific publication, peer review has reminded me of Churchill’s assessment of democracy: “… the worst form of government except all the others”. In my mind, post-publication peer review must be explored — and Michael Callaham’s commentary at the end of the ALiEM post makes a lot of sense on the whole. In some ways, post-publication peer review is already happening in the FOAM world using social media. Blog posts, such as this one from the Intensive Care Network, have led to corrections in high-profile journals such as the New England Journal of Medicine. Online journal clubs abound, and there are entire blogs dedicated to critical appraisal (such as Emergency Medicine Literature of Note and EM Nerd). However, post-publication peer review needs to be formally integrated into the medical scientific publishing model. Indeed, PubMed Commons is now being trialled.

        I will pull up Michael Callaham on one point though — I think he underestimates the potential impact of social media when he states:

        “The more I think about it, I don’t see social media as providing anything truly new in terms of approach or content. It is simply a much improved technological mechanism for providing a prompt public forum for the thoughts of many who did not get to participate in the process before, and who previously did not overcome the barriers of slow correspondence and publication. Increasing their participation would be a good thing.”

        While I agree that free and open debate — the anvil on which ‘Truth’ is forged — existed before social media, my experience tells me that social media transforms this utterly. We must not underestimate the impact of technology (after all, weren’t the discovery of fire, the invention of the wheel and the Gutenberg press simply technological advances?). Social media has revolutionised my capacity to track and participate in cutting-edge controversies and discussions of the merit of breaking research. It enables me to exchange ideas with some of the brightest minds in my field anytime and anywhere. This is amplified even more for those in remote locations and for those with limited resources. Importantly, whether you like social media or not, it is here and it is here in a big way. Increasingly, social media is integrating into day-to-day living, and anyone interested in the exchange of information ignores it at their peril. The game has changed.

        Michelle also hits the nail on the head with her comment that scientific publication and knowledge translation are different things (together with my colleagues Paul Young and Dashiel Gantner, I have an editorial for Critical Care and Resuscitation on the role of social media in knowledge translation currently in press). FOAM is not scientific research — it is a way of disseminating, discussing, dissecting and deliberating over the products of that research — as well as issues where research findings do not apply, or do not exist. FOAM is more akin to editorials and commentary articles in journals (which are usually not peer reviewed but written by invitation) than to the research articles themselves. Pre-publication peer review can be of use in FOAM for fact checking and so forth, but it also runs the risk of diluting arguments and opinions before they’ve had a chance to live or die in the melee of truly open discussion. What we should strive for in FOAM, however, is scholarship — appropriate referencing to journal articles and the FOAM that came before.

        Over to you.


        PS. Impact factors suck!

  • Michael Callaham

    It’s been exciting to read these intelligent and thoughtful comments… gives me real hope for the change to come.

    I’ve spent about half of my research person-years doing studies on peer review, which have almost exclusively shown that the process is very flawed and that our traditional efforts to improve it don’t work. (Don’t choose this topic as a career path; it’s not the fast track to promotion.) I’ve also attended every International Peer Review Congress for the past 30 years, where all these topics get debated at great length by editors and researchers, and more research is presented about the many weaknesses of the traditional peer review system. The pace of change is exceedingly slow, and the present process very labor intensive.

    One of many reasons we need a better system is that so much more material is now published, and there is so much opportunity to mislead. The institutions of peer review have not been very proactive; journals proclaim high standards of quality, transparency, etc., but essentially almost never enforce them. The funding agencies have not taken their responsibilities seriously either, by demanding that any research arising from their support be submitted for publication to journals that meet specific rigorous quality standards. More importantly, no one demands replication – in fact, if you submit a replication study (supposedly the backbone of the scientific process) it will probably be rejected because it’s “not news”. If our published research were more reliable, and if major study findings had all been replicated a few times, we’d all have to spend much less time interpreting the papers and debating whether we can trust what they claim. (Think how much human energy would have been conserved on alteplase alone!)

    I do think this technological change will dramatically speed up the evolution of peer review, since the barriers to access are so much lower. A larger pool of reviewers can be an enhancement if there are mechanisms in place to sort the wheat from the chaff. My personal model for this is the customer review system on Amazon – all the mechanisms they already have in place would work wonderfully for both submitted and published articles, including the ability to track a group reputation for individual reviewers. The other component we will need is carefully designed research to identify which peer review methods of the future work best, before we implement and institutionalize them. (The current methods were mostly just implemented and then not much reviewed or changed subsequently.) There is a chance to do it really right this time; I look forward to seeing the new models in action, watching their evolution, and seeing their effectiveness carefully measured.
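The vote-based reputation mechanism described here is easy to sketch. The class name, the helpful/total vote score, and the smoothing constants below are illustrative assumptions, not Amazon’s actual algorithm:

```python
# Hedged sketch of vote-based reviewer reputation: readers mark each
# review "helpful" or not, and a reviewer's score is the smoothed
# fraction of helpful votes. All names and constants are invented.
from collections import defaultdict

class ReviewerReputation:
    def __init__(self, prior_helpful=1, prior_total=2):
        # Laplace-style prior so brand-new reviewers start near 0.5
        self.votes = defaultdict(lambda: [prior_helpful, prior_total])

    def record_vote(self, reviewer, helpful):
        helpful_count, total = self.votes[reviewer]
        self.votes[reviewer] = [helpful_count + (1 if helpful else 0), total + 1]

    def score(self, reviewer):
        helpful_count, total = self.votes[reviewer]
        return helpful_count / total

rep = ReviewerReputation()
for _ in range(8):
    rep.record_vote("reviewer_a", helpful=True)
rep.record_vote("reviewer_a", helpful=False)
print(round(rep.score("reviewer_a"), 3))  # 0.818
```

The smoothing prior is the design choice that matters: a reviewer with one helpful vote should not outrank one with 80 helpful votes out of 90, which a raw fraction would allow.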

    • Hi Michael
      Great to see your commentary on ALiEM.
      I’m really looking forward to seeing progress in peer review – fingers crossed for a new dawn.
      I also totally agree that the lack of replication in medical research is a travesty – industry recognises the importance of this, yet we seem not to.

    • TChanMD

      Hi Dr. C:
      Thanks for your comments and responses! I think a big part of the opportunity for social media is the post-production phase. If a great article is launched, and no one reads it… Well, it can’t change practice.

      Brent Thoma has been a big advocate of how KT needs to be done – as an imperative. Especially if you truly believe the answer to be the ‘truth’.

      We are starting to see some of this manifest now… the CRASH-2 investigators have been quite avid in translating their work into sizable and actionable items. Each publication on TXA seemed to lead toward making it very easy for people to walk up to their hospitals and say… We need to buy this, stock this, and give it (within 3 hours). That sort of coordinated, action-oriented research is warranted and needed.

      But similarly, we must make sure we do not forget the explorers and the scientists looking at discovery work. I think the KT of their material needs a much broader platform, and a more articulate discussion.

      For instance, with my own work: to be published soon in the Journal of Graduate Medical Education (JGME) is a qualitative study about consultations in the ED and how residents perceive their relationships changing over time. Well… that’s harder to operationalize. So my job in doing the KT will be different – it will be to create some sort of translation mechanism (perhaps another MEdICS case… or revisiting our first), to bring new perspective to people’s viewpoints. To pose questions. To engage. To make you think.

      I guess, what we need to think about…
      What does great knowledge translation look like?
      How do you do it?
      Who does it?

  • Michelle

    More GREAT comments over at Life in the Fast Lane! http://lifeinthefastlane.com/twitter-annals-em/ The more comment threads the better. Love it that we are increasingly growing the discussion audience.