
Discussing Annals EM article: Social Media and Physician Learning

I was delighted to see the News and Perspectives piece in this month’s Annals of Emergency Medicine about “Social Media and Physician Learning” (free PDF). I had totally forgotten that Jan Greene, the author, had called to talk with me several months ago. In the piece, she discusses many of the issues with which I struggle:

  • Is peer review good or bad?
  • What is the role of blog and podcast sites in the future of medical education?
  • Given how easily anyone can be “published” on a blog, how can one judge the trustworthiness of open educational resources such as FOAM?
  • Can or should social media education practices be held up to the rigorous scientific standards of original research?

Here are some noteworthy quotes:

Do online collaboration and FOAM-like teachings make a difference?

There aren’t a lot of data to support the idea that physicians learning through online collaboration is any better than the traditional ways. A recent review of the topic found 14 useable studies, only 1 of which was randomized, and concluded only that the technique merited further investigation.

Should online teaching methods of vetting medical ideas be held to rigorous scientific methods?

And although some of that work is ongoing, particularly in such venues as the Journal of Medical Internet Research, they [FOAM boosters] acknowledge the field is so new there’s not a lot to go on so far. “That’s the golden ring of social media,” said Bryan Vartabedian, MD, a longtime medical blogger (33 Charts blog) who develops programs in digital literacy at Baylor College of Medicine. “It’s difficult to achieve, and reliable outcomes have yet to emerge.”

Should critical appraisal of the literature be pre- or post-publication? Where should this appraisal take place?

Dr. Vartabedian supports the idea of the democratization of scientific review online, particularly compared with the traditional debate in journal letters to the editor, which are limited. “I don’t need the New England Journal of Medicine or British Medical Journal as a platform for being critical of any study,” Dr. Vartabedian says. “There are other venues for dialogue besides journals. This idea that we need to be having these conversations within the confines of some traditional construct of a journal. We’re on top of the most remarkable shift in modern medical history, and every physician has the capacity to offer their views.”

… Dr. Mesko agrees that traditional peer review doesn’t have to be challenged by the rise of social media. “These processes should be totally separated to make sure content that is academic must keep its academic nature,” he said by e-mail. “But when we need information and don’t know who might have the answer for our questions, curated social media channels can be unbelievably useful.”

Learning in the social media arena is going to happen whether the education community wants it to or not. How do we deal with this moving forward?

Older professors may find themselves challenged by a student wielding a Twitter post relevant to a clinical choice being made, and they should welcome that discussion, argues Lin. “We have senior faculty who are amazing clinicians and read journals, but then we have this whole group of residents who are immersed in social media, and they do bring an amazing amount of interesting content and different ways of thinking about the literature.”


Aren’t we talking about two different things: Research and Knowledge Translation?

Personally, I believe we may be talking about two different issues, as Dr. Mesko alludes to above. On one hand, there is original research: scientific discoveries published in peer-reviewed journals. On the other, there is “knowledge translation”: the interpretation and implementation of journal publications into bedside practice. The two have a complementary, yin-yang relationship.

Blogs and podcasts are playing an increasingly important role in the latter, accelerating the knowledge translation process. There should be a scholarly approach to assessing the quality of educational blogs and podcasts, but it should differ from the standards and approaches applied to original research. So I find it confusing when we lump original research together with FOAM-like educational endeavors. One is trying to find scientific truths; the other is trying to teach those truths to frontline providers to improve patient care. Maybe we should be comparing blogs and podcasts to textbooks rather than journal publications?

That being said, I think the FOAM and social media communities have a lot to gain by learning what has and hasn’t worked in the traditional publication world (and maybe vice versa):

  • Is peer review perfect? No.
  • Can it be valuable if used appropriately? Absolutely.
  • Is there value in crowdsourced peer review in the form of blog comments? Sure, if there is a critical mass of high-quality and thoughtful comments.


Expert Peer Review Experiment

This is partly the reason why we on the ALiEM blog are experimenting with an expert peer review system for selected blog posts. We believe we are the first to formally pioneer this process for educational blogs to help vet the accuracy and quality of our content. Because we typically do not receive many comments on the blog, we don’t consistently have crowdsourced feedback. So instead we target experts such as topic-specific authors from cited publications or nationally recognized topic-specific speakers. Many have been gracious in participating. Their comments are included with the original blog post on a pre- or post-publication basis.

See examples of expert peer review comments on clinical content.

Thoughts and comments?

Let’s keep the momentum going on these hotly debated topics. I would love your thoughts.

Expert Peer Review

A Brief Historical Perspective

Although we all probably think of peer review as the traditional process of a couple of experts poring over a manuscript for a journal, it’s worth looking a little further than our own field of medicine (which, by the way, is rather poor at learning relevant lessons that other disciplines could teach us). In physics and mathematics, peer review has for decades followed a model of massive pre-print review: the new paper is shared in full with much of the scientific community (not just 2 or 3 reviewers). They read it and debate it back and forth, and the author participates in that debate and revises the paper until agreement is high, at which point it is published in final form. There is much less post-publication review (because it has already been done), and given the relatively small size of the audience, there doesn’t need to be: many readers have already seen the paper or dug directly into the data (authors typically share their databases as well, so it is harder to obfuscate).

Regarding Peer Review

The original 18th century concept of scientific publishing was that, after some initial prepublication discussion at regular meetings of a society of experts (most of whom knew each other well), the piece would be published. (For perspective, this was back before the word “scientist” had even been invented.) Contemporary experts have repeatedly made the point that in our present system, “real” peer review happens after publication, not before, and we should expect that. There may not be as much correspondence as in physics or mathematics, but every subsequent paper on the topic is, in a fashion, a review and critique of the prior paper, and hopefully an improvement on it. Of course, in that ideal system there would also always be replication of the original study, which almost never happens.

Recent research shows that even in the world’s top journals, basic science research by the world’s elite researchers can be replicated less than 50% of the time. Think about that sobering number for a minute (and it represents the best-case scenario; the figure for run-of-the-mill journals is probably dramatically worse). Nobody in academia or at the NIH revealed that crucial omission; the only reason we know this depressing fact is that venture capitalists got tired of wasting their money on “breakthroughs” reported in journals like Nature and Science, which turned out to be dead ends when pursued for product development, so they now attempt to replicate findings before they invest. And the majority fail the test.

The Ideal

So I would say the ideal review process would start with public presentation and critique (as at scientific meetings, though hopefully in a modified format), plus the tedious pre-print review done in our current journal model, but ideally by far more than 2 or 3 reviewers. (In other fields, reliable and reproducible judgment requires up to a dozen reviewers, which is more like the number on research grant review panels.) That’s not practical in terms of resources. But there should then additionally be vigorous and ample post-publication debate and critique, taking advantage of the other smart and thoughtful people in the field, which would be summed up and incorporated into the paper in some way. (Many surgical journals once had a tradition of publishing, at the end of a manuscript, several pages of commentary and critique by a few surgeons expert in that area.)

Quality Content Takes Time and Ongoing Discussion

The more I think about it, the less I see social media as providing anything truly new in approach or content. It is simply a much improved technological mechanism: a prompt public forum for the thoughts of the many who did not get to participate in the process before, and who previously could not overcome the barriers of slow correspondence and publication. Increasing their participation would be a good thing. As Dr. Lin points out, smart clinicians and young readers have always been out there, but now they are heard more often. Some of their ideas will be good, some will not, and discussion should help sort that out. The downside is that you can’t arrive at really well-thought-out and trustworthy conclusions in a matter of minutes, even if you were at the top of your class, so good conclusions about new approaches will always take time.

This new medium also makes it much easier for a charismatic and articulate person to disseminate mistaken views as quickly and widely as good ones. That might not matter much, except that such commentators are doing a different kind of review. Traditional peer review, as tedious and laborious as it is, spends a lot of time prying out the details of methods, design, and so on, looking for the flaws that are so frequent (in fact, fatal flaws affect the majority of submissions). At Annals, virtually no articles, even very convincing ones, are accepted without additional queries about such “sausage factory” questions, which then lead to additional detail and revisions in the final published paper. These issues are often not immediately obvious even with the full traditional manuscript in front of you, and they cannot be assessed at all if you are working only from an abstract or a media report of a meeting.

Michael Callaham, MD
Founding Chair and Professor of Emergency Medicine, UC San Francisco; Editor in Chief of Annals of Emergency Medicine
Michelle Lin, MD
ALiEM Editor-in-Chief
Academy Endowed Chair of EM Education
Professor of Clinical Emergency Medicine
University of California, San Francisco