I have been frustrated (in a good way) by the recent social media discussions (see BoringEM.com) about how medical educators, academicians, and professionals view social media content with a skeptical eye because of its lack of formal quality-control mechanisms.
Common questions from skeptics:
- “Is it peer-reviewed?”
- “How can I tell that it is a quality blog post?”
These are reasonable questions to ask. They cannot, however, be answered with traditional answers.
- Blogs with greater web traffic likely have higher-quality content, since traffic is driven by word of mouth and external links from other websites.
- The power of the crowd “course-corrects” for errors, as Wikipedia demonstrates.
Still, these are not very satisfying answers. Can we do better?
Social media-based medical education (FOAMed = Free Open Access Meducation) has gained much popularity through a grass-roots approach, but it now faces a glass-ceiling effect. Learners gravitate toward it, while traditional educators still shy away from it. Blogs often fall short when compared with print journals, whose formal peer-review editorial system remains the gold standard for quality control despite its known faults and biases.
So this past weekend, I was frustrated into action!
I experimented with several blog models (example) to create a more formal peer-review process. The basic premise is that the power of the crowd should not be undervalued, as demonstrated by the star-ranking systems on commercial sites such as Amazon, Yelp, and Netflix. Think about the last time you revised a purchase based on reviews. Similarly, a star rating displayed at the top of each of our blog posts can help readers, especially less discriminating ones, gauge the quality of the content.
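For the technically curious, here is a minimal sketch, in Python, of how such crowd ratings might be aggregated. Everything in it is hypothetical rather than a description of our actual widget; the smoothed average illustrates the kind of safeguard rating sites are often assumed to use, so that a post with a single 5-star vote does not outrank one with many consistent 4-to-5-star votes.

```python
# Hypothetical sketch: aggregating reader star ratings (1-5) for a blog post.
# The smoothed ("Bayesian-style") average pulls posts with few votes toward a
# prior, so one enthusiastic vote cannot dominate the ranking.

def plain_average(ratings):
    """Simple mean of all submitted star ratings."""
    return sum(ratings) / len(ratings)

def smoothed_average(ratings, prior_mean=3.0, prior_weight=5):
    """Mean shrunk toward prior_mean; prior_weight acts like that many
    phantom votes at the prior, stabilizing posts with few ratings."""
    n = len(ratings)
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)

post_a = [5]                      # one enthusiastic reader
post_b = [4, 5, 4, 5, 4, 5, 4]    # several consistent readers

print(plain_average(post_a), plain_average(post_b))        # 5.0 vs ~4.43
print(smoothed_average(post_a), smoothed_average(post_b))  # ~3.33 vs ~3.83
```

Under the plain average, the single-vote post wins; under the smoothed average, the post with many consistent reviews ranks higher, which better matches intuition about crowd quality signals.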
To make this work, I hope that our readers will help the FOAMed community by rating each blog post that they read using the following criteria, which mirror similar metrics for journal manuscripts:
Under this form will be a public link to the demographic results (sample Google Docs sheet), in case people are interested.
In this past weekend’s experiment, I received 6 peer reviews in the first 4 hours: 1 medical student, 1 resident, 3 practicing physicians, and 1 paramedic, spanning 2 countries. (Thanks to those who responded!) How amazing is that? Contrast that with print journals, which typically have only 2-3 peer reviewers read a manuscript.
Now the question is: How many crowd-sourced reviewers will be needed to demonstrate an adequate quality-control process? I don’t know. The more the better, I assume.
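One rough, back-of-the-envelope way to frame the question, assuming reader ratings behave like independent samples with a standard deviation of about 1 star: the standard error of the mean rating shrinks as 1/√n, so quadrupling the number of reviewers halves the uncertainty. A tiny sketch:

```python
import math

# Rough sketch, assuming ratings are independent with a standard deviation
# of about 1 star: the standard error of the mean shrinks as 1/sqrt(n).
sd = 1.0
for n in (2, 6, 16, 64):
    print(f"{n:>3} reviewers -> mean rating stable to about +/-{sd / math.sqrt(n):.2f} stars")
```

By this crude yardstick, even this weekend’s 6 reviews pin the average down to within roughly half a star, and the gains diminish past a few dozen reviewers.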
Update 5/31/13:
Due to lackluster results (very few people actually assessed the articles), we discontinued this project and removed all of the star-rating headers upon transitioning to this WordPress blog.