
Social Media Index: Controversy and Evolution


The Social Media Index was moved from BoringEM to ALiEM on the morning of Thursday, November 21st. The increased exposure brought my previously obscure little prototype a lot of attention. By that afternoon Dr. Scott Weingart (@EMCrit) had weighed in with an audio response critical of the index and requested that EMCrit be removed. This set off a lively discussion on Twitter as a good chunk of the FOAM community joined in.

In my audio response (embedded in the original post), partly in reply to Dr. Weingart's, I explain why an index for FOAM will help learners, educators, and researchers.

Two Goals for the Social Media Index

As mentioned in my audio response, the two goals I had for the index were to start a conversation and to measure the impact of FOAM in a way that is useful for FOAM learners, educators, and researchers.

Goal 1: This was met during the Twitter conversation three days ago. If you missed it, Tessa Davis (@TessaRDavis) and Teresa Chan (@TChanMD) did a spectacular job of curating the discussion into some consensus points posted today: Lessons Learned from an Impromptu Twitter Consensus Conference on Blog Design.

Goal 2: Some members of the FOAM community argue that the second goal is either not worthwhile or not being met by the index. While I disagree, I do think there is room for improvement. In particular, I think Dr. Weingart’s criticism of the index for using ordinal variables was spot on. I took that feedback into account and revised the index as outlined today on the Social Media Index page.

I expect that in the coming days the FOAM community will continue to weigh in on this topic and we will come up with a standard way to respond to requests to “opt-out” of the index. We look forward to everyone’s responses as we continue to move FOAM forward.

Brent Thoma, MD MA
ALiEM Associate Editor
Emergency Medicine Research Director at the University of Saskatchewan
  • Jason Sanders

    Some things to consider:

    Redundancy of the index components? Are they overlapping somehow and dragging the index scores in a certain direction? I suspect ALiEM has a higher Facebook presence due to demographics and the energy behind this avenue of engagement.

    Should each component of the index have equal or unequal weights, and where do the weights come from? This impacts which components we feel are, or should be, most associated with the outcome represented by the index.

    Using relative values by standardizing to the number of included blogs vs absolute values? Relative values alter the interpretation to ones of relative presence on the web instead of absolute presence.

    Similarly, rather than using relative values for the outcomes (the rank number), present the raw index score. The raw score can also be mapped back to a Z score for overall standardization.

    The functional form of each component? What mathematical alterations if any should be performed on the raw data? And why?

    Benchmarking the index against a known standard? I know a standard doesn’t exist to measure FOAM, which is a reason to develop the index, but potentially see how it correlates with other known markers of web presence.

    Once the methodology is tested more we can begin using the index as a research tool, as Brent suggests.

    • Brent Thoma

      Hey Jason,

      Have you jumped over to the most recent update of the SM-i? It actually did take a few of these points into account.

      Specifically, the raw values are now presented in the chart and there is no longer any ranking for each component. Each component score is calculated as an absolute proportion of the highest score for that component, and the component scores are then added together. The only mathematical alteration performed on the raw values was on the Alexa score. Because it is inverted (a low rank is 'better') and spans huge ranges of values, the math was a bit more complicated and square roots were used. This made the values distinguish between sites in a more meaningful way.
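
      To make the arithmetic concrete, here is a minimal Python sketch of the scoring scheme described above. The component names, example numbers, and the exact Alexa transformation are illustrative assumptions, not the published SM-i formula.

```python
import math

def component_score(value, best):
    """Score one component as an absolute proportion of the highest value."""
    return value / best if best else 0.0

def alexa_score(rank, best_rank):
    """Alexa ranks are inverted (lower is better) and span huge ranges,
    so compare square roots of the ranks rather than the raw values."""
    return math.sqrt(best_rank) / math.sqrt(rank)

def sm_index(site, best):
    """Sum the component scores for one site; `site` and `best` map
    component names to raw values (`best` holds the top value among all sites)."""
    total = 0.0
    for name, value in site.items():
        if name == "alexa":
            total += alexa_score(value, best[name])
        else:
            total += component_score(value, best[name])
    return total

# Made-up numbers: 5,000 Twitter followers vs a best of 20,000,
# 1,200 Facebook likes vs a best of 8,000, Alexa rank 400,000 vs a best of 100,000.
score = sm_index({"twitter": 5000, "facebook": 1200, "alexa": 400000},
                 {"twitter": 20000, "facebook": 8000, "alexa": 100000})
# 0.25 + 0.15 + 0.5 = 0.9
```

      Note how the square root softens the Alexa gap: a site ranked four times worse scores half, not a quarter, of the best site.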

      Anyways, take a look. I'd be interested in your answers to your own questions about how it can be done better!


      • Jason Sanders

        Hi Brent,

        I just saw the actual equation, sorry for not noting that before. I agree with the changes, especially the switch to absolute values. The other questions still remain though. I think some of these issues could be examined one by one in a sensitivity analysis to see how labile the raw scores are to changes in the calculation. Mapping the final raw scores onto a Z-score might be useful for normalization seeing as the scores are skewed, but that’s a different question again.
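
        The Z-score mapping suggested here is straightforward; this sketch (with made-up raw scores) is just for illustration.

```python
from statistics import mean, stdev

def z_scores(raw_scores):
    """Standardize raw index scores to mean 0 and (sample) standard deviation 1."""
    mu = mean(raw_scores)
    sigma = stdev(raw_scores)
    return [(s - mu) / sigma for s in raw_scores]

# Hypothetical raw SM-i scores for five sites:
normalized = z_scores([0.9, 1.4, 2.1, 0.6, 3.2])
```

        Each normalized value then reads as "standard deviations above or below the average site", which makes skewed raw scores easier to compare.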

        As you initially stated, this should be guided by overall goals for the use of the index. Then the math follows to make sure the index is as robust as possible to meet those goals.

        The evolution will be great to follow.


  • emcrit

    Brent, Very good response; especially the part about SM-i as a research tool. I will stop grumbling about EMCrit’s presence, I’m just not going to look.

  • Michelle

    Great explanation, Brent. My perspective on this all revolves around moving education towards a more learner-centric approach, rather than the outdated teacher-centric approach. In reality, you need both the learner and teacher for education to occur, but too much focus has been on the latter (I’m partly guilty for that as well). With that preface, here are some comments:

    1. It’s about the LEARNER.

    2. FOAM seems to me to be increasingly a community OF educators rather than a network FOR education. It was once a fringe idea that quickly took hold, and we now consist of 200+ social media sites. If FOAM wants to remain, and ONLY remain, an informal community of likeminded educators, then that's great. But can we do better? Can our collectively great content help learners in their own education? Without something more formalized, a little more structured, and studied, it will be hard to get buy-in from medical schools, residencies, and CME organizations. I personally would love to see FOAM grow more, and we can only do that if we have a solid foundation to build upon.

    3. I hadn’t thought about the issue of SM-i being a platform so that those of us writing FOAM content can bring this back to our departments to show that we are indeed actually doing something with our academic time. From a sustainability model, I’m not sure how long I can personally keep up this endeavor on my own negative-dollar budget. I’m losing money but gaining so much more personal and professional fulfillment. Something like a SM-i might help me balance this a bit.

    4. If I were a junior medical student interested in learning about EM or a residency director new to FOAM who wants to incorporate it into her curriculum, the FOAM list of 200+ sites would be quite daunting. Most learners DON'T have access to people who already navigate the FOAM resources. So why NOT have a multisystem, crowdsourced listing that shows at least which sites are most visited and most active? Not all sites will fit what they want in their personal learning environment (PLE), but learners are smart enough to figure things out with this framework as a start. I see this index more as something that welcomes new learners who want to use FOAM as a learning resource than as something that pits FOAM producers against each other to up their rank.

    5. The SM-i has undeservedly been compared to the impact factor for journals. The impact factor, by the way, only factors in (1) the number of times an article is cited by other journals and (2) the total number of citable items in the journal. Journals often “game the system” because they benefit from upping their impact factor score. It does not, in fact, factor in the end-user: the readers. The SM-i instead factors in global end-user behavior in a crowdsourced way… and we all know the power of the “crowd” (see Wikipedia).

    6. Scott, what do you have against Facebook? I have lots of lurking followers from Russia, Eastern Europe, and India/Pakistan. I think we should make sure quality educational material is on as many platforms as possible to reach all potential learners.

    Again, it’s all about the LEARNER. Let’s reset our perspective.

    Just some of my initial thoughts. Looking forward to seeing how this evolves.

    • Craig, CEN, CCRN

      It’s always interesting how different people can look at the same thing and see something completely different.

      I see the index as moving it AWAY from the learner. That was always one of the strengths of Web 2.0 and the broadcast-like way it spreads: it's put out to the world, and the world tunes in to what it wants/needs.

      I see the index as making it more about the producer/educator and less about the learner. But I admit up front I tend to be cynical about things like that.

      Maybe the issue is more about how it's being sold/marketed than about the tool itself. As I've looked at and thought about it, it could be a convenient tool for a new blogger/podcaster to see what works for others, making their voice a little easier to hear through the static.

      In the end it’s a tool, just like many others out there. The cynic in me says that the personal learning concepts that brought us here in the first place will become what we were moving away from. It’s just happening faster now than it used to.

      How about adding to the index a way to evaluate how an established site/person is mentoring others and bringing their content out to the world?

      • Michelle

        Thanks for commenting, Craig. Your insights are always welcome and help us avoid the traps of a “group think” where the momentum of the group takes us completely in the wrong direction.

        Agree that the “marketing” of the SM-i didn’t go as well as I had hoped. It was indeed meant as more of a tool for learners. It’s not perfect, but it’s at least a framework for novice learners and early FOAM producers to look at.

        I think the issues we are talking about here are good problems to have. This means that FOAM has outgrown its fringe- / novelty-status and is showing itself to be a substantial player in medical education. We have to evolve to keep up, while also trying to maintain our core missions.

        I like your suggestion that being a “citizen of FOAM” should also include paying it forward with mentorship. It would be very interesting to measure. Current mentorship models in academia are hard to quantify; mentorship currently just lives on a CV as a list of your mentors and mentees. That doesn't mean, though, that we can't brainstorm ways to address this!

      • Brent Thoma

        Agree with Michelle – thanks for the insightful comments.

        Regarding additional metrics like mentoring others – I think it’s a spectacular idea and would love to incorporate it! I have a heck of a lot of people to thank for supporting me when I started out and continually helping to improve my content (Michelle and Mike Cadogan being on the top of a very long list).

        Quantifying that is the tough part. If you have any ideas I’d love to hear them. I really do hope this index evolves / helps to launch something more rigorous.

  • Joel Topf

    I think the use of Twitter and Facebook is very gameable. People can buy followers and likes. For these components, have you considered leveraging the Klout Score, or something similar?

    • Michelle

      I haven't heard much about the Klout Score, although I presume that it is also gameable. Also, I don't think there's an easy way to find out other people's Klout Scores (it requires logging into Klout just to find my own score). That being said, I think this may be worth pursuing. Thanks for the great idea.

      “Gameability” is going to be an ever-present flaw of any ranking system, whether it be a social media index or a journal impact factor. The question is whether we can keep minimizing it. Thanks for commenting!

    • Brent Thoma

      Thanks for the comment.

      I hear what you're saying. That would certainly be 'gaming' the system. I see that as the big trade-off with this type of thing: the more transparent we make it, the easier it is to game; the more secretive, the sketchier it looks. Regardless, I think people try to 'game' most systems. The best way to combat that, in my mind, will be to continue to develop better tools to evaluate quality. As we do, a ranking system like this will become less and less relevant.

      Regarding Klout, I agree with Michelle that I don’t understand it completely. I thought it did a similar thing – looked at followers and interaction among various social networks. If that is the case, wouldn’t it be possible to game it in the same way?

      • Michelle

        Hmm, here's a crazy thought. An extreme option would be to JUST go with Alexa, which is a generally accepted global index of website popularity and impact. Its ranking methodology is super-secret and thus less prone to gaming. I think it's established enough that people don't find it too “sketchy”.

        • Brent Thoma

          Interesting thought. Whatever their methods are, though, they seem to pay a lot of attention to SEO. Don't Forget the Bubbles and BoringEM are disproportionately high on it relative to all other indicators. I do think the full index is more robust 🙂