Social Media Index (SMi)

Published: 2017-06-23


Social Media Index (SMi) – BETA

Academics have created indices to compare both scholars (h-index) and journals (Impact Factor) within a given field. The websites that contribute to the creation of Free Open Access Medical Education (FOAM) do not have a comparable parameter. This makes assessing the impact of each website challenging for both content producers and their supervisors. Additionally, learners may find it difficult to distinguish between reputable and unproven websites.

The Social Media Index (SMi) was developed to address these problems. It is a comparative index derived from three easily obtainable indicators:

  1. Alexa Rank (of the website)
  2. Twitter Followers (of the most prominent editor)
  3. Facebook Likes (of the website’s page)

For each website, these 3 indicators are normalized and added together to give a score ranging from 0 to 10. The details of the derivation and validation of the SMi are outlined in a freely available article in the Western Journal of Emergency Medicine.


(DOI: 10.5811/westjem.2015.1.24680)1

Update: Removal of PageRank

Changes in the online world have required further revision of the SMi formula since the data collection for the WestJEM article in 2013-14. Specifically, Google has stopped updating PageRank, which prevents its inclusion in future versions of the SMi. As outlined in the WestJEM article, Alexa correlated quite highly with both metrics of journal impact (when applied to medical journals) and PageRank. Additionally, almost every website had an Alexa value. For these reasons we have elected to double the weight of the Alexa Rank in the calculation to replace the contribution of PageRank. The calculation of the Alexa Rank component has also been modified slightly by fixing the lowest-ranking websites to a value of 0 for that indicator (this change increases the range of values and makes it more consistent with the other variables). Going forward, we will monitor how these changes affect the SMi in the interest of keeping it a helpful and user-friendly metric.

Incorporating these changes, the formula used in this version of the SMi is:

SMi equation

where max = maximum value, min = minimum value, and site = the value for a particular website.
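As a concrete illustration, the calculation described above can be sketched in Python. This is a hypothetical reconstruction, not the published formula: the log scaling, the exact weighting, and all function and variable names are assumptions based only on the description above (three min-max-normalized indicators, Alexa inverted and double-weighted so the worst-ranked site scores 0 on that component, total rescaled to 0-10).

```python
import math


def smi(alexa, twitter, facebook, bounds):
    """Hypothetical sketch of an SMi-style score (NOT the published formula).

    bounds: dict mapping 'alexa', 'twitter', 'facebook' to the
    (min, max) values observed across all ranked sites. Log scaling
    and the 2:1:1 weighting are assumptions.
    """
    def norm(value, lo, hi):
        # Min-max normalize a log-scaled value to [0, 1].
        return (math.log(value) - math.log(lo)) / (math.log(hi) - math.log(lo))

    a_lo, a_hi = bounds['alexa']
    # Alexa Rank: lower is better, so the component is inverted;
    # the lowest-ranked (worst) site scores 0 here, as the text describes.
    alexa_component = 1 - norm(alexa, a_lo, a_hi)
    twitter_component = norm(twitter, *bounds['twitter'])
    facebook_component = norm(facebook, *bounds['facebook'])

    # Alexa is double-weighted (replacing the retired PageRank),
    # then the total is rescaled to the 0-10 range.
    total = 2 * alexa_component + twitter_component + facebook_component
    return 10 * total / 4
```

A site that tops every indicator scores 10; one at the bottom of every indicator scores 0.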

Notes on the latest update

Thanks to Dr. Puneet Kapur (@kapurp), data for this version of the SMi were collected in November 2016 by an automated computer program. The rankings from previous iterations (June 2015, November 2015, and June 2016) are included. For those interested in the ongoing development of the SMi, data collection for a study evaluating its ability to predict blog post quality is complete and should be published sometime in 2017.


As this is still a BETA version, we would appreciate your assistance in correcting any errors (Did we miss an Alexa Rank, Twitter Account, or Facebook page? Is our data inaccurate?). Feedback on the SMi can also be provided by commenting on this page or tweeting @Brent_Thoma.

June 15 | Nov 15 | June 16 | Nov 16 | Website | Alexa | Twitter | Facebook | Social Media Index
1111Life in the Fast Lane1468618900196569.4
3333Academic Life in EM28842214251201776.62
4544EMS 12 lead1231875172921058756.1
6665Dr Smith ECG9891758674309275.81
131076Emergency Medicine Cases7134673669153435.65
9987REBEL EM8673071047640615.52
161298Intensive Care Network1291531999957535.35
2320129EM Docs561184313018735.33
5151010St. Emlyns14961151156030605.17
15141412Ultrasound Podcast18227001085113414.88
19171613EM Lit of note1662274785012564.84
25221715EM Basic1835898845812924.81
18131515The Poison Review1503948553112684.81
20231817The SGEM2025669558125414.8
22181918EM Updates150226665018244.76
43354119First 10 EM80423418805674.74
NRNR2720Core EM1650934119944074.72
17282021Resus ME2682763127706764.59
26262522Ultrasound of the Week165209834779014.58
32292823Taming the SRU163857827698074.51
28272624ECG Experts88509925802159464.45
21252225Free Emergency Talks6888844711835274.33
NRNR2127Critical Care Practitioner Podcast357402351868024.26
NRNRNR28The Bottom Line320901057294584.22
31313128EM Lyceum3312424119942024.22
42413330Emergencypedia 173924746618044.21
41432331Critical Care Practitioner357402328648024.12
34473432ED Exam544004926229903.94
45333234Everyday Medicine11562515105069203.79
37443736PEM Blog380466910262673.61
72536137EMJ Club408708511172473.58
39484639Rogue Medic1998306459803.45
68627541Western Sono412908111471213.44
7215142Dont Forget the Bubbles24701889254015233.39
116374844Anaesthesiology and Critical Care989175189703.33
33494946PedEM Morsels1409361302703.25
NRNR3948Flight Bridge ED1692697314985683.21
29635251RAGE podcast35238641475103.16
55464252Sinai EM759055710841213.15
49394052EM Curious1078156623301003.15
36596254Broome Docs2304496560503.14
66545755EM Res Podcast2470188930663353.13
35615656KI Docs2590560667703.12
NRNR5557Intensive Blog2041258367503.1
69565958ICU Rounds2470188917864123.04
52386359Tox Talk2470188927702213.02
94827860Resus Review1658024127302.95
54605361Trauma Professionals blog2542088311602.94
NRNR4562Pharmacy Joe3728510604402.92
58424463A life at risk145845304763372.88
51655864The PEMNetwork117297752394322.86
70787065Rolobot Rambles4509593653902.84
1408010266Emergence Phenomena119475332802.8
NRNR7667HEFT EMCast3430246307702.79
40666667EM IM Doc4196964447902.79
NRNR8769Ditch Medics182681919021772.77
85779270Quiz EM1687733013710312.75
NRNRNR71The Bottom Line4219408281002.67
79698371Expensive Care2470188912421032.67
59768274EM Tutorials262010790102.63
871456874EM in 53399340153002.63
38555477Emergency Medicine Ireland7170179570902.6
76577378Sinai EM Media3564944135402.57
46858178Manu et Corde4742391230402.57
NRNR7278RCEM FOAMed Network8179371624602.57
80726982Edwin Leap5358983270002.55
NRNR10185Clinical Monster339514492902.51
1481529786EM Medicine PharmD4569252138202.46
103748686PEM ED7414716333702.46
104899388Flipped EM Classroom5024959119602.38
977510889Emergency Education444296085902.36
86738489Hennepin Ultrasound24701889548582.36
NRNR6791Humanizing Intensive Care12866576547902.34
77649895Wessix Intensive Care Society7843108192302.3
108939696Rural Docs18625392667702.24
90676597The EDE Blog23525911537262.21
64957498The Sono Cave7379067115902.2
11210612998SoBro EM8580129149102.2

*Sites without an Alexa Rank score were given the maximum rank (the same as the lowest ranked website).

Thoma B, Sanders J, Lin M, Paterson Q, Steeg J, Chan T. The social media index: measuring the impact of emergency medicine and critical care websites. West J Emerg Med. 2015;16(2):242-249. [PubMed]
  • Carlo D’Apuzzo

    Good idea, even though, in my opinion, blogs are not journals and their first aim is sharing ideas and experiences more than teaching. You consider only 60 of the over 200 emergency medicine blogs categorized into FOAM, all in the English language. It would be nice to extend this evaluation to all members of the FOAM family. Regards

    • Brent Thoma

      Hey Carlo!

      Thanks for the comments.

      I think the aim of a blog is set by its writer. Certainly they can be used for sharing ideas and experiences, but in some cases (like ALiEM) their purpose is to provide resources for community learning. Personally, I have a meded blog (BoringEM) in addition to my personal blog, so that I can meet both purposes.

      I agree that the index is incomplete. I am very aware of the amazing work that LITFL has done putting that list together and I am currently making plans to utilize it. Expect the next update to either (1) include everyone or (2) to list detailed exclusion criteria. While I’ve moved it out of ‘pilot’ phase it is still a work in progress!

      Look forward to further discussion!

      • Michelle Lin, MD

        Thanks for sharing your excellent points. You beat us to the punch! Brent’s hard at work working with LITFL to bring in all the cataloged blogs.

        Personally, when you are talking about “peer review” value, journals and blogs should definitely be separate discussions. When talking about “impact” though, maybe journals and blogs are closer together than we think. Both are producing content (broadly defined) for the purpose of getting information to readers/learners. So perhaps a “digital impact factor” measure might be worth pursuing. Haven’t seen anyone else try to quantify this and I applaud Brent for pushing this idea ahead. There’s no perfect measure, but we, as FOAM content producers, at least should try before external entities try to devise measures for us…

        • Carlo D’Apuzzo

          Michelle, Brent thank you both for your prompt and kind replies.

          I think there is still a strong debate about the role of blogs and websites in medical education. As a FOAM enthusiast I strongly support this spontaneous and direct way to share our everyday ED experience and debate new insights in emergency medicine all over the world. Each of us can contribute. Finding a way to measure what is worth following can help us do better for our patients, and I thank Brent for his attempt to make FOAMed more transparent.

  • emcrit

    Folks, can you discuss the decision to reduce the individual variables to ordinals rather than keeping them intact? Also, you may want to recheck the PageRank on NNT (not that I don’t love those guys).

    • Graham Walker

      I hereby stand by my website’s PageRank! 😉

      • Brent Thoma


        Agree with Graham, the PageRank I get for TheNNT (and BestBETS) using my two standard methods (the Google PageRank Checker and a Widget) is 6. Are you finding something else?

        Regarding the conversion of variables to ordinals – I would be super interested in hearing your thoughts on how we could do it better. The reason I went in that direction so far was because I needed a way to weigh each of the variables more equally so they could be amalgamated into the SM-i.

        However, your question got me thinking about better ways to do that. I.e., I could present the actual values in the chart above, then normalize each component on a scale of 0 to 200 based on the value of the top-ranked site. These normalized, continuous component scores could be added to calculate each SM-i. That would present the data more concretely, allow amalgamation into an SM-i, AND prevent the data loss that results from rank-listing, giving more accuracy. The max score would still be 1000, but only for a site that tops all 5 parameters.
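        The normalization proposed in the preceding paragraph could be sketched as follows (a minimal sketch, assuming linear scaling against the top-ranked site; the function name and the linear scaling are hypothetical, and the final formula may have differed):

```python
def component_score(value, top_value, scale=200):
    """Scale one raw indicator linearly against the top-ranked site's
    value, per the 0-to-200 proposal above (hypothetical sketch).
    Summing five such components would give a maximum of 1000."""
    return scale * value / top_value
```

        For example, a site with half the top site's Twitter followers would score 100 of a possible 200 on that component.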

        I’ll play around with what that looks like later today. In the meantime I’d appreciate any thoughts you have on that proposal or something different.

        Thanks for the feedback!

        • emcrit

          I see what you did now: if there was a tie, i.e. NNT and BestBETS, you put them both as 1 and then eliminated 2. You may want to consider what that does when compared to the alternative. As to the rest, wait for the post.

          • Brent Thoma

            Exactly. PageRank is the only one where that happens because, with such a blunt indicator, there are a lot of ties.

            And I look forward to it – I think some method of comparative analysis is helpful for both producers and users of FOAM, but want to make it as good as possible.

            Thanks for the comments.

        • Michelle Lin, MD

          Yup, I’m looking forward to Scott’s comments. Only through iterative revisions can we move towards a more accurate measure. I’m curious what Brent’s numbers would look like if you change the ordinal/rank-based approach to a more normalized approach. Great discussion.

  • emcrit

    I think this is a much better version of the SM-i. I would still like to be removed if possible.

    • Ryan Radecki

      Perhaps the maths are improved, but I still don’t think this serves the learner that profoundly.

      It’s just a snapshot of current popularity – which, again, probably only has a loose association with quality. This doesn’t recognize good content until _after_ everyone else has recognized the good content and started linking/following it. Perhaps it’s reliable for a new user to discover some decent content … but it’s measuring a ghost of past momentum. If we start directing new trainees to this, it’ll persistently prevent good, new content from gaining a deserved foothold in the EM mindshare.

      • Brent Thoma

        Have you listened to the rationale in the related post that went up this morning?

        I agree that it will be a lagging indicator; however, reputations aren’t made overnight. Harvard, NEJM, and LITFL have been around longer than other institutions in their fields, and that gave them a head start, but the index does seem to respond to new entries. REBEL EM is only a couple of weeks old and it’s halfway up the index. If it had a PageRank (this takes more time than the other indicators, but not a large amount in the grand scheme of things) it would have already been at #16. I would be concerned by a system that allows new pages to shoot up and down like crazy – consistency is part of quality.

        • Ryan Radecki

          I don’t think REBEL EM is a reasonable citation for “newness” – Sal’s been around for a year, been involved with the FOAM community pretty heavily, and was launched in a sense from ALiEM. I still say this is a delayed measure of content popularity.

          • Michelle


          • Ryan Radecki

            Alternative? I don’t think there’s a human or automated method capable of devouring all currently published content and grading it without limitations.

            I’m simply playing Lucifer’s Advocate and pointing out this an interesting tool for measuring current popularity, but I wouldn’t necessarily generalize it to more than an indirect proxy for quality.

          • Brent Thoma

            That’s true, but it is an example of a site moving up very quickly. I’d be interested in how fast you think it should be able to happen? There is going to be some lag with almost any indicator – these are probably faster than most. BoringEM isn’t even 1 year old – that’s still quite young in the scheme of things.

      • Leon Gussow


        I don’t agree. I think all the sites mentioned have been very active in recognizing good new blogs/podcasts and recommending them to their followers. Without that effort, and given the incredible increase recently in the number of new projects, it would be more difficult for those just starting to find valuable innovative sites.

  • Chris Cresswell

    Should we have a measure of podcast downloads? Looks like a big chunk of FOAM that isn’t being counted. Are we overvaluing FB and G+ given that relatively few use these media? But great work so far! Thx

    • Brent Thoma

      Thanks for the comment!
      That would be great, the only problem is that I don’t know an accessible way to get that information.
      While podcasts are not explicitly tracked, they do seem to do quite well through followership on the other indicators – the top site and 5 of the top 10 are podcasts.
      As this evolves we’ll be doing some factor analysis and looking more closely at the contributions of all of the metrics. Hope to see it get better 🙂

      • Chris Cresswell

        Understood. Gr8 work Brent. Thanks for the reply

  • Casey Parker

    Hi Brent
    I can see why this might be of interest to some folk. However, to my mind it is really just a type of score board – which smacks of a competitive environment. I don’t think that is what we are! I hated the alpha-male driven, egotistic & competitive nature of Med School, and this reminds me of those days…
    What I like most about the FOAM world is collaboration. I love the fact that I can get in touch with somebody who is an expert / leader in an area of knowledge and have them collaborate with me on projects.
    Collaboration is key – this index cannot measure it, and it just might be harmful to intra-community collaboration and cross-pollination of ideas?

    I would love to see an index / map of “collaboration” across the blogosphere – that would be awesome to see how the community works to bring ideas together across the globe / specialties / generations etc.


    • Brent Thoma

      Hey Casey,

      Thanks for the reply.

      You are certainly correct in that it is a type of a score board. While it might foster some competition, I think the benefits it offers outweigh the drawbacks of that. Some of the benefits:
      (1) easy way for new FOAMites to find quality content. It may not be overwhelming for us at this point, but it still is overwhelming for people looking for a place to start. The lists at LITFL are nice but they do not help you to find the top sites. This does.
      (2) helps the FOAMites who need to prove that what they are doing is valuable. By creating the ranks, contributors can show how their site is developing and where it fits amongst similar sites. At some point this might be important for people’s careers. Further, if we continue with no way to recognize those doing such great work, we may have more difficulty attracting people whose efforts would be better recognized elsewhere.
      (3) helps organize the sites for research purposes. It is hard to study FOAM blogs and podcasts without knowing who to ask. This list helps – stay tuned for more on how I am going to use it that way.

      I would definitely be interested in something to measure collaboration, but I am really at a loss to think of how that might look. Additionally, it would not replace this index for any of the three items I listed above.

      Those are my thoughts anyways. I hope they make some sense. We are going to do some statistical analysis on a more complete version of the index in the new year.

      • Any plans to update the SM-I? EM Cases is missing!

        • Brent Thoma

          Hey Anton!
          Absolutely. We actually have 4 data points from early this year and 1 from July with >150 websites. Unfortunately, we’re submitting it for publication and won’t be able to publish our new data (and formula) until after that happens.
          Fingers crossed!

  • Hi Brent. Great this is still moving forward. Is it possible to work out which blogs/podcasts/resources are mentioned by other blogs etc.? If so, should it make a difference if those in the top 5/10 mention you? Is an index metric that avoids confounding for this possible?

    Just speculating – not sure if it’s relevant or worthwhile!

    • Brent Thoma

      Hey Damian,

      It is still moving forward but the above list is super old. We’ll be publishing an updated version along with the article in WestJEM. We’ll likely need to modify the PageRank component as it seems to have been abandoned.

      Re which blogs/podcasts/resources are mentioned by other blogs – to a degree that is how PageRank and (I believe) Alexa work so that is why they are included. I don’t have a more robust way to do it than that and am not going to try to get into the broader game of web analytics. With Google and Alexa already in the game I can’t imagine that going anywhere.

      I’m not sure what you mean by confounding??

      Thanks for the thoughts!


      • Thanks (sorry, not the clearest post!). I should have realised there probably was an element of that already. The point I was making about confounding is the effect of the biggest sites dominating the market and affecting other rankings. Does a mention on LITFL falsely affect rankings regardless of quality? Relating back to Leon Gussow’s comment of a year ago: the FOAM community is inherently friendly. Does this promote elevation in ranking regardless of the actual content of the blog, or is it (more likely) that stuff that really isn’t good isn’t mentioned, and therefore this effect won’t affect the rankings?

        Ultimately I think this is a minor point and well done for moving forward on this. Looking forward to the article!

  • Ian Apollos Justl Ellis

    Could you consider a running 6 month or so average as well? This would decrease the amount of fluctuations and allow for a better sense of blogs with the most consistent impact. I, for one, really like the general idea of the rankings because it is really tough for a new learner to figure out which resources to choose from and this at least gives a starting point.

    • Brent Thoma

      Hey Ian,

      Thanks for the comment!

      A 6 month average is an interesting idea. There are a few reasons we haven’t tried something like that, the most obvious one being that we only run it every 3 months or so. However, having played with these numbers quite a bit, I can tell you that there aren’t a lot of dramatic fluctuations over time.

      To address that concern in this version I’ve included the rankings over the past 2 versions (rather than just the previous version) so people can compare the fluctuation.

      Hope that is helpful!


  • Check – 60+ hrs of FOAMed for medical students and residents.