Social Media Index (SMi) 2017-06-27T14:53:22+00:00

# Social Media Index (SMi)

Academics have created indices to compare both scholars (h-index) and journals (Impact Factor) within a given field. The websites that contribute to the creation of Free Open Access Medical Education (FOAM) do not have a comparable parameter. This makes assessing the impact of each website challenging for both content producers and their supervisors. Additionally, learners may find it difficult to distinguish between reputable and unproven websites.

The Social Media Index (SMi) was developed to address these problems. It is a comparative index derived from three easily obtainable indicators:

1. Alexa Rank (of the website)
2. Twitter Followers (of the most prominent editor)
3. Facebook Likes (of the website’s page)

For each website, these three indicators are normalized and summed to give a score ranging from 0 to 10. The details of the derivation and validation of the SMi are outlined in a freely available article in the Western Journal of Emergency Medicine.1

## Feedback?

As this is still a beta version, we would appreciate your assistance in correcting any errors (Did we miss an Alexa Rank, Twitter Account, or Facebook page? Is our data inaccurate?). Feedback on the SMi can also be provided by commenting on this page or tweeting @Brent_Thoma.

## Update: Removal of PageRank

Google deprecated the PageRank metric after our original publication in 2013. Because of this, PageRank will no longer be included in future versions of the SMi.

Fortunately, our data in the 2013 WestJEM paper suggest that the Alexa score correlates strongly with PageRank, so we have elected to double the weight of the Alexa score in the SMi calculation to replace PageRank’s contribution.

We have also modified the calculation of the Alexa component of the SMi. This change fixes the lowest-ranking website to a value of 0, increasing the range of values and making the score more consistent with the other components.

We will continue monitoring how these changes affect the SMi in the interest of making it helpful and easy to use.

Incorporating these changes, the formula used in this version of the SMi is:

SMi(site) = 5 × ( (1/log(Alexa_site) − 1/log(Alexa_max)) / (1/log(Alexa_min) − 1/log(Alexa_max)) )
+ 2.5 × ( log(Twitter_site + 1) / log(Twitter_max) )
+ 2.5 × ( log(Facebook_site + 1) / log(Facebook_max) )

Where max = maximum value, min = minimum value, site = value for a particular website.
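As a concrete sketch of this calculation (a minimal implementation assuming natural logarithms and illustrative input values; this is not the authors’ published code):

```python
import math


def smi(alexa, alexa_min, alexa_max,
        twitter, twitter_max,
        facebook, facebook_max):
    """Social Media Index on a 0-10 scale for a single website.

    alexa      -- Alexa Rank of the site (lower rank = more traffic)
    alexa_min  -- best (lowest) Alexa Rank among all scored sites
    alexa_max  -- worst (highest) Alexa Rank among all scored sites
    twitter    -- follower count of the site's most prominent editor
    facebook   -- Likes on the site's Facebook page
    *_max      -- largest Twitter/Facebook counts across all sites
    """
    def inv(rank):
        # Inverse-log transform: lower (better) ranks give larger values.
        return 1.0 / math.log(rank)

    # Alexa component (weight 5, doubled after PageRank was dropped):
    # rescaled so the worst-ranked site scores 0 and the best scores 5.
    alexa_part = 5.0 * (inv(alexa) - inv(alexa_max)) / (inv(alexa_min) - inv(alexa_max))

    # Twitter and Facebook components (weight 2.5 each): log-scaled
    # counts, with +1 so a site with zero followers is still defined.
    twitter_part = 2.5 * math.log(twitter + 1) / math.log(twitter_max)
    facebook_part = 2.5 * math.log(facebook + 1) / math.log(facebook_max)

    return alexa_part + twitter_part + facebook_part
```

A site that is worst on all three indicators scores 0; a site that leads on all three approaches the maximum of 10.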

1. Thoma B, Sanders J, Lin M, Paterson Q, Steeg J, Chan T. The social media index: measuring the impact of emergency medicine and critical care websites. West J Emerg Med. 2015;16(2):242-249. [PubMed]
• Carlo D’Apuzzo

Good idea, even though, in my opinion, blogs are not journals and their first aim is sharing ideas and experiences more than teaching. You consider only 60 of the over 200 emergency medicine blogs categorized into the FOAM family, all in the English language. It would be nice to extend this evaluation to all members of the FOAM family: http://lifeinthefastlane.com/emcc-blog-update-2013/. Regards

• Brent Thoma

Hey Carlo!

I think the aim of a blog is set by its writer. Certainly they can be used for sharing ideas and experiences, but in some cases (like ALiEM) their purpose is to provide resources for community learning. Personally, I have a meded blog (BoringEM) in addition to my personal blog (brentthoma.mededlife.org) so that I can meet both purposes.

I agree that the index is incomplete. I am very aware of the amazing work that LITFL has done putting that list together and I am currently making plans to utilize it. Expect the next update to either (1) include everyone or (2) list detailed exclusion criteria. While I’ve moved it out of the ‘pilot’ phase, it is still a work in progress!

Look forward to further discussion!

• Michelle Lin, MD

Thanks for sharing your excellent points. You beat us to the punch! Brent’s hard at work with LITFL to bring in all the cataloged blogs.

Personally, when you are talking about “peer review” value, journals and blogs should definitely be separate discussions. When talking about “impact” though, maybe journals and blogs are closer together than we think. Both are producing content (broadly defined) for the purpose of getting information to readers/learners. So perhaps a “digital impact factor” measure might be worth pursuing. Haven’t seen anyone else try to quantify this and I applaud Brent for pushing this idea ahead. There’s no perfect measure, but we, as FOAM content producers, at least should try before external entities try to devise measures for us…

• Carlo D’Apuzzo

Michelle, Brent thank you both for your prompt and kind replies.

I think there is still a strong debate about the role of blogs and websites in medical education. As a FOAM enthusiast I strongly support this spontaneous and direct way to share our everyday ED experience and debate new insights in emergency medicine all over the world. Each of us can make a contribution. Finding a way to measure what is worth following can help us do better for our patients, and I thank Brent for his attempt to make FOAMed more transparent.

• emcrit

Folks, can you discuss the decision to reduce the individual variables to ordinals rather than keeping the individual variables intact? Also, you may want to recheck the PageRank on NNT (not that I don’t love those guys).

• Graham Walker

I hereby stand by my website’s PageRank! 😉

• Brent Thoma

Hey,

Agree with Graham, the PageRank I get for TheNNT (and BestBETS) using my two standard methods (the Google PageRank Checker and a Widget) is 6. Are you finding something else?

Regarding the conversion of variables to ordinals – I would be super interested in hearing your thoughts on how we could do it better. The reason I went in that direction was that I needed a way to weight each of the variables more equally so they could be amalgamated into the SM-i.

However, your question got me thinking about better ways to do that, i.e., I could present the actual values in the chart above, then normalize each component on a scale of 0 to 200 based on the value of the top-ranked site. These normalized, continuous component scores could be added to calculate each SM-i. That would present the data more concretely, allow amalgamation into an SM-i, AND prevent the data loss that results from rank-listing, giving more accuracy. The max score would still be 1000, but only for a site that tops all 5 parameters.

I’ll play around with what that looks like later today. In the meantime I’d appreciate any thoughts you have on that proposal or something different.

Thanks for the feedback!
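For what it’s worth, the normalization proposed above could be sketched like this (hypothetical metric names and values; assuming simple linear scaling of each raw metric against the top-ranked site, not the published method):

```python
def normalize_components(sites):
    """Scale each raw metric to 0-200 against the top-ranked site,
    then sum the components into a site's SM-i (max 1000 for a site
    that leads on all five parameters).

    sites: dict mapping site name -> dict of raw metric values,
           e.g. {"SiteA": {"pagerank": 6, "twitter": 1000}}.
    """
    # Collect every metric that appears for any site.
    metrics = {m for vals in sites.values() for m in vals}
    # Top value per metric sets the 200-point ceiling.
    tops = {m: max(vals.get(m, 0) for vals in sites.values()) for m in metrics}
    return {
        name: sum(200.0 * vals.get(m, 0) / tops[m] for m in metrics if tops[m])
        for name, vals in sites.items()
    }
```

With this scheme a site keeps its continuous component values, so ties no longer collapse into shared ordinal ranks.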

• emcrit

I see what you did now: if there was a tie, i.e. NNT and BestBETs, you put them both as 1 and then eliminated 2. You may want to consider what that does when compared to the alternative. As to the rest, wait for the post.

• Brent Thoma

Exactly. PageRank is the only one where that happens, because with such a blunt indicator there are a lot of ties.

And I look forward to it – I think some method of comparative analysis is helpful for both producers and users of FOAM, but want to make it as good as possible.

• Michelle Lin, MD

Yup, I’m looking forward to Scott’s comments. Only through iterative revisions can we move towards a more accurate measure. I’m curious what Brent’s numbers would look like if you change the ordinal/rank-based approach to a more normalized approach. Great discussion.

• emcrit

I think this is a much better version of the SM-i. I would still like to be removed if possible.

Perhaps the maths are improved, but I still don’t think this serves the learner that profoundly.

It’s just a snapshot of current popularity – which, again, probably only has a loose association with quality. This doesn’t recognize good content until _after_ everyone else has recognized the good content and started linking/following it. Perhaps it’s reliable for a new user to discover some decent content … but it’s measuring a ghost of past momentum. If we start directing new trainees to this, it’ll persistently prevent good, new content from gaining a deserved foothold in the EM mindshare.

• Brent Thoma

Have you listened to the rationale in the related post that went up this morning?

I agree that it will be a lagging indicator; however, reputations aren’t made overnight. Harvard, NEJM and LITFL have been around longer than other institutions in their fields, and that gave them a head start, but the index does seem to respond to new entries. REBEL EM is only a couple of weeks old and it’s halfway up the index. If it had a PageRank (this takes more time than the other indicators, but not a large amount in the grand scheme of things) it would already be at #16. I would be concerned by a system that allows new pages to shoot up and down like crazy – consistency is part of quality.

I don’t think REBEL EM is a reasonable citation for “newness” – Sal’s been around for a year, been involved with the FOAM community pretty heavily, and was launched in a sense from ALiEM. I still say this is a delayed measure of content popularity.

• Michelle

Alternative?

Alternative? I don’t think there’s a human or automated method capable of devouring all currently published content and grading it without limitations.

I’m simply playing Lucifer’s Advocate and pointing out that this is an interesting tool for measuring current popularity, but I wouldn’t necessarily generalize it to more than an indirect proxy for quality.

• Brent Thoma

That’s true, but it is an example of a site moving up very quickly. I’d be interested in how fast you think it should be able to happen? There is going to be some lag with almost any indicator – these are probably faster than most. BoringEM isn’t even 1 year old – that’s still quite young in the scheme of things.

• Leon Gussow

Ryan:

I don’t agree. I think all the sites mentioned have been very active in recognizing good new blogs/podcasts and recommending them to their followers. Without that effort, and given the incredible increase recently in the number of new projects, it would be more difficult for those just starting to find valuable innovative sites.

• Chris Cresswell

Should we have a measure of podcast downloads? It looks like a big chunk of FOAM isn’t being counted. Are we overvaluing FB and G+ given that relatively few use these media? But great work so far! Thx

• Brent Thoma

Thanks for the comment!
That would be great, the only problem is that I don’t know an accessible way to get that information.
While podcasts are not explicitly tracked, they do seem to do quite well through followership on the other indicators – the top site and 5 of the top 10 are podcasts.
As this evolves we’ll be doing some factor analysis and looking more closely at the contributions of all of the metrics. Hope to see it get better 🙂

• Chris Cresswell

Understood. Gr8 work Brent. Thanks for the reply

• Casey Parker

Hi Brent
I can see why this might be of interest to some folk. However, to my mind it is really just a type of scoreboard – which smacks of a competitive environment. I don’t think that is what we are! I hated the alpha-male-driven, egotistic & competitive nature of Med School, and this reminds me of those days…
What I like most about the FOAM world is collaboration. I love the fact that I can get in touch with somebody who is an expert / leader in an area of knowledge and have them collaborate with me on projects.
Collaboration is key – this index cannot measure it, and it just might be harmful to intra-community collaboration and cross-pollination of ideas?

I would love to see an index / map of “collaboration” across the blogosphere – that would be awesome to see how the community works to bring ideas together across the globe / specialties / generations etc.

C

• Brent Thoma

Hey Casey,

You are certainly correct that it is a type of scoreboard. While it might foster some competition, I think the benefits it offers outweigh that drawback. Some of the benefits:
(1) It’s an easy way for new FOAMites to find quality content. The field may not be overwhelming for us at this point, but it is still overwhelming for people looking for a place to start. The lists at LITFL are nice, but they do not help you find the top sites. This does.
(2) It helps the FOAMites who need to prove that what they are doing is valuable. By creating the ranks, contributors can show how their site is developing and where it fits among similar sites. At some point this might be important for people’s careers. Further, if we continue with no way to recognize those doing such great work, we might have more difficulty attracting people, who will know that their efforts will be better recognized if they do something else.
(3) It helps organize the sites for research purposes. It is hard to study FOAM blogs and podcasts without knowing whom to ask. This list helps – stay tuned for more on how I am going to use it that way.

I would definitely be interested in something to measure collaboration, but I am really at a loss to think of how that might look. Additionally, it would not replace this index for any of the three items I listed above.

Those are my thoughts anyways. I hope they make some sense. We are going to do some statistical analysis on a more complete version of the index in the new year.

• Any plans to update the SM-I? EM Cases is missing!

• Brent Thoma

Hey Anton!
Absolutely. We actually have 4 data points from early this year and 1 from July with >150 websites. Unfortunately, we’re submitting it for publication and won’t be able to publish our new data (and formula) until after that happens.
Fingers crossed!
Brent

• Hi Brent. Great that this is still moving forward. Is it possible to work out which blogs/podcasts/resources are mentioned by other blogs etc.? If so, should it make a difference if those in the top 5/10 mention you? Is an index metric that avoids confounding for this possible…?

Just speculating – not sure if it’s relevant or worthwhile!

• Brent Thoma

Hey Damian,

It is still moving forward but the above list is super old. We’ll be publishing an updated version along with the article in WestJEM. We’ll likely need to modify the PageRank component as it seems to have been abandoned.

Re which blogs/podcasts/resources are mentioned by other blogs – to a degree that is how PageRank and (I believe) Alexa work so that is why they are included. I don’t have a more robust way to do it than that and am not going to try to get into the broader game of web analytics. With Google and Alexa already in the game I can’t imagine that going anywhere.

I’m not sure what you mean by confounding??

Thanks for the thoughts!

-Brent

• Thanks (sorry, not the clearest post!). Should have realised there probably was an element of that already. The point I was making about confounding is the effect of the biggest sites dominating the market and affecting other rankings. Does a mention on LITFL falsely affect rankings regardless of quality? Relating back to Leon Gussow’s comment of a year ago – the FOAM community is inherently friendly; does this promote elevation in ranking regardless of the actual content of the blog, or is it (more likely) that stuff that really isn’t good isn’t mentioned, and therefore this effect won’t affect the rankings?

Ultimately I think this is a minor point and well done for moving forward on this. Looking forward to the article!

• Ian Apollos Justl Ellis

Could you consider a running 6 month or so average as well? This would decrease the amount of fluctuations and allow for a better sense of blogs with the most consistent impact. I, for one, really like the general idea of the rankings because it is really tough for a new learner to figure out which resources to choose from and this at least gives a starting point.

• Brent Thoma

Hey Ian,

Thanks for the comment!

A 6 month average is an interesting idea. There are a few reasons we haven’t tried something like that, the most obvious one being that we only run it every 3 months or so. However, having played with these numbers quite a bit, I can tell you that there aren’t a lot of dramatic fluctuations over time.

To address that concern in this version I’ve included the rankings over the past 2 versions (rather than just the previous version) so people can compare the fluctuation.