Has the tide turned? A new culture for Responsible Metrics is on the horizon

Katrine Sundsbø reflects on the UK Forum for Responsible Metrics event, held on 8th February 2018.

The topic of ‘responsible metrics’ went from hot to boiling when RCUK signed DORA (the San Francisco Declaration on Research Assessment) on Wednesday 7th February. This means that, as a funding organisation, it is committing to good practice in the use of metrics in research assessment. The timing of the UK Forum for Responsible Metrics event on Thursday 8th February could therefore not have been better.

Meeting report: summary of day 1 of the 2018 European ISMPP Meeting

The 2018 European Meeting of the International Society for Medical Publication Professionals (ISMPP) was held in London on 23–24 January and attracted nearly 300 delegates, the highest number of attendees to date. The meeting’s theme was ‘Advancing Medical Publications in a Complex Evidence Ecosystem’ and the agenda centred on data transparency, patient centricity and the future of medical publishing. Delegates were treated to two keynote addresses, lively panel discussions, interactive roundtables and parallel sessions, and also had the chance to present their own research in a poster session.

Otherwise Engaged: Social Media from Vanity Metrics to Critical Analytics

Rogers, R. A. (2017). Otherwise Engaged: Social Media from Vanity Metrics to Critical Analytics. International Journal of Communication, 11.

Vanity metrics is a term that captures the measurement and display of how well one is doing in the “success theater” of social media. The notion of vanity metrics implies a critique of metrics concerning both the object of measurement as well as their capacity to measure unobtrusively or only to encourage performance. While discussing that critique, this article focuses on how one may consider reworking the metrics. In a research project I call critical analytics, the proposal is to repurpose alt metrics scores and other engagement measures for social research and measure the “otherwise engaged” or other modes of engagement (than vanity) in social media, such as dominant voice, concern, commitment, positioning, and alignment. It thereby furnishes digital methods—or the repurposing of platform data and methods for social research—with a conceptual and applied research agenda concerning social media metrics.

When evaluating research, different metrics tell us different things

Science has long been accepted by policy makers as valuable; recently, however, scientists and research institutions have increasingly been asked to provide evidence that justifies their research. How this evidence is provided is grounds for lively debate.

Scientific peer review based on human judgement is time consuming and complex. As a result, it has become commonplace to make assumptions about the quality of research based on indicators of reuse by other academics – the number of citations the corresponding articles receive. Citation impact is used as a proxy for quality in this way, though there are manifold issues with this proxy. Should negative citations count? Are all citations of equal merit? At scale, there is likely to be a great deal of noise.

Altmetric-supported research: 2017 in review

2017 saw an explosive growth in the number of researchers investigating Altmetric’s data, with some pretty cool results! Thanks to the 30+ publications, presentations, and theses/dissertations that researchers have released, we’ve learned (among many other things) that:

  • The percentage of research discussed online doubled between 2011 and 2015;
  • For ornithology research, higher Altmetric Attention Scores are correlated with a 112% increase in citation rate; and
  • In chemistry, publishing Open Access leads to more online attention for female authors.

Here are some of the many Open Access* research outputs resulting from studies of our data this year.

Where are the rising stars of research working? Towards a momentum-based look at research excellence

Traditional university rankings and leaderboards are largely an indicator of past performance of academic staff, some of whom conducted the research for which they are most famous elsewhere. Paul X. McCarthy has analysed bibliometric data to see which research institutions are accelerating fastest in terms of output and impact. The same data also offers a glimpse into the future, helping […]

Tracking new attention to older publications

While altmetrics are often praised for their ability to show attention in “real time”, to complement traditional citations that tend to take a few years to accrue, they also have the ability to surface attention to older publications. For example, the frighteningly titled “Occurrence of virulent anthrax bacilli in cheap shaving brushes” published in the Journal of the American Medical Association in 1921 received news attention in 2017.

New white paper asks, “How can we reform promotion and tenure practices to promote open access?”

George Mason University Press and the Open Scholarship Initiative (OSI) have just published the OSI 2017 Promotion & Tenure Reform Workgroup Report, which explores how professional advancement scenarios (promotion and tenure, grant applications, and so on) might be reimagined to better incentivize open access, open data, and other “open” scholarly practices.

“The feels” – Sentiment analysis for altmetrics

One of the central aspects of what we do at Altmetric is processing and subsequently storing large quantities of data, whether that is publication metadata or online attention in its various formats (news stories, Facebook or Twitter posts, etc.). This allows us to occasionally have a bit of fun doing our own research to test assumptions and hypotheses that we or others may hold.

4:AM Hack Day retrospective

This year’s hack day was cast as a do-a-thon. Although the terms ‘hack day’ and ‘hackathon’ borrow the word ‘hack’ from computer programming (in the “messy prototype” sense), the value in a hack day doesn’t lie in the code that lies strewn around at the end of the day, but in the ideas that were explored. Stacy and I hosted this year’s event, and from the start we decided to broaden its appeal, encouraging anyone with an idea, or a desire to explore one, to participate.

Over the course of the Altmetrics17 Workshop and 4:AM Conference, it became clear that many of the ideas flying around concerned societal hurdles as much as technical ones. Two questions came through particularly strongly: “Can we fairly make assessments based on social media interaction when various social groups, in particular women, are subject to mistreatment online?” and “How do we track the outputs of people who don’t participate in ‘mainstream’ scholarly publishing norms?”

We started the day with a roster of ideas submitted by conference attendees, remote attendees, and hack day participants. After several rounds of voting and discussion, we whittled the submissions down to four topics:

1: Using “motionchart” to plot the Reddit data from Crossref Event Data.

2: How do you capture metrics on research outputs that don’t have a DOI?

3: What would feminist approaches to altmetrics look like?

4: What proportion of scholarly links shared on Reddit are, or have, open access versions, according to the oaDOI API?

Over the course of the day we talked, hacked, wrote and ate. Pleasingly, we also had a remote participant who did some interesting work on the Reddit data.

Given the sheer number of ideas generated and refinements made, we could never have hoped to cover all of the topics suggested at the start of the do-a-thon. The do-a-thon working document shows all the ground we covered exploring just the four above!

Toward the end of the day, each group presented its findings. Some of these projects will grow into longer pieces of research, and some participants are already talking about next steps. I have summarised each project below from the notes and contributions, and hope that we’ll see longer write-ups and research projects that had their roots at 4:AM.

Using “motionchart” to plot the Reddit data from Crossref Event Data

From Ola Andersson: “This is the visualization implementation of Reddit event data that Hans Zijlstra and I developed. It displays Reddit data taken from Crossref Event Data. The visualization tool, motionchart, is developed by Google. Take a moment and play around with the tool components and see different ways of visualizing the event data. By looking at the source HTML code in the web browser, it is possible to see how the tool can be used with a combination of JavaScript and HTML.”
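
For anyone wanting to recreate the data-gathering step, here is a minimal sketch (not Ola and Hans’s actual code) that pulls Reddit events from the Crossref Event Data query API and flattens them into a CSV that a motion chart, or any other charting tool, could plot. The endpoint and field names follow the public Event Data documentation; the email address and output filename are placeholders.

```python
# Sketch: fetch Reddit-sourced events from Crossref Event Data and write them to a CSV.
# Endpoint and field names follow the public Event Data query API docs; verify against
# the current documentation before relying on this.
import csv
import requests

API = "https://api.eventdata.crossref.org/v1/events"

def reddit_events(email, from_date="2017-09-01", rows=1000):
    """Page through all Reddit-sourced events collected since from_date."""
    cursor = None
    while True:
        params = {"mailto": email, "source": "reddit",
                  "from-collected-date": from_date, "rows": rows}
        if cursor:
            params["cursor"] = cursor
        message = requests.get(API, params=params, timeout=30).json()["message"]
        yield from message["events"]
        cursor = message.get("next-cursor")
        if not cursor or not message["events"]:
            break

with open("reddit_events.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["occurred_at", "doi_url", "reddit_post"])
    for event in reddit_events("you@example.org"):
        # obj_id is the DOI (as a URL) of the scholarly work; subj_id is the Reddit item.
        writer.writerow([event["occurred_at"], event["obj_id"], event["subj_id"]])
```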

How do you capture altmetrics for peer reviewed content that isn’t assigned a DOI or other identifier? How can we improve coverage as a community?

Many publications don’t have a DOI or similar identifier, and most altmetrics aggregators need some kind of identifier to accurately track when content is mentioned online. The lack of DOIs could be due to a lack of funding (in some countries, funds are needed for more basic priorities), technical skills, interest, awareness or understanding.

A government- or institution-led policy would help communicate the importance of, and possibly mandate the use of, DOIs. There is a role for DOI Registration Agencies, such as Crossref and DataCite. Crossref in particular has an active and growing community outreach effort and is working with an increasing number of parties to assign identifiers and take on the responsibilities necessary to make them work.

There are a number of outstanding questions. Why do publishers who are aware of Crossref not assign DOIs for publications like Masters and PhD theses? Can we identify where these researchers are publishing so we can bring them on board? How feasible is it to assign DOIs to different types of output, including presentations, proceedings, and preprints, with DataCite and Crossref? The infrastructure is already in place. Could it be used more widely, and if so, why isn’t it already?

Even if general best practice were applied across the board, there are also other potentially confounding factors that should be taken into account. We need some kind of analysis of disciplines so we can try to quantify subject-specific skew. We need to talk to professional societies, agencies and researchers to get their insight about the lack of uptake. They in turn might be able to identify significant publications which may form a representative corpus of currently unregistered works.

DOIs aren’t the be-all and end-all. If there is a better, easier way to track content, for example some kind of stable URL, would that be a reasonable replacement for a DOI for this kind of tracking?

Some tentative solutions were suggested: mediation through local country agencies to bring the message to places Crossref doesn’t currently reach; allowing organisations to pool resources, bringing the ability to assign DOIs to new places; and better co-ordination between the different publications or departments of a given institution, to improve consistency of approach.

This is a sizeable question, and the notes, which you can find in the working document, go into more detail.

What would feminist approaches to altmetrics look like?

Much of this group’s time was spent rereading canonical feminist theory (cf. Crenshaw’s “intersectionality” corpus) and discussing how it could apply to understanding and providing altmetrics, and, relatedly, how female academics participate in online engagement given the challenges of erasure and harassment endemic to online life.

We discussed some purely technical means of addressing various challenges for altmetrics: automated detection and flagging or suppression of altmetrics data that contains harassing mentions; identifying and reporting on gender parity in departments’ and organizations’ research, vis-à-vis altmetrics and the support given for online engagement training; and crowdsourcing the identification of useful and non-useful (i.e. harassing) mentions in altmetrics data.

We also took an hour to investigate whether gender studies research was disproportionately the target of high-profile, trollish Twitter users (whom we won’t name here). We did this by exporting from Altmetric Explorer all papers mentioned by one such user in particular, and then using a quick-and-dirty script to query the Altmetric API for the publisher-assigned subjects of the journals that published those papers.
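
The script itself isn’t reproduced in these notes, but the idea can be sketched roughly as follows, assuming the Explorer export is saved as a CSV with a DOI column (a hypothetical filename and column name). The public Altmetric details API is queried per DOI and the journal subjects it reports are tallied; the “subjects” field name reflects the v1 API responses at the time, so check it against the current API documentation.

```python
# Sketch (not the group's actual script): tally publisher-assigned journal subjects
# for a set of DOIs exported from Altmetric Explorer, using the public details API.
# The "subjects" field name and the "DOI" CSV column are assumptions to verify.
import csv
from collections import Counter

import requests

def subjects_for_doi(doi):
    """Return the journal subject list Altmetric holds for a DOI (empty if none)."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code != 200:   # a 404 means Altmetric has no record for this DOI
        return []
    return resp.json().get("subjects", [])

subject_counts = Counter()
with open("explorer_export.csv") as fh:   # hypothetical Explorer export
    for row in csv.DictReader(fh):
        subject_counts.update(subjects_for_doi(row["DOI"]))

for subject, count in subject_counts.most_common(20):
    print(f"{subject}: {count}")
```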

After plotting the related subjects (seen above), we realized that gender studies research – being interdisciplinary – often is not categorized under the “gender studies” subject area by publishers, or is published in other disciplinary journals. So, while our initial approach did not confirm our hypothesis, we’re going to continue tinkering to identify the subjects most likely to be targeted by trolls who share research online.

What proportion of scholarly links shared on Reddit are, or have, open access versions, according to the oaDOI API?

Reddit was mentioned a few times during the conference and, although it’s not a novel source, it piqued some people’s interest. Both Altmetric.com’s Explorer and Crossref Event Data capture Reddit links to articles, so we collected data from both sources.

All Reddit links from September that could be resolved to a DOI were collected from each source and submitted to the oaDOI service API. Note that this isn’t an official canonical source, and it somewhat conflates different versions of the same article. But this prototype was an interesting experiment, and there was value both in getting an approximate headline number and in connecting scholarly APIs.

The oaDOI API gave a value of “true” (when the content at the DOI was recognised as Open Access, or there was an alternative free version available), “false”, or “unknown” when the DOI wasn’t recognised.
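
As a rough illustration of that lookup step (not the code written on the day), the sketch below classifies each DOI into those three buckets via the oaDOI v2 endpoint. The endpoint URL and the is_oa field follow the oaDOI/Unpaywall documentation; the email address and the example DOI list are placeholders.

```python
# Sketch of the OA lookup described above: classify each DOI found in the Reddit
# links as "true", "false", or "unknown" via the oaDOI v2 API, then print the tallies.
from collections import Counter

import requests

OADOI = "https://api.oadoi.org/v2/{doi}"

def oa_status(doi, email="you@example.org"):
    try:
        resp = requests.get(OADOI.format(doi=doi), params={"email": email}, timeout=10)
    except requests.RequestException:
        return "unknown"
    if resp.status_code != 200:       # DOI not recognised by the service
        return "unknown"
    # is_oa is true when the DOI is Open Access or a free version is available.
    return "true" if resp.json().get("is_oa") else "false"

dois = ["10.1000/example.doi"]        # in practice: every DOI extracted from the Reddit links
counts = Counter(oa_status(doi) for doi in dois)
total = sum(counts.values())
for status, n in sorted(counts.items()):
    print(f"{status}: {n} ({100 * n / total:.0f}%)")
```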

The headline figures: using the Reddit links from Crossref Event Data, 23% of links were Open Access and 76% were not; using the links from Altmetric.com, 21% were Open Access and 58% were not, with 20% unknown. Although we didn’t have much time to look into the detail, it was interesting to note that data derived from the two sources generally agreed, and that most links shared on Reddit were not Open Access by any available measure.

This gave rise to some further conversations about possible hypotheses.

One suggestion was that the people sharing the links in question were members of institutions with access. If this were true, it would mean that Reddit, a very large site with many members and specialised communities (“subreddits”), had cliques of people discussing articles that only they could access and the broader public could not. This raised questions about the behaviour of people using a public medium to discuss “private” content, challenging the idea of “citizen scientists” and suggesting instead closed communities.

Another suggestion was that much of the posting might be attributable to promotional link spam from publishers, for whom a link on Reddit is valuable and carries no opportunity cost.

Both hypotheses could be tested by looking at the pool of authors who post links, the degree of discussion attached to each, and the cliquiness of the network of people who participate in these discussions. These could be measured against the license of the content being discussed.

Also notable was Bianca Kramer’s remote participation in our group’s topic, which she documented at https://github.com/bmkramer/4amhack_reddit_OADOI

The discussion document is available here, and is both more detailed and more fragmented!

See you next year!

Overall, we had a lot of fun (and based on the feedback we’ve received from do-a-thon participants, it seems they did, too!). We look forward to continuing our exploration at the 5:AM do-a-thon!

We’d like to thank the conference organisers and the Social Media Lab at the Ted Rogers School of Management at Ryerson University for hosting us and giving us a great opportunity to dig into these ideas.

Joe Wass & Stacy Konkiel

Tracking Real World Changes Inspired by Story

Measuring the audience response to a journalistic story usually means counting page views and unique visitors, yet that assessment falls short. At my Gannett newspaper in Westchester County, we wanted to know more: What happened when we started asking questions about a story? After we published a story? When a key decision-maker or a crowd of people saw that story?

http://www.mediaimpactproject.org/blog/tracking-real-world-changes-inspired-by-story
