PLOS Collaborates on Recommendations to Improve Transparency for Author Contributions

In a new report, a group convened by the US National Academy of Sciences and including a dozen journal editors reflects on authorship guidelines and recommends new ways to make author contributions more transparent.

What does it mean to be author number seven on a twenty-five–author article?

Establishing transparency for each author’s role in a research study is one of the recommendations in a report published today in the Proceedings of the National Academy of Sciences (PNAS) by a group led by Marcia McNutt, President of the National Academy of Sciences. The recommendations issued by this group, which included one of us, were adapted, based on community feedback and peer review, from an original draft presented as a preprint. PLOS supports the recommendations for increased transparency and has already put some of them into practice.

Meeting report: summary of day 1 of the 2018 European ISMPP Meeting

The 2018 European Meeting of the International Society for Medical Publication Professionals (ISMPP) was held in London on 23–24 January and attracted nearly 300 delegates, the highest number of attendees to date. The meeting’s theme was ‘Advancing Medical Publications in a Complex Evidence Ecosystem’ and the agenda centred on data transparency, patient centricity and the future of medical publishing. Delegates were treated to two keynote addresses, lively panel discussions, interactive roundtables and parallel sessions, and also had the chance to present their own research in a poster session.

Scholarly maps, recommenders & reference managers at Crossref Live 17

I recently attended the Crossref Live 17 event in Singapore. I discovered that these events tend to have a heavy publisher presence, since publishers make up most of Crossref’s membership.

Still, I am a bit of a DOI nerd, and I have long enjoyed watching Crossref webinars to understand what goes on in the background to make DOIs work (hint: it helps a lot when troubleshooting broken links in our discovery services). I had also recently started playing with the Crossref Event Data API, so this was a good opportunity to attend a non-librarian conference. It helped that the event was held just a stone’s throw from where I work and charged no registration fee. I really enjoyed it, and I am still thinking about what was presented days after the event, particularly the discovery implications.
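For fellow DOI nerds, the Event Data API is easy to poke at from a script. Here is a minimal sketch, assuming the public /v1/events endpoint and its obj-id filter work as documented; the DOI and contact address below are placeholders:

```python
import requests

# List recent Events recorded for one DOI via the Crossref Event Data API.
API = "https://api.eventdata.crossref.org/v1/events"
params = {
    "mailto": "you@example.org",   # identifies you to the API (polite use)
    "obj-id": "10.5555/12345678",  # placeholder DOI
    "rows": 20,
}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
for event in resp.json()["message"]["events"]:
    # Each event records which source did what to the DOI, and when.
    print(event["source_id"], event["relation_type_id"], event["occurred_at"])
```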

Infographic: What Is ORCID?

Helen Eassom, Author Marketing, Wiley. In November 2016, Wiley signed ORCID’s Open Letter and became the first major publisher to require ORCID iDs for submitting authors. Since then, more than 830 journals have adopted…



Knowledge Networking in Scientific Practice

Technology is being incorporated more and more into our daily lives. Social media platforms allow researchers to connect with one another and to find citations or resources with ease. Artificial intelligence and big data make it relatively easy to obtain the information scientists need to move their projects forward. With the extended push to publish data, large amounts of data can be mined, allowing disparate studies to be combined, bigger patterns to be identified, and potentially further-reaching conclusions to be drawn. With this comes the demand for researchers not only to publish their own articles but also to stay knowledgeable and on top of current research. Knowledge networking, a way of compiling and sharing information, can help researchers find their way through the mounds of data and resources so that these conclusions can be made.

Finding information, be it a particular fact or a specific citation, is usually associated with finding the right publication – the book or journal to reference – containing the needed information. With internet-based search, research has become less about discovering information and more about constructing search queries and filtering the results into something useful. Open access makes it easier to obtain and share knowledge, intellectual resources, and data, but being able to parse through it all and distinguish between relevant and irrelevant information is crucial.


Knowledge networking is a dynamic process in which knowledge is distributed and developed through increasing access to information, augmented by a community. A few different types of communities enable the spread of knowledge, each of which is expanded on below: publishing and networking, academic community platforms, and specialist communities.

Publishing and networking

Journal publications are still the standard format for scientific dissemination and discourse. The way content is published has drastically changed over the past decade, however, with the sharing of raw data, presentations, and preprints becoming more common practice.

Many online publication databases, like PubMed and Web of Science, though not necessarily exclusively open access, have search functions that allow users to track citations, parse metadata, and filter by authorship. Some also allow users to follow a certain topic and receive email notifications of new publications. Branching out of the model of user-driven searches and taking a more metadata-driven approach, Semantic Scholar, an academic search engine, utilizes artificial intelligence with the goal to “connect the dots between disparate studies to identify novel hypotheses and to suggest experiments which would otherwise be missed.” Similarly, F1000Workspace uses algorithms to tailor searches and identify important papers within the field, as well as offering ways to organize references and share documents with other researchers.
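As a small illustration of this metadata-driven approach, the sketch below runs a keyword search against Semantic Scholar’s public API. The endpoint and field names follow its published documentation at the time of writing; the query string is just an example:

```python
import requests

# Keyword search against Semantic Scholar's public Graph API.
API = "https://api.semanticscholar.org/graph/v1/paper/search"
params = {
    "query": "malaria vector control",      # example query
    "fields": "title,year,citationCount",   # metadata fields to return
    "limit": 5,
}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
for paper in resp.json().get("data", []):
    print(paper.get("year"), paper.get("citationCount"), paper["title"])
```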

It is important for scientists to track and communicate with each other, especially when trying to establish a collaboration with another research group, as they need to connect personally as well as intellectually. Author IDs, like those from ORCID, help tie the work in a publication to a subject specialist and can be valuable in linking projects to people. In addition to ORCID, social media sites like LinkedIn and even Twitter connect researchers. Even though these platforms are geared more toward job searching and visibility, respectively, they can be valuable for connecting people easily.
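To see how an iD ties works to a person in practice, here is a minimal sketch against ORCID’s public API. It assumes the v3.0 works endpoint and uses the well-known example iD from ORCID’s own documentation:

```python
import requests

# Fetch the public list of works attached to one ORCID iD (v3.0 public API).
orcid_id = "0000-0002-1825-0097"  # ORCID's documentation example iD
url = f"https://pub.orcid.org/v3.0/{orcid_id}/works"

resp = requests.get(url, headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()
for group in resp.json().get("group", []):
    # Each group bundles records of the same work from different sources.
    summary = group["work-summary"][0]
    print(summary["title"]["title"]["value"])
```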


Academic community platforms

Within academic spheres, a myriad of software tools are used to connect researchers and to aid in data hosting and paper writing. Universities frequently rely on private, authorization-controlled services, like Dropbox, because of their security. Platforms like Figshare and many other repositories host data and large databanks for any discipline. Many open access data banks, like the Protein Data Bank (PDB), which holds structural information on proteins, are required to be used before a paper is published, ensuring that the data remains available for future use.

There are also community-based platforms like ResearchGate, with forum-like spaces for asking research-related questions. There, individuals can be linked together on projects, publications can be connected, and interesting papers hidden behind paywalls can be requested directly from the author. Site members can follow a research interest in addition to following individual members. ResearchGate indexes the self-published information on user profiles to suggest connections between members with similar interests.

Specialist communities

There are highly specific communities and academic platforms that cater to specialist interests, such as MalariaWorld. These platforms allow all attention to be focused on solving a very specific problem. Moreover, with the drive towards collaboration, identifying experts within a given field is helpful. Such specialist communities allow individuals with a particular skill set to be identified, which in turn helps with building networks.

Connecting the networks

Creating a sufficient knowledge network is a significant undertaking. However, an organization creating a platform does not necessarily have to reinvent the wheel. Instead of each group defining its own metadata algorithms, running its own social media, and inventing new methods of commenting or Q&A, perhaps what is needed is a combination of these (micro-)services that incorporates the best of what already exists. The most significant resource for knowledge networking would then be not a single organization or piece of software able to do it all, but one that links together the best of each service to get experts disseminating information.


Opening up the black box of peer review

I recently participated in a workshop hosted by the University of Kent Business School – the subject was whether metrics or peer review is the better tool to support research assessment. Thankfully, we didn’t get embroiled in the sport of ‘metric bashing’, but instead agreed that one size does not fit all and that whatever research assessment we do needs to take account of context and be proportionate.

There are many reasons why we want to assess research – to identify success in relation to goals, to allocate finite resources, to build capacity, to reward and incentivise researchers, as a starting point for further research – but these are all different questions, and the information you need to answer them is not always going to be the same.


What do we know about peer review?

In recent years, while researchers and evaluators have started to swim with the metric tide and explore how new metrics have value in different contexts, ‘peer review’, i.e. the qualitative way that research and researchers are assessed, (a) is still described as if it were one thing, and (b) remains a largely unknown ‘quantity’. I am not sure whether this is ironic (or even intentional), but there remains a dearth of information on how peer review works (or doesn’t).

Essentially, getting an expert’s view on a piece of research – be that a grant application, a piece submitted for publication to a journal, or work already published – can be helpful to science. However, there is now a significant body of evidence suggesting that how the scientific community organises, requests and manages its expert input may not be as optimal as many consumers of its output assume. A 2011 UK House of Commons report on the state of peer review concluded that while peer review “is crucial to the reputation and reliability of scientific research”, many scientists believe the system stifles innovation and “there is little solid evidence on its efficacy”. Indeed, during the production of the HEFCE-commissioned 2015 Metric Tide report, we found ourselves judging the value of quantitative metrics by the extent to which they replicated the patterns of choices made by ‘peers’. This was done without any solid evidence for the veracity and accuracy of the peer review decisions themselves – following a long-established tradition in reviews of the mechanics of peer review: cite reservations about the process, then conclude that ‘it’ remains the gold standard. As one speaker at the University of Kent workshop surmised, “people talking about the gold standard [of peer review] maybe don’t want to open up their black boxes.” However, things might be changing.


Bringing in the experts at right time

In grant assessment, there is increasing evidence that how and when we use experts in the grant selection and funding process may be inefficient and lack precision; see, for example, Nature, NIH, Science and RAND. Several funding agencies are now experimenting with approaches that use expert input at different stages of the grant funding cycle and to different degrees. The aim is to encourage innovation while bringing efficiencies to the process, including by reducing the opportunity for bias and, more practically, reducing the burden on peers. Examples include Wellcome Trust Investigator Awards, HRC Explorer grants, Volkswagenstiftung Experiment grants, and the Velux Foundation Villum Experiment.


Opening peer review in publishing

In the publishing world, there is considerable momentum towards the adoption of models in which research is shared much earlier and more openly. Preprint repositories such as bioRxiv and post-publication peer review platforms such as F1000Research, Wellcome Open Research, and the soon-to-be-launched Gates Open Research and UCL Child Health Open Research enable open commenting and open peer review, respectively, as the default. Such models not only provide transparency and accelerate access to research findings and data for all users, but they fundamentally change the role of experts – to one focused on providing constructive feedback and helping research to advance – even if they don’t like or agree with what they see! Furthermore, opening up access to what experts have said about others’ work is an important step towards reducing selection bias in what is published and allowing readers more autonomy to reach their own conclusions about what they see.


Creating a picture of the workload

Perhaps the most obvious way in which ‘peer review’ is currently broken is under the sheer weight of what publishers, funding agencies and institutions are asking experts to do. Visibility of contributions gives experts the opportunity to receive recognition for the effort they have put into the research enterprise in its broadest sense – as is already underway with ORCID – thus providing an incentive to get involved. And for funding agencies, publishers and institutions, more information about who is providing the expert input, and therefore where the burden lies, can help them consider who, when and how they approach experts, maximising the chance of a useful response and bringing efficiency and effectiveness to the process.

The recent acquisition of Publons by Clarivate is a clear indication of the current demand for, and likely potential of, more information about expert input to research – and it should go some way towards addressing the dearth of intelligence on how ‘peer review’ is working, and how it actually works.

ScienceOpen launches MyScienceOpen

The research network ScienceOpen has announced the launch of MyScienceOpen, a professional networking platform designed for the modern research environment. By leveraging the power of ORCID, MyScienceOpen offers an integrated profile where academics can visualise their research impact through enhanced author-level metrics.

ORCID iDs @ Temple


Last year on the blog, we introduced ORCID, a non-profit organization that provides persistent, unique identifiers to researchers across the globe. ORCID iDs help ensure that researchers get credit for all their scholarly contributions.

While there are a number of different researcher identifiers out there (including ResearcherID and Scopus Author ID), we recommend that all Temple researchers register for an ORCID iD. It’s free and it takes less than a minute to sign up.
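Incidentally, an ORCID iD is more than a random string: its final character is a check digit computed with the ISO 7064 MOD 11-2 algorithm, so a mistyped iD can usually be caught programmatically. A minimal sketch (the helper names are ours):

```python
def orcid_check_digit(base15: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check character for the first
    15 digits of an ORCID iD (hyphens removed)."""
    total = 0
    for ch in base15:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    compact = orcid.replace("-", "")
    if len(compact) != 16 or not compact[:15].isdigit():
        return False
    return orcid_check_digit(compact[:15]) == compact[15]

print(is_valid_orcid("0000-0002-1825-0097"))  # True (ORCID's example iD)
```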

There are currently 3,364,764 live ORCID iDs. Sixteen publishers, including the American Chemical Society, PLOS, and Wiley, now require that authors submit ORCID iDs at some point in the publication process. And if you think ORCID is just for scientists, you’re wrong. Cambridge University Press has begun integrating ORCID iDs into their book publishing workflows, and Taylor & Francis is currently undertaking a pilot project to integrate ORCID iDs into their humanities journals.

Researchers can use their ORCID iD profile to highlight their education, employment, publications, and grants. They can even add peer review activities. The American Geophysical Union, F1000, and IEEE are just three of the organizations that currently connect with ORCID to recognize the work of their peer reviewers.

In order to get a better sense of who is using ORCID at Temple, we looked for researchers with publicly available ORCID profiles who note “Temple University” as their current place of employment. We found 205 ORCID iDs that matched these criteria. Of those, the Lewis Katz School of Medicine has the highest number of researchers with ORCID iDs at Temple. The College of Science and Technology has the second-highest number, with faculty from Physics, Chemistry, and Biology being particularly well represented. The College of Liberal Arts has the third-highest number of ORCID iDs, thanks in large part to the Psychology department. A handful of researchers in the Fox School of Business, the College of Engineering, and the College of Education have also signed up for ORCID iDs. The overwhelming majority of researchers with ORCID iDs at Temple are faculty members. Some postdoctoral fellows have ORCID iDs, but very few graduate students do.
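For the curious, the sketch below shows roughly the kind of lookup behind these numbers. It assumes ORCID’s public search endpoint and its Solr-style affiliation-org-name field, and it returns matching iDs rather than our curated list:

```python
import requests

# Search public ORCID records whose affiliations mention Temple University.
url = "https://pub.orcid.org/v3.0/search/"
params = {"q": 'affiliation-org-name:"Temple University"', "rows": 10}

resp = requests.get(url, params=params,
                    headers={"Accept": "application/json"}, timeout=30)
resp.raise_for_status()
data = resp.json()
print("Total matches:", data.get("num-found"))
for hit in data.get("result") or []:
    print(hit["orcid-identifier"]["path"])  # the bare 16-character iD
```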

Because filling out one’s ORCID iD profile is optional, and profiles can also be set to private, our data is incomplete, and probably underestimates the true number of individuals at Temple with ORCID iDs. Nonetheless, it is exciting to see that researchers in almost all of Temple’s schools and colleges have signed up for ORCID iDs. We’re confident that this number will continue to grow in the future.

Temple Libraries is proud to be an institutional member of ORCID.

Connecting the Loop Between iDs and DOIs – Figshare in New Partnership with ORCID

Today, our portfolio company Figshare announced that ORCID is now using Figshare to showcase its resources, documentation, staff contributions, and other publicly available materials.

ORCID’s vision is a world where all who participate in research and innovation, from imagining to building and managing, are uniquely identified and connected to their contributions across disciplines, borders, and time. ORCID provides an identifier for individuals to use with their name as they engage in research and innovation activities. Since launching in 2012, ORCID has become the de facto standard for researcher identifiers. Over three million researchers globally have registered for an ORCID iD, which they can use to uniquely identify themselves and ensure reliable connections with their affiliations and contributions.

From today, all of ORCID’s publicly available materials will be accessible in a customised Figshare portal, and every uploaded file will be issued a DOI – a persistent identifier that enables access, reuse, and citation. The portal will make use of Figshare’s viewer technology, which supports over 650 different file types, to display presentations, videos and documents in the browser, and it will make the organisation’s resources and documents more discoverable to the wider research community, with rich metadata, while ensuring they are digitally preserved.
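One practical payoff of minting DOIs is that the same identifier that resolves in a browser can also return citable metadata through content negotiation. A minimal sketch, using a placeholder (not a real) Figshare-style DOI:

```python
import requests

# Ask the DOI resolver for machine-readable (CSL JSON) metadata
# instead of the landing page, via HTTP content negotiation.
doi = "10.6084/m9.figshare.0000000"  # placeholder DOI, not a real item
resp = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
resp.raise_for_status()
meta = resp.json()
print(meta.get("title"), "|", meta.get("publisher"))
```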

“We’ve been working with Figshare since the beginning,” said Laure Haak, Executive Director of ORCID. “We are excited to be connecting the loop between iDs and DOIs and providing a persistent and accessible home for our documents.  We hope this can serve as a useful example to the research community of interoperability in action, fueled by identifiers.”

Mark Hahnel, CEO and Founder of Figshare said:

“Figshare were launch partners with ORCID, and we are big supporters of their work, sharing similar goals. We are very happy to extend our partnership, as persistent identifiers are vital to the flow of accurate research information.”

The portal can be viewed now at

