Making Peer Review More Open

Traditional peer review relies on anonymous reviewers to thoughtfully assess and critique an author's work. The idea is that blind review makes the evaluation process more fair and impartial, but many scholars have questioned whether this is always the case. Open review has the potential to make scholarship more transparent and more collaborative. It also makes it easier for researchers to get credit for the work they do reviewing the scholarship of their peers. Publishers in the sciences as well as the humanities and social sciences have been experimenting with open review for almost two decades, but only recently does open review seem to have reached a tipping point. So what exactly is open review, and what does it entail?

Should reviewers be expected to review supporting datasets and code?

by John Helliwell, Emeritus Professor of Chemistry, University of Manchester, and DSc Physics, University of York (@HelliwellJohn). For the meeting entitled "Transparency, Reward, and Innovation in Peer Review in the Life Sciences", to be held on Feb. 7-9, 2018 at the Howard Hughes Medical Institute in Chevy Chase, Maryland (http://asapbio.org/peer-review), I have been asked by […]

Should scientists receive credit for peer review?

by Stephen Curry, Professor of Structural Biology, Imperial College (@Stephen_Curry). As the song goes, and I have in mind the Beatles' 1963 cover version of "Money (That's What I Want)", "the best things in life are free." But is peer review one of them? The freely given service that many scientists provide as validation […]

New tool to identify fakes in the peer review process

Combating fake peer review

Fake reviews continue to be a serious concern in medical publishing, putting data integrity and trust in the scientific community at risk. As recently reported by Retraction Watch, a new tool designed by Clarivate Analytics will be available in December 2017 to help journals identify fake reviews and prevent publication of articles that rely on them.

Fake peer review has been responsible for the retraction of over 500 articles to date, and the issue has caused some journals to reconsider their policy of requesting reviewer nominations from authors. Many journals nonetheless retain this policy, because recruiting peer reviewers is becoming increasingly difficult and time-consuming. While some fake reviewers are easy to identify, others are not. The new fraud prevention tool can be used at multiple points during the submission and review process, and examines 30 different factors that can help to identify fake profiles, impersonators, and unusual activity.
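
Clarivate has not published the 30 factors its tool checks, so the following is only a hypothetical sketch of how a rule-based screen over reviewer metadata might work. Every field name, signal, and threshold below is an illustrative assumption, not a description of the actual product.

```python
from dataclasses import dataclass

# Hypothetical sketch: these signals are illustrative assumptions about the
# kind of metadata a fake-reviewer screen might weigh, not Clarivate's actual
# factors, which have not been made public.

@dataclass
class ReviewerProfile:
    email: str
    claimed_affiliation_domain: str   # e.g. "manchester.ac.uk"
    suggested_by_author: bool         # was this reviewer nominated by the author?
    review_turnaround_hours: float
    verified_publication_count: int   # publications traceable to this person

FREE_EMAIL_DOMAINS = {"gmail.com", "yahoo.com", "163.com", "qq.com"}

def suspicion_score(p: ReviewerProfile) -> int:
    """Sum simple red flags; a higher score means 'investigate further'."""
    score = 0
    domain = p.email.split("@")[-1].lower()
    if domain in FREE_EMAIL_DOMAINS:
        score += 1  # non-institutional address: a weak signal on its own
    if domain != p.claimed_affiliation_domain:
        score += 1  # address does not match the claimed institution
    if p.suggested_by_author:
        score += 1  # author nomination is the classic impersonation vector
    if p.review_turnaround_hours < 24:
        score += 1  # genuine expert reviews rarely arrive within a day
    if p.verified_publication_count == 0:
        score += 1  # no verifiable publication record in the field
    return score

reviewer = ReviewerProfile(
    email="j.smith@gmail.com",
    claimed_affiliation_domain="manchester.ac.uk",
    suggested_by_author=True,
    review_turnaround_hours=6,
    verified_publication_count=0,
)
print(suspicion_score(reviewer))  # prints 5: flag for editorial follow-up
```

The point of scoring rather than blocking is the same as described above: the tool alerts the journal, and a human editor decides whether to investigate.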

Upon identification of a possible fake review, the journal is alerted and the editor or publisher is then able to decide whether to investigate further and whether to accept the article for publication. It is anticipated that early identification of possible fake reviews during the submission and peer review process will reduce the number of retractions and help to protect the reputation of medical publishing.


Summary by Philippa Flemming, PhD, from Aspire Scientific



The post New tool to identify fakes in the peer review process appeared first on The Publication Plan.

Gender Bias in Peer Review: An Interview with Brooks Hanson and Jory Lerback

Earlier this year, an American Geophysical Union analysis of peer review in its journals revealed evidence of gender bias: women were less likely to be invited to review than men, despite being more likely to be the first author of an accepted paper. In this interview, Brooks Hanson (Senior Vice President, Publications) and Jory Lerback (former Data Analyst) describe the original study and the AGU's efforts to address this bias.

The post Gender Bias in Peer Review: An Interview with Brooks Hanson and Jory Lerback appeared first on The Scholarly Kitchen.

Cross-Journal Initiative Helps Manuscripts Take Flight

All properly executed science deserves to be published as quickly as possible. One common frustration of scientists related to publication speed is the review-rejection cycle, which in action resembles a cross between running on a hamster wheel and jumping through a hula hoop. To offer authors a way out of this cycle of delay, PLOS launched a journal transfer initiative earlier this year that gives authors of papers not initially accepted by a subset of PLOS journals an alternative to starting from scratch.

DOI for Peer Reviews

Most journals follow a peer review process to assess and select manuscripts for publication. Peer reviews can provide you with information on the strengths and weaknesses of your paper. The reviewers are either chosen by the publisher or suggested by the author; they should be unbiased and expert in the subject area they are reviewing. Reviewers…
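
A DOI makes a review report a citable object whose metadata can be looked up like any other registered work. As a minimal sketch, here is how such a record could be fetched from the public Crossref REST API; the DOI below is a hypothetical placeholder, so substitute the DOI printed on an actual published review report.

```python
import json
import urllib.request

# Placeholder DOI: not a real registration. Replace with the DOI of an
# actual published peer review report before running.
DOI = "10.xxxx/example-peer-review"

def fetch_metadata(doi: str) -> dict:
    """Return the Crossref metadata record for a DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["message"]

record = fetch_metadata(DOI)
# Review reports registered with Crossref carry the work type "peer-review".
print(record.get("type"), record.get("title"))
```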

The problem with scientific publishing

And how to fix it

Periodical journals have been the principal means of disseminating science since the 17th century. Over the intervening three and a half centuries, journals have established conventions for publication, such as insisting on independent (and usually anonymous) peer review of submissions, that are intended to preserve the integrity of the scientific process. But they have come under increasing attack in recent years. What is wrong with scientific publishing in journals, and how can it be fixed?

Do peer review models affect junior doctors’ trust in journals?

Every day doctors make decisions on how to treat their patients based on evidence published in medical journals. The fact that these treatment decisions affect the wellbeing and quality of life of real people reflects the extent to which published literature is trusted, at least by the medical profession.

The only requirement for publication is that the research undergoes peer review, a system we know is not perfect. New models of peer review have been developed to address some of the recognized flaws in the current system. The world of publishing has embraced these concerns, and hardly a conference or meeting goes by without at least one discussion of what is wrong with peer review and what we should be doing to fix it.

Awareness among practicing physicians

A conversation with my clinical co-authors highlighted that, while there is significant ‘angst’ about peer review in some fields, these concerns are going unnoticed by practicing physicians. We wondered how far this was true and whether it really mattered. This prompted our survey, recently published in Research Integrity and Peer Review, which asked trainee doctors whether they were aware of different peer review models and how far they trusted the contents of various medical journals.

Unsurprisingly, the doctors we surveyed trusted familiar journal names such as the Lancet, BMJ, and NEJM. They paid little attention to the type of peer review model a journal adopts and had little interest in open peer review, where the names of the peer reviewers are known to the authors (and vice versa). They also expressed little desire to scrutinize peer review reports themselves in journals that operate open peer review; this is ironic, given that open peer review was pioneered in medicine to increase transparency and accountability, but it is also unsurprising.

Why does this matter?

Our study suggests that peer review matters to our respondents precisely because publication is not seen as part of an evolving, self-correcting process. There is a belief that if an article is peer reviewed and published, it can be unquestioningly viewed as valid.

For journal editors and publishers this highlights their responsibility to deliver on these expectations by focusing on the quality of peer review, not just on the speed and efficiency of the process.

The study also raises many broader questions: should there be an alternative approach to peer review in medicine? Should systematic reviews of medical research consider the peer review model? Should those who write evidence-based clinical guidelines for junior doctors do the same? Should doctors be given training on how to assess peer review reports? How realistic or fair is it to add peer review to an already stretched medical curriculum? What is the value of opening peer review if the end user does not look at it?

We acknowledge that our survey covered a small and selective sample of doctors in training. Nonetheless, it offers a first insight into how a specific community views peer review innovations. We hope it will stimulate more interest within the medical community in how medical research is peer reviewed and validated, and prompt the publishing industry to think about medicine-specific innovations that meet the expectations of practicing doctors.

The post Do peer review models affect junior doctors’ trust in journals? appeared first on BioMed Central blog.

Opening up the black box of peer review

I recently participated in a workshop hosted by the University of Kent Business School; the subject was whether metrics or peer review is the better tool to support research assessment. Thankfully, we didn't get embroiled in the sport of 'metric bashing', but instead agreed that one size does not fit all, and that whatever research assessment we do needs to take account of context and be proportionate.

There are many reasons why we want to assess research: to identify success in relation to goals, to allocate finite resources, to build capacity, to reward and incentivise researchers, and as a starting point for further research. But these are all different questions, and the information you need to answer them is not always the same.

What do we know about peer review?

In recent years, while researchers and evaluators have started to swim with the metric tide and explore how new metrics have value in different contexts, 'peer review', i.e. the qualitative way that research and researchers are assessed, (a) is still described as if it were one thing, and (b) remains a largely unknown 'quantity'. I am not sure whether this is ironic (or even intentional), but there remains a dearth of information on how peer review works (or doesn't).

Essentially, getting an expert's view on a piece of research, be that a grant application, a piece submitted for publication to a journal, or work already published, can be helpful to science. However, there is now a significant body of evidence suggesting that how the scientific community organises, requests and manages its expert input may not be as optimal as many consumers of its output assume. A 2011 UK House of Commons report on the state of peer review concluded that while it "is crucial to the reputation and reliability of scientific research", many scientists believe the system stifles innovation and "there is little solid evidence on its efficacy."

Indeed, during the production of the HEFCE-commissioned 2015 Metric Tide report, we found ourselves judging the value of quantitative metrics by the extent to which they replicated the patterns of choices made by 'peers'. This was done without any solid evidence for the veracity and accuracy of the peer review decisions themselves, following a long-established tradition for reviews of the mechanics of peer review to cite reservations about the process before eventually concluding that 'it' remains the gold standard. As one speaker at the University of Kent workshop put it, "people talking about the gold standard [of peer review] maybe don't want to open up their black boxes." However, things might be changing.

Bringing in the experts at the right time

In grant assessment, there is increasing evidence that how and when we use experts in the grant selection and funding process may be inefficient and lack precision; see, for example, analyses from Nature, the NIH, Science, and RAND. Several funding agencies are now experimenting with approaches that use expert input at different stages of the grant funding cycle and to different degrees. The aim is to encourage innovation while bringing efficiencies to the process, including by reducing the opportunity for bias and, practically, reducing the burden on peers. Examples include Wellcome Trust Investigator Awards, HRC Explorer grants, VolkswagenStiftung Experiment grants, and the Velux Foundation Villum Experiment.

Opening peer review in publishing

In the publishing world, there is considerable momentum towards the adoption of models in which research is shared much earlier and more openly. Preprint repositories such as bioRxiv, and post-publication peer review platforms such as F1000Research, Wellcome Open Research, and the soon-to-be-launched Gates Open Research and UCL Child Health Open Research, enable open commenting and open peer review, respectively, as the default. Such models not only provide transparency and accelerate access to research findings and data for all users, but also fundamentally change the role of experts to one focused on providing constructive feedback and helping research advance, even if they don't like or agree with what they see! Furthermore, opening up access to what experts have said about others' work is an important step towards reducing the selection bias in what is published and allowing readers more autonomy to reach their own conclusions about what they see.

Creating a picture of the workload

Perhaps the most obvious way in which 'peer review' is currently breaking is under the sheer weight of what publishers, funding agencies and institutions are asking experts to do. Visibility around a contribution gives experts the opportunity to receive recognition for the effort and contributions they have made to the research enterprise in its broadest sense (as is already underway with ORCID), thus providing an incentive to get involved. And for funding agencies, publishers and institutions, more information about who is providing the expert input, and therefore where the burden lies, can help them decide whom to approach, and when and how, maximising the chance of a useful response and bringing efficiency and effectiveness to the process.
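
As a concrete illustration of that visibility, review activity recorded against an ORCID iD can be read back programmatically. The sketch below assumes the v3.0 public ORCID API endpoint and JSON layout; the iD used is ORCID's well-known example record, so the count it prints may well be zero.

```python
import json
import urllib.request

# ORCID's documented example record (Josiah Carberry); substitute any
# real, public ORCID iD to inspect its recorded review activity.
ORCID_ID = "0000-0002-1825-0097"

def fetch_peer_reviews(orcid_id: str) -> dict:
    """Read the public peer-review section of an ORCID record (API v3.0)."""
    url = f"https://pub.orcid.org/v3.0/{orcid_id}/peer-reviews"
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)

data = fetch_peer_reviews(ORCID_ID)
# Each "group" bundles review activity for one convening organisation
# (a journal or funder), per the v3.0 response format.
print(len(data.get("group", [])), "peer-review groups on this record")
```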

The recent acquisition of Publons by Clarivate is a clear indication of the current demand, and the likely potential, for more information about expert input to research, and it should go some way towards addressing the dearth of intelligence on how 'peer review' is working, and how it actually works.
