Building trust through badging

Anisa Rowhani-Farid and Adrian Barnett recently published the second version of their Research Article, in which they compared data-sharing in two journals and examined whether badges were associated with increased sharing. In this guest blog, Anisa Rowhani-Farid describes what motivated her work and the results of her research.

Prior to the appearance of scientific journals in the 17th century, researchers were hesitant to share their findings with others. The pace of scientific advancement, however, changed radically with the establishment of the printing press, which paved the way for scientific journals in 1665, when Henry Oldenburg of the Royal Society launched Philosophical Transactions of the Royal Society. When the Royal Society was established in 1660 it adopted the motto Nullius in verba, which means ‘take nobody’s word for it’. From the very beginning, science was about verifying facts; it was about being open with data.

Continue reading “Building trust through badging”

“Interactivity in scientific figures is a key tool for data exploration and the scientific process”

Last summer we launched our interactive figures initiative with Plotly. Since then, we have published 22 interactive figures in seven articles across two platforms (F1000Research and MNI Open Research). Our collaboration with Plotly (and Code Ocean) was also covered in a recent Nature Toolbox article. Continue reading ““Interactivity in scientific figures is a key tool for data exploration and the scientific process””

Deep into the heart of open science

The AAAS Annual Meeting takes place in Austin from 15 to 19 February and F1000 will be there. We will try to get to the heart of open science while deep in the heart of Texas. The practice of open science is defined as “the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods.” Continue reading “Deep into the heart of open science”

How best to fund knowledgebases – an author and reviewer in conversation

A recent Research Article published by Chiara Gabella (CG), SIB Swiss Institute of Bioinformatics, and colleagues explored how best to fund knowledgebases, which many life scientists rely on as highly accurate and reliable sources of scientific information. There are many questions about how to fund these; in her article, Chiara uses UniProtKB as a case study. This knowledgebase is run by the UniProt Consortium – a collaboration between the European Bioinformatics Institute (EMBL-EBI), the SIB Swiss Institute of Bioinformatics and the Protein Information Resource (PIR). Chiara’s article was openly reviewed by Helen Berman (HB), Rutgers, The State University of New Jersey, who also works on a knowledgebase – the RCSB Protein Data Bank.

Continue reading “How best to fund knowledgebases – an author and reviewer in conversation”

When evaluating research, different metrics tell us different things

Science has long been accepted by policy makers as valuable; however, recently scientists and research institutions have been asked for evidence to justify their research. How this evidence is provided is grounds for lively debate.

Scientific peer review based on human judgement is time-consuming and complex. As a result, it has become commonplace to make assumptions about the quality of research based on indicators of reuse by other academics – the number of citations the corresponding articles receive. Citation impact is used as a proxy for quality in this way, though there are manifold issues with this proxy. Should negative citations count? Are all citations of equal merit? There is also likely to be a great deal of noise at large scale. Continue reading “When evaluating research, different metrics tell us different things”

Looking at the future of science communication in Germany

Tatyana Dubich was one of the organisers of this year’s N2 Science Communication Conference that took place last month to encourage science communication among early career researchers in Germany. In this guest blog, she gives us a round-up of some of the discussions that took place during the conference.

How could we improve science communication? If you were to ask a scientist, a journalist or an artist, you would most likely get completely different answers from each. All of them, however, would probably agree that efficient science communication is crucial. The N2 Science Communication Conference in Berlin brought together 160 participants, including scientists, journalists and artists, to discuss and learn from each other. The N2 organizational board aimed to actively involve doctoral researchers and provide them with a set of effective tools to increase the impact of the scientific message. Continue reading “Looking at the future of science communication in Germany”

So long static – we now support interactive Plotly figures in our articles!

Plotly enables users to create dozens of different types of charts, networks and maps, all of which have a basic level of interactivity: you can zoom, pan, rotate (for 3D plots) and hover your cursor over a data point to see its value(s). However, you can also add more interactivity by adding one or more advanced interactive features to your figures, such as dropdown options, sliders and dynamic animations.
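
To illustrate, here is a minimal sketch of such a figure using Plotly’s Python library (the data, labels and output file name are illustrative, and the dropdown is just one example of the advanced controls mentioned above):

```python
import plotly.graph_objects as go  # assumes the plotly package is installed

# Basic interactive scatter: zoom, pan and hover come built in.
fig = go.Figure(
    data=go.Scatter(
        x=[1, 2, 3, 4],
        y=[10, 15, 13, 17],
        mode="markers",
        text=["sample A", "sample B", "sample C", "sample D"],  # shown on hover
    )
)

# Optional advanced interactivity: a dropdown that restyles the markers.
fig.update_layout(
    updatemenus=[{
        "direction": "down",
        "buttons": [
            {"label": "Blue markers", "method": "restyle", "args": [{"marker.color": "blue"}]},
            {"label": "Red markers", "method": "restyle", "args": [{"marker.color": "red"}]},
        ],
    }]
)

fig.write_html("interactive_figure.html")  # self-contained HTML file for sharing
```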

To mark the introduction of this new feature, we are reducing Article Processing Charges* by 50% for all articles that include at least one Plotly figure. The submission deadline for the APC reduction is December 31st 2017. You can find more information about this, and about how to submit your Plotly figure, at the end of this post.

We plan to feature published interactive figures throughout the year, including a roundup of the best visualizations in late January 2018 on our and Plotly’s blogs.

“Plotly is thrilled to be working with F1000Research as scientific publishing transitions to interactive, online graphics. Plotly charts keep the data and chart intrinsically linked – a major improvement over submitting charts as static image files. Open research is the future and Plotly is proud to lend cutting edge tools to open science publications.” Jack Parmer, Plotly CEO

Finding clarity in interactivity

Scientific publishing made the transition to the web almost two decades ago, and yet we still treat online articles as if they have the same physical limitations as their printed equivalents. We even still use terms such as ‘papers’ and ‘preprints’ to refer to works that only exist online. The same is of course true for elements within articles such as figures, which have remained in the same static state since William Playfair drew the first statistical charts in 1786.

The entire purpose of a scientific figure is to help readers understand. When information is visualized graphically it is much easier to comprehend than a table densely packed with numbers or a long tract of text. However, biological and environmental systems are complex, and they are often difficult to represent in a static 2D object. This is especially true if the research involves many variables or large quantities of data. Many readers of scientific articles will have struggled to decipher over-plotted charts, or network graphs crowded with hundreds of nodes all labelled in an unreadable font size so that the figure fits within the paper’s margins. Being able to zoom, filter, and hover over individual data points to see their values addresses these challenges and helps readers to properly explore data at a much finer scale.

A very densely packed gene expression heatmap. Thankfully, you can select smaller regions of the chart, or use the zoom options in the top right, to get a better idea of what the heatmap shows. Chart from Plotly: https://plot.ly/ipython-notebooks/bioinformatics/
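
As a rough illustration of how such a zoomable heatmap can be built with Plotly’s Python library, here is a minimal sketch using a random matrix as a stand-in for a real expression table:

```python
import numpy as np
import plotly.graph_objects as go

# Hypothetical 200 genes x 50 samples; a real analysis would load measured values.
expression = np.random.rand(200, 50)

fig = go.Figure(data=go.Heatmap(z=expression, colorscale="Viridis"))
fig.update_layout(xaxis_title="Sample", yaxis_title="Gene index")
fig.write_html("heatmap.html")  # readers can zoom into any region in the browser
```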

 

The same data, different visualizations

Interactive and animated figures have other advantages over their static counterparts. If there are several ways to visualize your data, you no longer have to choose just one; if you want to demonstrate how different input values affect a model’s outputs, you can achieve this graphically; and if you want to represent the interplay of many variables, you can make use of dynamic changes in the size, color, shape, and location of data points over time.

This last point is most famously demonstrated by Hans and Ola Rosling’s Gapminder visualization, a dynamic graph showing the changes in life expectancy, income per person and population size for almost every country over the last 215 years. The graph helps tell a rich demographic story of human progress and of inequality in the global distribution of that progress. Their use of color, size changes and movement helps us to emotionally engage with the data, which in turn helps us appreciate the real-world processes that it represents. Hans’ Gapminder lectures might not have racked up tens of millions of views had the visualization been static (plus it would probably have had to be split into several graphs to make sense). Scientific articles are becoming increasingly difficult to read; used appropriately, interactive figures have the potential to help counteract this trend. This is especially true for communicating findings to policy makers and the wider general public.

A Plotly version of the Gapminder visualization, which shows global changes in wealth, health and population (represented by bubble size) over the last 55 years (N.B. the original visualization covers the last 215 years). Chart created by Plotly.
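
For readers who want to try something similar, here is a minimal sketch of an animated Gapminder-style chart using the plotly.express API; this assumes a recent Plotly release, which bundles a sample of the Gapminder dataset:

```python
import plotly.express as px

gapminder = px.data.gapminder()  # bundled sample of the Gapminder data
fig = px.scatter(
    gapminder,
    x="gdpPercap", y="lifeExp",
    size="pop", color="continent", hover_name="country",
    animation_frame="year",   # the play button steps through the years
    log_x=True, size_max=60, range_y=[20, 90],
)
fig.show()
```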

 

Partnering for flexibility and scalability

We are excited to partner with Plotly to help researchers visualize their data without the traditional constraints. Some scientific publishers, including us, have experimented with publishing interactive figures before; we even went as far as publishing the first ‘living’ figure. However, these efforts were custom-built attempts that were either scalable but not flexible, or flexible but not scalable. Plotly, which launched the same year as F1000Research, has built a platform that excels at both, with elegant aesthetics as an added bonus. So, we are leaving it to the data visualization experts and focusing our efforts on supporting their tech.

 

Make your data tell a story

We look forward to seeing your creative ways of visualizing your data using this new feature for our articles. In case you need some inspiration, Plotly’s Modern Data and Medium blogs showcase lots of scientific and non-scientific interactive charts.

Instructions and FAQs for creating and submitting interactive Plotly figures to F1000Research can be found here.

*Articles over 8000 words of main text will still incur the long article surcharge of $1000. For the definition of ‘main text’, see our Article-Processing Charge page. Articles already published on F1000Research can be updated to include interactive figures, but they will not be eligible for the APC reduction.

Knowledge Networking in Scientific Practice

Technology is being incorporated more and more into our daily lives. Social media platforms allow researchers to easily connect with one another and to quickly find citations or resources. Artificial intelligence and big data make it relatively easy to obtain the information scientists need to move forward with their projects. With the extended push to publish data, large amounts of data can be mined, allowing disparate studies to be combined, bigger patterns to be identified, and potentially further-reaching conclusions to be drawn. With this comes the demand for researchers not only to publish their own articles but also to stay knowledgeable and on top of current research. Knowledge networking, a way of compiling and sharing information, can help researchers find their way through the mounds of data and resources so that these conclusions can be made.

Finding information, be it particular facts or a specific citation, is usually associated with finding the right publication – the book or journal to reference – containing the needed information. With internet-based searching, research has become less about discovering information and more about constructing search queries and filtering the results into something useful. Open access makes it easier to obtain and share knowledge, intellectual resources, and data, but being able to parse through and distinguish between relevant and irrelevant information is crucial.

Knowledge networking, a way of compiling and sharing information, can help researchers find their way through the mounds of data and resources so that these conclusions can be made.

Knowledge networking is a dynamic process in which knowledge is distributed and developed through increasing access to information and augmented by a community. A few different types of communities enable the spread of knowledge, each of which is expanded on below: publishing/networking, academic platforms, and specialist communities.

Publishing and networking

Journal publications are still the standard format for scientific dissemination and discourse. The way content is published has drastically changed over the past decade, however, with sharing raw data, presentations, and preprints becoming more common practice.

Many online publication databases, like PubMed and Web of Science, though not necessarily exclusively open access, have search functions that allow users to track citations, parse metadata, and filter by authorship. Some also allow users to track a certain topic and get email notifications of new publications. Branching out of the model of user-based searches and instead taking a more metadata-driven approach, Semantic Scholar, an academic search engine, utilizes artificial intelligence with the goal to “connect the dots between disparate studies to identify novel hypotheses and to suggest experiments which would otherwise be missed.” Similarly, F1000Workspace uses algorithms to tailor searches and identify important papers within the field, as well as providing a way to organize references and share documents with other researchers.

It is important for scientists to track and communicate with each other, especially when trying to establish a collaboration with another research group, as they need to connect personally as well as intellectually. Author IDs, like those from ORCID, aid in tying the work in a publication to a subject specialist and can be valuable in linking projects to people. In addition to ORCID, social media sites like LinkedIn and even Twitter connect researchers together. Even though these platforms are more geared toward job searches and visibility respectively, they can be valuable in easily connecting people.

It is important for scientists to track and communicate with each other, especially when trying to establish a collaboration with another research group, as they need to connect personally as well as intellectually.

Academic community platforms

Within academic spheres, a myriad of software tools are used to connect researchers and to aid in data hosting and paper writing. Universities frequently use internal private services that require authorization, like Dropbox, due to their security. Platforms like Figshare and many other repositories host data and large databanks for any discipline. Many open access data banks, like the Protein Data Bank (PDB), which holds structural information on proteins, must be used before a paper is published, ensuring that the data are available for future use.

There are community-based platforms like ResearchGate, with forum-like spaces to ask research-related questions. On such platforms, individuals can be linked together on projects, publications can be linked, and interesting papers that are hidden behind paywalls can be requested directly from the author. Site members can follow a research interest, in addition to following individual members. ResearchGate indexes self-published information on user profiles to suggest connections between members who have similar interests.

Specialist communities

There are highly specific communities and academic platforms available that cater for specialist interests, such as MalariaWorld. These platforms allow all attention to focus on solving a very specific problem. Moreover, with the drive towards collaboration, the identification of experts within a given field is helpful. Such specialist communities allow individuals with a particular skill set to be identified and help with networking.

Connecting the networks

Creating a sufficient knowledge network is a significant undertaking. However, when creating a platform, an organization does not necessarily have to reinvent the wheel. Instead of each group defining their own metadata algorithms, their own ways of conducting social media, and inventing new methods of commenting or Q&A sections, perhaps what is needed is the combination of these (micro-)services to incorporate the best of what already exists. A significant resource for knowledge networking in this case would not be a single organization or piece of software that is able to do it all, but one that links together the best of each service to get experts disseminating information.

 

Opening up the black box of peer review

I recently participated in a workshop hosted by the University of Kent Business School – the subject was whether metrics or peer review are the best tools to support research assessment. Thankfully, we didn’t get embroiled in the sport of ‘metric bashing’, but instead agreed that one size does not fit all and that whatever research assessment we do, while taking account of context, needs to be proportionate.

There are many reasons why we want to assess research – to identify success in relation to goals, to allocate finite resources, to build capacity, to reward and incentivise researchers, as a starting point for further research – but these are all different questions, and the information you need to answer them is not always going to be the same.

 

What do we know about peer review?

In recent years, while researchers and evaluators have started to swim with the metric tide and explore how new metrics have value in different contexts, ‘peer review’, i.e., the qualitative way that research and researchers are assessed, is (a) still described as if it is one thing, and (b) remains a largely unknown ‘quantity’. I am not sure whether this is ironic (or intentional?), but there remains a dearth of information on how peer review works (or doesn’t).

Essentially, getting an expert’s view on a piece of research – be that in a grant application, a piece submitted for publication to a journal, or work already published – can be helpful to science. However, there is now a significant body of evidence suggesting that how the scientific community organises, requests and manages its expert input may not be as optimal as many consumers of its output assume. A 2011 UK House of Commons report on the state of peer review concluded that while it “is crucial to the reputation and reliability of scientific research”, many scientists believe the system stifles innovation and “there is little solid evidence on its efficacy.” Indeed, during the production of the HEFCE-commissioned 2015 Metric Tide report, we found ourselves judging the value of quantitative metrics based on the extent to which they replicated the patterns of choices made by ‘peers’. This was done without any solid evidence to support the veracity and accuracy of the peer review decisions themselves, following a long-established tradition for reviews on the mechanics of peer review to cite reservations about the process before eventually concluding that ‘it’ remains the gold standard. As one speaker at the University of Kent workshop surmised, “people talking about the gold standard [of peer review] maybe don’t want to open up their black boxes.” However, things might be changing.

 

Bringing in the experts at the right time

In grant assessment, there is increasing evidence that how and when we use experts in the grant selection and funding process may be inefficient and lack precision; see for example: Nature; NIH; Science and RAND. Several funding agencies are now experimenting with approaches that use expert input at different stages in the grant funding cycle and to different degrees. The aim is to encourage innovation while bringing efficiencies to the process, including by reducing the opportunity for bias and, practically, reducing the burden on peers. Examples include Wellcome Trust Investigator Award grants, HRC Explorer grants, Volkswagenstiftung Experiment grants, and Velux Foundation Villum Experiment grants.

 

Opening peer review in publishing

In the publishing world, there is considerable momentum towards the adoption of models in which research is shared much earlier and more openly. Preprint repositories such as bioRxiv and post-publication peer review platforms, such as F1000Research, Wellcome Open Research, and the soon-to-be-launched Gates Open Research and UCL Child Health Open Research, enable open commenting and open peer review respectively as the default. Such models not only provide transparency and accelerate access to research findings and data for all users, but they also fundamentally change the role of experts – to one focused on providing constructive feedback and helping research to advance – even if they don’t like or agree with what they see! Furthermore, opening up access to what experts have said about others’ work is an important step towards reducing the selection bias of what is published and allowing readers more autonomy to reach their own conclusions about what they see.

 

Creating a picture of the workload

Perhaps the most obvious way in which ‘peer review’ is currently breaking is under the sheer weight of what publishers, funding agencies and institutions are asking experts to do. Visibility around a contribution presents the opportunity for experts to receive recognition for the effort and contributions they have made to the research enterprise in its broadest sense – as is already underway with ORCID – thus providing an incentive to get involved. And for funding agencies, publishers and institutions, more information about who is providing the expert input, and therefore where the burden lies, can help them to consider who, when and how they approach experts, maximising the chance of a useful response and bringing efficiencies and effectiveness to the process.

The recent acquisition of Publons by Clarivate is a clear indication of the current demand and likely potential for more information about expert input to research – and should go some way to addressing the dearth of intelligence on how ‘peer review’ is working – and actually works.

Show me the code

Answering research questions or making new discoveries in this day and age is often dependent on software tools, as Luis Bastiao Silva from BMD Software recently highlighted. By publishing the details of your software tool and making it open source, you can make a real difference to the research of others. Plus, publishing a software tool article is an excellent way to get credit for what you have created; and the F1000Research platform, with its article versioning system, support for LaTeX submissions and proper syntax highlighting, is particularly well suited to publishing software articles.

 

A good researcher names (and publishes) their tools

Not only does software facilitate research, it is also a first-class research output, as evidenced by our latest software tool articles. Recently, a group of researchers from the Keck School of Medicine, University of Southern California, published a software tool article describing Arkas – a novel RNA-Seq analysis pipeline combining data preparation, quality control, data analysis and secondary analysis tools. As noted by Harold Pimentel, Stanford University, in his peer review report, the pipeline usefully documents software versions and enforces consistency, allowing users to easily identify any potential differences among versions.

Software tools are also essential to the field of agriculture, as demonstrated by a web repository for Brassica phenotype data named the Brassica Information Portal (BIP). The portal, developed by Annemarie Eckes and colleagues from the Earlham Institute and published in our GODAN gateway, serves as a centralised source of trait data which is both open access and open source. Christopher J. Rawlings of Rothamsted Research applauds BIP’s greater use of ontologies in his peer review report.

Last week, in a paper published on F1000Research, Rafael Jimenez from the ELIXIR Hub and colleagues outlined recommendations to improve the quality and sustainability of research software. The paper passed peer review in only 12 days following publication. Unlike similar initiatives, the suggestions were drafted with more than just software developers in mind; the recommendations are directed at funders, institutions, journals, group leaders, and project managers. The authors stress the importance of making source code publicly accessible from the start in order to increase reuse and collaboration, or as Linus Torvalds once put it: “Talk is cheap. Show me the code.”

 

Show us your code

To date, F1000Research has published over 160 software tool articles and we’re looking for more! Our current call for software tool papers, entitled ‘Show me the code’, intends to raise the profile of research software while highlighting its diversity. This means we’re looking for a wide range of papers spanning the life sciences and medicine, including but not limited to: Bioconductor vignettes, Cytoscape apps, Docker containers, Galaxy workflows, and R packages.

Interested? Submit your software tool article by 30 November 2017 to be part of the buzz. Be sure to mention ‘show me the code’ at submission stage to receive 50% off your next software tool article.

Health care and social media: educating the digital patient

Social media has become prolific in everyday life and allows the instantaneous sharing of information, which can include health care information. The authors of a Research Note published on F1000Research suggest that, as medical vocabulary becomes more prevalent on social media, more comprehensible language should be used. In this guest blog, Farris Timimi, a cardiologist, Medical Director of the Mayo Clinic Social Media Network and well-known health care Twitter user, gives his view on this.

Health care literacy continues to be a challenge. We all recognize the impact of literacy on quality outcomes, ranging from accessing health care and understanding the risks and benefits of tests and treatment to complying with medical advice. Health literacy can include a variety of things, including cultural, visual, computer and information comprehension; however, difficulty understanding written information is often the most important and may have the greatest impact on health-related outcomes.

Literacy and social media

The authors of this Research Note have demonstrated the potential application of social media to serve as an aid to standard educational material. In particular, as the form of communication on Twitter is uniquely brief, it may foster a lower readability score than standard health care educational material. While this observation is interesting, barriers surrounding social media in health care, reflecting fear of its use, continue to be a challenge that limits the application and innovation of this technology.

It is the fear of these perceived barriers that prevents the full use of social media by health care professionals. These include fear of social disinhibition, reimbursement and compliance with privacy laws (e.g., HIPAA in the United States, the Directive on Data Protection in the EU); yet the most powerful fear is that of unprofessional interactions online. While these imagined fears are greater than actual transgressions, we have found there are three interventions that need to be put in place to address these concerns: clear guidelines, orientation and onboarding for new employees, and meaningful social media training.

Social networks as tools

We recognize how much time is spent in social networks by our patients. Our co-workers and employees invest similar time in the same networks. Consider our capacity for engagement in health care if we can strategically align both groups. To do so we need to view social networks as tools rather than toys, and ensure that our employees apply them professionally.

Our approach at Mayo Clinic has centered on the latter, with the development of a CME-certified training module for health care professionals focusing on social media, covering both its pragmatic application and professionalism in its use.

It is critical that we in health care view social networks and social media as an additional resource in medicine rather than a risk we need to mitigate. Only by doing so can we move from observation to intervention, a direction outlined by Hoedebecke et al in their article.

Glass half full: optimism for the reproducibility crisis

Last month, Richard Harris (of NPR fame) underlined the reproducibility crisis with an accessible narrative wittily yet aptly titled ‘Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions’. In what has been described as a hard-hitting exposé of the dysfunctional biomedical research system, Harris refers to a study led by Leonard P. Freedman, President of the Global Biological Standards Institute (GBSI), which found that approximately half of all preclinical research isn’t trustworthy.

And while many researchers may still be coming to terms with the crisis, this week Freedman and colleagues have delved even further into the issue at hand, publishing a comprehensive review of the initiatives being taken to improve reproducibility.  Rather than focusing solely on the problem, ‘Reproducibility2020: Progress and priorities’ provides funders, publishers, researchers, and other stakeholders with actionable recommendations.

Much to our delight, the works of Harris and Freedman converge on our Preclinical Reproducibility and Robustness channel.  This channel facilitates the open and transparent publication and discussion of confirmatory and non-confirmatory studies in biomedical research.  Alongside our open data and method policies, this space was developed as part of our continued efforts to implement publishing practices which promote reproducibility.

Here, Leonard discusses the Reproducibility2020 initiative and offers some welcome optimism.

 

Why did GBSI decide to carry out this review?

We are optimistic for the future of research reproducibility, and this paper is really intended to demonstrate that the biomedical research community recognizes there is a problem and is committed to fixing it.

While much has been written about the various drivers that have contributed to irreproducibility in research, much less has been focused on the solutions. We undertook this evaluation because we believe that assessing and documenting the progress that has been made to date, and identifying the challenges that remain are equally important steps to resolution. The Report is the first comprehensive review of the initiatives being taken to improve reproducibility. It identifies action and impact that has been achieved by the biomedical research community and outlines priorities going forward.

This paper is really intended to demonstrate that the biomedical research community recognizes there is a problem and is committed to fixing it.

GBSI will update the Report annually, and new developments are posted frequently at GBSI.org. Hopefully, the lessons learned from these early efforts will assist all stakeholders seeking to scale up or replicate successful initiatives.  For organizations exploring ways to become involved, the Report identifies key examples of the roles of funders, journals, researchers and other stakeholders, and provides recommended actions for future progress.

Can you give a brief overview of what the main sources of irreproducibility are and how they can be addressed?

Since the widespread acknowledgement that reproducibility is a problem facing the biomedical research community, stakeholders have responded with innovation and policy change. The community is taking steps to work together and address the complexities of improving rigor and reproducibility, and the Report highlights many tangible examples of community-led actions.

The Report is the first comprehensive review of the initiatives being taken to improve reproducibility

We have grouped the causes of irreproducibility into the four general aspects that really make up the research process: study design and data analysis, reagents and reference materials, laboratory protocols, and reporting and review. But what sets the Report apart are the broad strategies for continued improvement of reproducibility, namely: 1) to drive research quality through strengthened journal and funder policies; 2) to create high quality online training and proficiency testing and make them accessible; 3) to engage all stakeholders to establish community-accepted standards and guidelines; and 4) to enhance open access to data and methodologies.

But the challenges are still great and some problems are more entrenched than others. Let’s start with the incentive structure firmly in place in academia. This dynamic must seek a better balance between the pressures of career advancement and producing rigorous research. Ultimately, for impactful initiatives such as those described in the Report to be embraced by the community, the so-called perverse incentives currently driving academic success, such as publishing in the highest impact journals or bringing in the biggest grant dollars, must be reassessed and ultimately changed.

What do you think have been the main changes in making biomedical research more reproducible since the ‘Case for Standards’ was first published?

By far the greatest progress over these few years has been in the stakeholders themselves recognizing the severity of the problem and the importance of taking active steps for improvement.  Every stakeholder group is now addressing the issues, including journals, government funding agencies, private funders, academicians and industry. That’s crucial because there is not one simple fix—it is a community-wide problem and it will take a community-wide effort to achieve solutions.

By far the greatest progress over these few years has been in the stakeholders themselves recognizing the severity of the problem and the importance of taking active steps for improvement.  

Can you tell us about the Reproducibility2020 Initiative?

GBSI launched the Reproducibility2020 Initiative in February 2016 as a challenge to all biomedical research community stakeholders to join with us to ensure that solutions are in place to improve reproducibility by the year 2020. As a leader in this global effort, GBSI is devoting its resources to effect change where its programs can make the greatest difference, work with partners to advance the broader agenda to improve the quality of scientific discovery in pre-clinical biological research settings, and keep stakeholders informed about movement and progress.

What plans does GBSI have to ensure reproducibility remains a priority in the future?

As a global leader in championing effective solutions to reproducibility, GBSI has recently completed a new 5-year strategic plan and will expand its work in the following areas:

Advancing standards and best practices to ensure quality and advance discovery in basic biomedical and translational research

  • leading a global initiative toward improving the validation of reagents—particularly cells and antibodies
  • working with community leaders to address the growing need for standards in emerging fields, such as regenerative medicine, engineering biology, and lab automation

Promoting education and training

  • ensuring that high quality, accessible online training modules are available to both emerging and experienced researchers who are eager to improve their proficiencies in new and evolving best practices
Advocating improved policies that increase rigor, accountability, and open access to data and methodologies by journals, funders, academic institutions, and other research community stakeholders

In addition, GBSI will continue to bring the community together through a new membership program, expanded meetings and events, and publications of high impact for the field.

 

 

A connected culture of collaboration: recognising and understanding its value for research

I contributed to the recently published Digital Science report on the Connected Culture of Collaboration. In it, I explore why it is important for science to understand more about how collaboration, multi-disciplinary research and team science work to best effect, and perhaps when collaboration might not be the best option. There is also the possibility of what MIT researchers Magdalini Papadaki and Gigi Hirsch have coined ‘consortium fatigue’ arising, whereby large-scale research may result in, for example, low productivity or a sense of redundancy.

‘Science of science’ (or as the MIT team suggested ‘science of collaboration’) always seems woefully neglected, and under-funded, given that if we knew how to optimise support for science and research, we should be able to produce many more of those outputs that funding agencies are keen to count, and accelerate their impact both within and beyond academia.

Why collaborate?

There is more to know about when and how to forge, sustain and nurture collaboration.

The starting point for understanding collaboration is, as Laure Haak, Executive Director of ORCID, says in her foreword to the Collaboration report, that ‘we need to be intentional with our infrastructure’. The way research is set up, directed and executed, where, with what, with whom, and all the other things that can influence the results of an experiment at any given time, on any given day, provide the context that is likely to be pivotal in making a breakthrough, or not. Put simply, the environment and resources, and the team, available for scientific research are crucial.

It is easy to find examples of multi-disciplinary teams and collaborations that have produced significant leaps forward and far-reaching impact. A recent analysis of the UK’s Research Excellence Framework (REF) found that over 80 per cent of the REF impact case studies described impact that was based upon multidisciplinary research. There is, however, more to know about when and how to forge, sustain and nurture collaboration. There is also evidence that working as part of a large team or collaboration can have a detrimental effect on the careers of some individuals, particularly while research articles remain a researcher’s main currency.

Assigning authors’ roles

To provide an updated view of authorship and greater transparency around research contributions, the Contributor Roles Taxonomy (CRediT) was developed.

Original research papers with a small number of authors, particularly in the life sciences, have become increasingly rare. Therefore, using author position to estimate the level of each researcher’s contribution is not useful, nor is it easy to distinguish the role each author played. To provide an updated view of authorship and greater transparency around research contributions, the Contributor Roles Taxonomy (CRediT) was developed.

CRediT is the result of cross-sector collaboration between medical journal editors, researchers, research institutions, funding agencies, publishers and learned societies, and it provides a simple taxonomy of roles that can be assigned as descriptors of individuals’ contributions to scholarly published output.

Individual contributions are captured in a structured format and stored as metadata during an article’s submission process. The taxonomy, going well beyond the concept of ‘authorship’, includes a range of roles such as data curation; development of design methodology; programming and software development; application of statistical or mathematical techniques to analyze data; and data visualization. Assigning these roles to those putting their name to a piece of scholarly output allows individuals to be recognised for specific skills and contributions to the research enterprise.
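
As a purely hypothetical sketch, contributor metadata carrying CRediT roles might be recorded along the following lines; the field names are illustrative rather than any particular publisher’s schema, while the role labels are taken from the CRediT taxonomy itself:

```python
# Hypothetical contributor records as they might be captured at submission.
# Names and ORCID iDs are placeholders; "credit_roles" uses CRediT labels.
contributors = [
    {
        "name": "A. Researcher",
        "orcid": "https://orcid.org/0000-0000-0000-0000",  # placeholder iD
        "credit_roles": ["Conceptualization", "Methodology", "Writing - original draft"],
    },
    {
        "name": "B. Collaborator",
        "orcid": "https://orcid.org/0000-0000-0000-0001",  # placeholder iD
        "credit_roles": ["Software", "Formal analysis", "Data curation", "Visualization"],
    },
]
```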

What changes have we seen?

If we can understand how collaborations work and when, we can properly incentivise the sorts of behaviours and collaborations that might make breakthroughs more commonplace

Since its launch in 2014, there has been considerable support for CRediT’s pragmatic way to provide transparency and discoverability to research contributions, and importantly build this into the scholarly communication infrastructure at minimal effort to researchers.  The standards organisation, CASRAI (Consortia Advancing Standards in Research Administration), is the custodian of the CRediT taxonomy, and many organisations are already using the taxonomy.  In 2016 PLOS implemented the CRediT taxonomy for authors across all its journals; Cell Press have endorsed the use of the roles amongst their ‘authors’; Aries Systems includes the taxonomy in its Editorial Manager manuscript submission system; and F1000 are implementing the taxonomy across their open research publishing platforms during 2017.

If others follow, this means that we will be able to tie contributions to collaborations, to outputs and to impact. Collaborations are considered by policymakers and funding agencies to be increasingly crucial ways to tackle complex scientific problems and global challenges. If we can understand how collaborations work and when, we can properly incentivise the sorts of behaviours and collaborations that might make breakthroughs more commonplace and potentially speed up the translation to tangible impacts. And for ‘science of science’ enthusiasts like me, it will take us a small, but helpful, step closer to being able to understand how science and research works.

I will be talking about the ‘connected culture of collaboration’ in a webinar on Thursday 6th April. Find out more about the event and sign up here.

Envisioning future scholarly communication: The Vienna Principles

In June 2016, we published the Vienna Principles: A Vision for Scholarly Communication in the 21st Century. The set of twelve principles describes the visions and foundations of a scholarly communication system that is based on the notion of openness in science, including the social sciences and humanities.

Open science demands the highest possible transparency, shareability and collaboration in knowledge production, as well as in the evaluation of scientific knowledge and impact. The principles are designed to offer a coherent frame of reference to the often controversial debates on how to improve the current system of scholarly communication.

Mindful of the fact that systems of communication shape the very core of scientific knowledge production, we set out to envision guiding principles for scientific practice that we really want. In this post, we’d like to introduce the principles and provide context on how they came about. We’ll also share our ongoing work of turning the vision into practice.

“What science becomes in any historical era depends on what we make of it” — Sandra Harding, Whose Science? Whose Knowledge? (1991)

 

Focusing on the benefits of openness

Our work started in Vienna during the spring of 2015, when the Open Access Network Austria (OANA) commissioned the working group “Open Access and Scholarly Communication” to sketch a vision of how open science can change scholarly communication in the long run. Over the year, we had five further meetings, each of them in a different Viennese location, hence the name “Vienna Principles”.

The group consisted of a diverse set of people, including librarians, science administrators, students and researchers from a wide range of disciplines, including arts & humanities, engineering, natural sciences and social sciences in both basic and applied contexts.

Open science is still a fuzzy concept for many. People are often either unclear about its benefits or are overwhelmed by the challenges that come with it.

Many working group members are involved in related initiatives, such as Citizen Science Austria, Open Knowledge, Creative Commons and OpenAIRE, to name just a few, and several have a relevant professional background, including publishing and software development. The core group consisted of nine participants, but the overall work involved contributions and feedback by more than 20 people and the audiences of the 15th Annual STS Conference, Graz and the 3rd Plenary of the Open Access Network Austria.

At the beginning, there were a number of observations based on our own involvement in open science and on the experience of group members who had joined the movement only very recently. Our first observation was that open science is still a fuzzy concept for many. People are often either unclear about its benefits or are overwhelmed by the challenges that come with it. Therefore, they tend to have a reserved attitude towards openness.

Many of the arguments carry implicit assumptions about the structures of a future scholarly communication system

Our second observation was that the debate within the open science community is not necessarily focused on the benefits of openness, but mostly on what constitutes openness, how to achieve openness, and what steps to take next.

The classic debate around the “green” and the “gold” route to open access is a good example of this. In these discussions, many of the arguments carry implicit assumptions about the structures of a future scholarly communication system, alongside highly emotional debates about the commodification of scientific knowledge distribution.

 

What do we really want?

There is currently no commonly agreed set of principles that describes the system of open scholarly communication that we want to create. Such a collection of widely shared cornerstones of the scholarly communication system would help to better guide the debate around open science. At the same time, a vision is needed that better conveys the need for openness in scholarly communication to academia and society.

For the definition of the principles, we adopted a clean slate approach. This means that we set out to describe the world that we would like to live in, if we had the chance to design it from scratch, without considering the restrictions and path dependencies of the current system.

We established a set of twelve principles describing the cornerstones of open scholarly communication

Our aim was to be clear, concise and as comprehensive as possible, without repeating ourselves. What followed was an intense phase, where we devised and revised, expanded and reduced, split and merged. We also addressed and incorporated the valuable feedback that we received by so many.

From this, we established a set of twelve principles describing the cornerstones of open scholarly communication. This is just the beginning: this is version 1.0, and we invite everyone to comment on it.

 

What next?

Our paper has been positively received. Besides hundreds of tweets linking to the publication on Zenodo, newspapers and blogs have also reported on it, including articles that have been partially translated into Spanish, Japanese and German. The PDF on our website alone has been annotated 58 times. We are delighted that several researchers are now trying to adopt the principles in their research and collaboration projects.

We hope to be able to illustrate the best practices and identify any obstacles to open science in the scholarly communication system

The working group now consists of 16 new active members, some of whom are consolidating the latest feedback received in recent months, while others are devising recommendations on turning each principle into reality. This allows us to study and discuss the different attitudes towards the twelve principles in a range of disciplines, especially in those fields which seem most sceptical about the principles, such as historical and art-related subjects.

We will consider stakeholders’ viewpoints, clarify legal framework conditions, and discuss incentive and reward systems to identify how the principles can best be applied throughout institutions. In doing so, we hope to be able to illustrate the best practices and identify any obstacles to open science in the scholarly communication system.

We plan to hold group discussions and workshops with stakeholders, publishers and funders to explain how the principles could support the services they offer and to articulate the relevance of these principles to different stakeholders’ needs.

We aim to have an updated version of the Vienna Principles by 2018

Furthermore, we are coordinating our efforts with other groups, such as the Force11 working group on the Scholarly Commons and SPARC Europe. By 2018, we aim to have an updated version of the Vienna Principles and several recommendations to support the adoption of open science based on the feedback obtained from the workshops and discussion groups.

We are looking forward to shaping the scholarly communication system of the future together with all of you.
