Since December 2016, Editage Insights has been running a survey of scholars to gauge their opinions on academic journal publishing. In this interview, Clarinda Cerejo, editor-in-chief, shares preliminary findings from the first 5,000 responses.
A quick Google search on virtually any topic yields a list of information sources on the search engine’s first page. Traditionally, scholarly publishing has relied on peer-reviewed journal articles as references in research and academic writing. A fair and rigorous peer-review process guarantees credible and verified content. However, a new dichotomy has…
As part of a continuing series, on Thursday 25th May we hosted a Digital Science thought leadership webinar discussing trends in cloud-based computing, the importance of failure in innovation, and how this can lead to great science. We discussed the benefits of investing in cloud-based applications and infrastructure, and what industry leaders like Amazon and CERN are planning to develop over the next five years.
Our speakers included:
- Host: Laura Wheeler – Head of Digital Communications, Digital Science
- Steve Scott – Director of Portfolio Development, Digital Science
- Brendan Bouffler, Global Manager, Amazon Web Services Research Cloud Program
- Tim Bell, Group leader of the Computer and Monitoring group within the IT department, CERN
- Dan Valen, Product Manager, Figshare
Laura Wheeler (@laurawheelers) kicked off by giving a brief overview of the esteemed panel and their backgrounds before handing over to Steve Scott, our first speaker. Steve started the webinar by providing an overview of the cloud and its various applications, drawing on a rather tasty metaphor – making a pizza!
After walking the audience through the cloud-based infrastructures, Steve reflected on an important point: researchers now require a vastly different set of skills than they once did.
“Researchers of the past were not necessarily associated with the finer details of IT procurement – it wasn’t a hot topic for them. That’s changing. Scientists must now navigate the landscape of digital information management…One of the challenges for the cloud services is about training for researchers.”
Steve elaborated with some important points:
- Physical hardware is on the way out! It takes a huge amount of time to install and can become redundant or out of date very quickly – with cloud-based services you rent the hardware, and deployment of services can be instantaneous.
- Storing large amounts of data is a challenge. A USB drive can easily be lost or become corrupted. Lab notes taken in paper notebooks are perishable.
Quote from a lab at a national institute:
“We discovered a promising new way to run a major experiment we carried out seven years ago and needed to find the original datasets used. No-one could!”
Steve finished his presentation by talking about the collaborative nature of the cloud. Teams from across the globe are able to work on datasets easily and efficiently in real time!
Next up, we had Brendan “Boof” Bouffler, Global Manager of the Amazon Web Services Research Cloud Program, who started with a funny anecdote about Jeff Bezos and the origins of Amazon.
“I think that when Jeff Bezos set the company up he may not have expected to have a research computing team inside the company but he got one! We actually got there because we discovered that in order to do this stuff [Amazon’s online shopping] we had to get very good at building big data centres. When you build big data centres, you start to find out what you can do with them…My team’s task is to make this crazy business applicable to the scientists and to the community.”
Brendan then talked about the fundamental steps in the scientific method and how the tools he helps create amplify these processes, allowing scientists to work more efficiently, ask more questions, and get quicker answers. It’s also important to mention that scientists hoping to process data are able to do so on a number of different platforms. If one doesn’t work, try another!
Image: Brendan Bouffler
Just like in all endeavors – failure is a chance to learn. Every time you carry out the scientific method, you get closer to finding out how the universe works.
Brendan referenced the pace at which technologies progress and how he envisages computing becoming a commodity that you can use on demand, paying only for the time you use. Imagine a world, Brendan says, where every time we needed electricity to power our electronics we had to spin up a generator out in our back gardens – apply this same logic to computing! Brendan spent the rest of his presentation commenting on what his team is focused on doing now and in the future. He also ran through the AWS platform. The possibilities that cloud computing offers industries like medicine are endless – it’s changing the way we cure disease!
Image: Brendan Bouffler
Brendan closed his presentation by pointing to the Amazon Web Services (AWS) Researcher’s Handbook he has helped create. The AWS Research Cloud Program helps you focus on science, not servers! Join the AWS Research Cloud Program and download the handbook here.
Next up, we had Tim Bell, Group leader of the Computer and Monitoring group within the IT department at CERN who walked us through CERN’s current cloud computing uses and its plans for the future.
But before that, Tim introduced The Large Hadron Collider!
Image Credit: Tim Bell
“Over the space of the past five years, we’ve been deploying a private cloud that allows the physicists to become familiar with the use of cloud technologies, and to be using those technologies in order to be driving their workloads through software rather than having people running around the computer centre with cables!”
Tim stated that it is very important that computing power keeps up with what the physics requires. He then ran through the ways in which CERN is pursuing commercial opportunities by partnering with other cloud-based services like Google and Amazon.
Image Credit: Tim Bell
It’s not just about infrastructure though. Tim mentioned the software service solutions running on top of the CERN cloud. One example is Zenodo, an online repository created to help researchers at institutions of all sizes share results in a wide variety of formats across all fields of science. Tim also mentioned the CERN Open Data Portal, where anyone can access curated LHC data and become a physicist for a day! It’s a useful tool that schools are using to teach pupils how professional scientists work.
Tim then talked about the challenges facing CERN today:
- Purchasing cloud resources through public procurement is very hard.
- Cost models in which access to a data set carries a charge.
- Data-intensive science.
- The skills combination is changing over time: CERN now needs contract managers, rather than employees, to change the disks and install the machines.
- Scale – finding methods that allow CERN to keep scaling.
Our final speaker was Dan Valen, Product Manager at Figshare, who focused his presentation on why Figshare chose to be a cloud-based platform and why it chose AWS as its cloud provider.
“Dealing with different file types has always been a challenge at Figshare. Taking on the tasks of handling large file uploads – specifically now that we can handle file uploads of up to five terabytes. To then build file previews to visualize them in the browser and ingest and expose the different files and metadata requires a solid infrastructure.”
Dan went on to explain why cloud-based data storage is vitally important to everyone from librarians and scientists to publishers and university administrators, drawing on a list of data storage horror stories.
There’s a real danger in storing items locally without a data preservation and archiving policy in place. Dan went on to state the key benefits of cloud-hosted services: predictable cost over time and low maintenance. The most important benefit, however, is excellent security – AWS is able to offer the same types of security it offers to some of the world’s most data-sensitive organizations! The rest of Dan’s presentation ran through Figshare’s infrastructure and its storage workflows.
The webinar ended with a lively Q&A debate spearheaded by Laura Wheeler; great questions drew great responses! Using #DSwebinar, our audience was able to interact with our panel, throwing their opinions into the mix. If you feel you still have something to say – we’re all ears! Tweet us @digitalsci using #DSwebinar.
We are excited to announce several product enhancements that will help you derive even more value from TetraScience. Customer experience is extremely important to us. We developed all of the changes below in direct response to ideas and feedback from customers like you, so please stay in touch as you use the product.
This blog will cover:
- New navigation and mobile UI
- How to group devices
- How to group users into teams
- New alert options
Note: this product release requires no action from you, the user. We do recommend you read the overview below to help you get acquainted with the updated interface.
New navigation and mobile UI
The most striking update is our new top navigation bar. Each of the menu items that used to be on the left side of the screen is now within the app picker at the top of the screen. This bar gives you a single place to access all of the different apps within TetraScience, each of which provides a different view of the data that you capture with the TetraScience platform. As we release new apps, they will appear within the top navigation bar. The top bar is also the new home for the alert icon, which flashes during an active incident.
In the process of updating the top navigation, we also improved the usability and performance of our mobile experience. Many of you let us know that you wanted improved usability for your mobile device, so this will be a core design consideration going forward.
We renamed a few things
Many of you use our device panels to explore and download time series device data. The screen with all of the device panels was previously called “All Devices.” To more accurately reflect the contents of the “All Devices” screen, we renamed it to “Device Panels.”
Within our Device Panels screen (formerly called “All Devices”), you could create groupings of devices called “dashboards.” We renamed dashboards to “device groups” to broaden their future uses. You will also find that device groups now appear on the Lab Monitoring screen, which should help you focus on the devices that matter to you no matter where you are within TetraScience.
Organizations…what are those?
A TetraScience organization is a collection of users and the devices those users can access. All permissions in TetraScience are managed at the organization level. By default, all users can see all devices. Within an organization, each user has one of three roles: admin, owner, or member. Admins have full control over the organization and configure it for other users. The owner can do everything admins can do, and can also make other users into admins. Members can see data, but cannot change settings that affect other users.
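As a rough illustration of the permission model described above (all names here are hypothetical – this is not the TetraScience API), the three roles and their capabilities could be sketched like this:

```python
from enum import Enum


class Role(Enum):
    MEMBER = 1  # can see data, cannot change org-wide settings
    ADMIN = 2   # full control over the organization's configuration
    OWNER = 3   # everything an admin can do, plus promoting admins


def can_configure_org(role: Role) -> bool:
    """Admins and the owner may change settings that affect other users."""
    return role in (Role.ADMIN, Role.OWNER)


def can_promote_to_admin(role: Role) -> bool:
    """Only the owner may grant the admin role to other users."""
    return role is Role.OWNER
```

The key design point is that the owner is a strict superset of admin: every admin check passes for the owner, but not the other way around.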
Teams (groups of users)
Less noise for your users
Organization administrators can now choose to limit the visibility of any device group to specific groups of users, known as teams. An organization member’s list of device groups will only show the device groups that admins configure to be visible. If needed, any user can go into the “All Devices” device group to find a specific device.
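In rough terms (hypothetical names, not the actual TetraScience API), the visibility rule described above amounts to a simple intersection between a user's teams and each device group's allowed teams:

```python
def visible_device_groups(user_teams, device_groups):
    """Return the names of device groups the user can see.

    device_groups maps a group name to the set of team names allowed
    to see it; an empty set means the group is visible to everyone
    (like the catch-all "All Devices" group).
    """
    return [name for name, allowed in device_groups.items()
            if not allowed or allowed & set(user_teams)]
```

For example, a user on the (hypothetical) `lab-ops` team would see only the groups admins have opened to that team, plus any unrestricted groups.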
More powerful administrative utilities
You can now manage all of the device groups and teams in your organization from the My Organization screen. You can preconfigure your device groups and teams, so when a new user starts at your company, setting them up is as simple as adding them to the appropriate teams.
More sophisticated alert behavior
You can now send alerts to selected teams. Previously, alerts went either to one user (the creator of the trigger) or to all users in an organization. This change helps you significantly reduce noise for your users, configure simpler alerts, and manage your alerts more easily. It also allows you to create escalation rules in case an incident is not rapidly resolved.
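A minimal sketch of team-scoped alerting with a time-based escalation rule might look like the following (function names, the team structure, and the 30-minute grace period are all my own assumptions, not TetraScience features):

```python
from datetime import datetime, timedelta


def recipients_for(alert_teams, team_members):
    """Fan an alert out to the members of the selected teams only."""
    users = set()
    for team in alert_teams:
        users.update(team_members.get(team, ()))
    return users


def should_escalate(triggered_at, resolved, now,
                    grace=timedelta(minutes=30)):
    """Escalate when the incident is still open past the grace period."""
    return not resolved and now - triggered_at > grace
```

The point of the sketch: routing by team keeps the recipient set small, and the escalation check is a pure function of the incident's age and state, so it is easy to run on a schedule.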
All of this new functionality was developed in direct response to feedback from you. Please keep the feedback and ideas flowing, so we can continue to improve!
Researchers are under more pressure than ever before to secure the money they need to do their work. The funding exists: the predicted worldwide spend on research in 2016 was $1.9 trillion, an increase of 3.4% on the previous year.
But with so many grants available in such a myriad of subjects via such a large variety of institutions, how can a researcher match their aspirations to the right opportunity?
We’re pleased to announce the launch of Mendeley Funding.
Mendeley Funding is a new tool which catalogues funding opportunities from across the globe. It includes calls for proposals from prominent organisations including the European Union, government departments in the United States like the National Institutes of Health, UK Research councils, and many more.
By using Mendeley Funding, researchers can:
- Search for relevant funding
- Save interesting opportunities
- Access detailed information about funders
For more information, visit http://www.mendeley.com/funding. Then sign in to Mendeley, access the tool by using the link marked “Funding” in the toolbar, and get searching. A world of opportunities awaits you.
We’re always working to make Overleaf better by introducing new features and improving existing ones. Here’s a short update on what we’ve been up to lately:
- My Projects Dashboard Pagination Controls
- Journals and Services Menu – Academic Journals Search
- Project Views Count
- Performance Improvements
Click the links above to jump straight to that section, or click below to read the full post.
We suggest a centralized facility for submitting to journals—one that would benefit scientists and not only publishers.
May 10, 2017
“We envision a single web page where we could submit all manuscripts for publication to any reputable journal. Each manuscript would receive a unique identifier, and the following very few boxes to fill: ORCID or ResearcherID identifier for each author, abstract, cover letter, and a short list of potential reviewers. And maybe a few tick boxes signifying author agreement, ethics, and regulatory compliance. A single PDF of the manuscript and supporting material could be uploaded and a journal chosen from a drop-down list. Presumably, such a centralized facility could also be used for distributing manuscripts for review, and would serve the added benefit of a “paper” trail for different journal editors to follow the rejection history of any manuscript, possibly even sharing reviews.”
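The handful of fields the quote envisages would fit in a single small record. As a sketch only (the field names are mine, not part of any actual proposal), the whole submission form could be captured like this:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Submission:
    """One manuscript in the envisaged centralized submission facility."""
    manuscript_id: str        # unique identifier assigned on submission
    author_ids: List[str]     # ORCID or ResearcherID per author
    abstract: str
    cover_letter: str
    suggested_reviewers: List[str] = field(default_factory=list)
    journal: str = ""         # chosen from a drop-down list
    pdf_path: str = ""        # single PDF incl. supporting material
    compliance_ok: bool = False  # tick boxes: agreement, ethics, regulation
    # the "paper" trail editors could follow across journals:
    rejection_history: List[str] = field(default_factory=list)
```

Keeping the rejection history on the record itself is what would let a later journal's editors see where the manuscript had been before, as the authors suggest.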
It is widely acknowledged that submitting a paper to a journal is a fraught activity for authors. But why should this still be the case? James Hartley and Guillaume Cabanac argue that the process has always been complicated but can, with a few improvements, be less so. By adopting standardised templates and no longer insisting on articles being reformatted, the submission process can quickly be simplified.
An author survey shows that publication speed and the ability to share a variety of research outputs are the primary reasons why authors publish on the Wellcome Open Research publishing platform. Michael Markie, Publisher at F1000, and Robert Kiley, Head of Open Research at Wellcome, discuss the survey results and what actions will be taken based on them.
Wellcome Open Research has now been publishing for just over six months and to date has published 63 articles.
The platform was specifically developed for Wellcome-funded researchers to explore the benefits of immediate publication of articles and other research outputs with no editorial bias, followed by an author-led, transparent, peer review process. As this approach differs somewhat from the traditional publishing model we were keen to reach out to those authors who had used this platform to understand their motivations for publishing here, what they liked and which aspects could be improved.
Consequently, in April 2017 we invited the first 50 submitting authors on Wellcome Open Research to participate in a short survey. We received an impressive 84% response rate, and the survey results can be accessed here. Below is a summary of the major findings and what we have learnt from our authors.
What have we learnt?
The authors’ experience with the editorial office and the overall publication process was very positive. A clear majority said that the submission process was efficient and that they were very satisfied with the level of support, speed and responsiveness of the editorial team. Owing to this positive experience, most of the authors said they would recommend publishing on the platform to a colleague, and that they would be inclined to publish on it again.
Speed and variety of article types
We asked our authors explicitly what their primary reason for submitting to the platform was. The two standout reasons were the speed of publication and the fact that the platform publishes all research outputs – not just traditional research articles.
With regard to speed of publication, the median time from submission to publication is 19 days, whilst the median time from publication to when an article has passed peer review and is indexed in PubMed, PMC and Europe PMC is currently 31 days. The speed with which research findings become not only accessible but also discoverable through these major online platforms is a key factor for our authors, and one which is driving new submissions.
We are also very pleased that the platform is carving out a niche of publishing a variety of research types that the authors believe should be made publicly available. Currently half the articles we have published are not traditional research articles, but rather a rich mix of research outputs such as software tools, methods, protocols and data notes. Our authors have made it clear there is much research they would like to share with the community but can’t necessarily do so in a traditional journal; Wellcome Open Research is providing a useful venue to facilitate this.
Perception of peer review
The open peer review process – author led, with authors suggesting reviewers and engaging with them in an open, transparent way – is probably the biggest difference our authors experience when publishing on the platform and, not surprisingly, this aspect of the process is where we have received suggestions for how we can improve.
We received valuable feedback that our competing interests criteria for co-authors may be too stringent: in some cases previous co-authors and collaborators are the most appropriate people to review a certain article, and so shouldn’t be automatically excluded. This is a valid point and something we will look at more closely. Ultimately, we need to balance ensuring we receive an unbiased review against allowing the author to select the right reviewer for an article, who in some cases might be someone they have worked with in the past.
The survey also highlighted an interesting dilemma around attitudes to open peer review. Whereas only 14% of respondents disagreed with the statement that “the ability to select the referees improves the publication process”, a third of respondents felt that author-driven selection would result in less critical reviews. Whether this is the case is impossible to determine, but it is worth noting that reviewers have been prepared to “not approve” papers, and that the reviews – all publicly available – are on occasion highly critical.
Responding to survey results
Our authors also felt that the information about the peer review process could be clearer, especially with regard to how and when to respond to their online reviews and at what point they should make their revisions. In light of this, we intend to streamline the author’s user experience so they are fully aware of what steps are needed and when to take them. With the author having more autonomy in the peer review process, and in the absence of an editor, it is important that the instructions and tools we provide enable authors to navigate the process in a simple and intuitive way.
Finally, our referee finder tool was well received, though only half the authors made use of it. For those who did, it not only helped find potential reviewers but also helped identify new collaborators, by bringing the authors’ attention to research groups they were previously unaware of. In the words of one researcher:
“We chose referees relevant to the project, from the selector tool. In fact, one of them is now coming to do a seminar at my institution, so the process has also led to networking and potentially collaboration opportunities for us.”
With this in mind, we will work on making this tool more integrated and visible at the point where authors select reviewers for their article, as it seems to be a very good complement to their own suggestions and is helping ensure the right reviewers are selected.
We will continue to survey our authors as more of them publish on the platform. We thank those who participated this time around, and through this community feedback we will make the changes needed to keep improving the Wellcome Open Research platform.
London, January 26, 2015 – Editage, a leading provider of editorial and publication support services for the global research community and the flagship brand of Cactus Communications, and Overleaf, a rapidly growing online collaborative writing and publishing platform by WriteLaTeX, have entered into an agreement to allow Overleaf authors to directly transfer manuscripts to Editage for its entire range of language editing services.