TechBlog: eLife replaces commenting system with Hypothesis annotations

The next time you feel moved to comment on an article in the open-access online journal eLife, be prepared for a different user experience. On 31 January, eLife announced it had adopted the open-source annotation service Hypothesis, replacing its traditional commenting system. That’s the result of a year-long collaboration between the two organizations to make Hypothesis more amenable to the scholarly publishing community.

Read full story

eLife enhances open annotation with Hypothesis to promote scientific discussion online

eLife, in collaboration with Hypothesis, has introduced open annotation to enable users of its website to make comments, highlight important sections of articles, and engage with the reading public online. The open-source Hypothesis software has been extensively customised for use by eLife and other publishers: new moderation features, single sign-on authentication, and user-interface customisation options now give publishers more control over its implementation on their sites.

Read full post

Making Peer Review More Open

Traditional peer review relies on anonymous reviewers to thoughtfully assess and critique an author’s work. The idea is that blind review makes the evaluation process fairer and more impartial, but many scholars have questioned whether this is always the case. Open review has the potential to make scholarship more transparent and more collaborative. It also makes it easier for researchers to get credit for the work they do reviewing the scholarship of their peers. Publishers in the sciences as well as the humanities and social sciences have been experimenting with open review for almost two decades now, but it is only recently that open review seems to have reached a tipping point. So what exactly is open review, and what does it entail? Continue reading “Making Peer Review More Open”

PubFactory partners with Hypothesis to extend collaboration tools across the platform

We are thrilled to make Hypothesis annotation technology available across the PubFactory platform. Hypothesis’ mission, “To enable conversation over a world of knowledge”, so simply and precisely conveys the problem that they are tackling head-on. We like this – we like this a lot.

Continue reading “PubFactory partners with Hypothesis to extend collaboration tools across the platform”

OS Bazaar – Berlin 2017

On October 23rd 2017 in Berlin, we held a one-day FORCE pre-meeting open source bazaar in partnership with Hypothesis. The day opened with a discussion about replacing our currently siloed scholarly communications platforms and tools with a new ecosystem of open source technologies. This is the only way to transform the sector at scale, which is a job too large for any one organization or company.

Kristen opened the day with the key themes:

John Chodacki kept the day moving as the MC and we saw demos and presentations from an incredible lineup of projects:

Oh, and don’t forget: we also had a celebration and product showcase during lunch from groups across the annotation community (SciLite, Hypothesis/SciBot, Pundit, Xpansa, PaperHive, eLife, Profeza, and others). All in one day!

The tweets are all at #osbazaar. One tweet captured the essence of the day:

Kristen summarized the day at a session at FORCE with this presentation:

Was the inaugural OS Bazaar a rousing success? Was a good time had by all? Yes and yes. Any opportunity to come together and hear updates from the community about the incredible projects under development is well worth taking. We’re already looking forward to OS Bazaar #2!

Syndicating annotations

Steel Wagstaff asks:

Immediate issue: we’ve got books on our dev server w/ annotations & want to move them intact to our production instance. The broader use case: I publish an open Pressbook & users make public comments on it. Someone else wants to clone the book including comments. How?

There are currently three URL-independent identifiers that can be used to coalesce annotations across instances of a web document published at different URLs. The first was the PDF fingerprint, the second was the DOI, and a third, introduced recently as part of Hypothesis’ EPUB support, uses Dublin Core metadata like so:

<meta name="dc.identifier" content="xchapter_001">
<meta name="dc.relation.ispartof" content="org.example.hypothesis.demo.epub-samples.moby-dick-basic">

If you dig into our EPUB.js and Readium examples, you’ll find those declarations are common to both instances of chapter 1 of Moby Dick. Here’s an annotation anchored to the opening line, “Call me Ishmael.” When the Hypothesis client loads in a page served from either of the example URLs, it queries for two identifiers. One is the URL specific to each instance. The other is a URN formed from the common metadata, and it looks like this:

urn:x-dc:org.example.hypothesis.demo.epub-samples.moby-dick-basic/xchapter_001

When you annotate either copy, you associate its URL with this Uniform Resource Name (URN). You can search for annotations using either of the URLs, or just the URN, like so:

https://hypothes.is/search?q=url:urn:x-dc:org.example.hypothesis.demo.epub-samples.moby-dick-basic/xchapter_001
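To make that mechanic concrete, here is a minimal sketch (in Python, and not the client’s actual code) of how you could derive the URN from the two Dublin Core meta tags shown above and then fetch the annotations that have coalesced around it, using the public Hypothesis search API and its uri parameter. The inline HTML snippet stands in for a real chapter page.

# Sketch: derive the urn:x-dc identifier from Dublin Core metadata and ask the
# public Hypothesis search API for annotations associated with it. Illustrative only.
import requests
from bs4 import BeautifulSoup

chapter_html = """
<meta name="dc.identifier" content="xchapter_001">
<meta name="dc.relation.ispartof" content="org.example.hypothesis.demo.epub-samples.moby-dick-basic">
"""

def dc_urn(html: str) -> str:
    """Build the URN from dc.relation.ispartof and dc.identifier."""
    soup = BeautifulSoup(html, "html.parser")
    ident = soup.find("meta", attrs={"name": "dc.identifier"})["content"]
    part_of = soup.find("meta", attrs={"name": "dc.relation.ispartof"})["content"]
    return f"urn:x-dc:{part_of}/{ident}"

urn = dc_urn(chapter_html)
# urn == "urn:x-dc:org.example.hypothesis.demo.epub-samples.moby-dick-basic/xchapter_001"

# Ask for annotations made on any copy of the document that shares this URN.
resp = requests.get("https://api.hypothes.is/api/search", params={"uri": urn})
resp.raise_for_status()
for annotation in resp.json().get("rows", []):
    print(annotation["uri"], annotation.get("text", ""))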

Although it sprang to life to support ebooks, I think this mechanism will prove more broadly useful. Unlike PDF fingerprints and DOIs, which typically identify whole works, it can be used to name chapters and sections. At a conference last year we spoke with OER (open educational resource) publishers, including Pressbooks, about ways to coalesce annotations across their platforms. I’m not sure this approach is the final solution, but it’s usable now, and I hope pioneers like Steel Wagstaff will try it out and help us think through the implications.

Qualitative Data Repository Teams with Hypothesis to Develop Annotation for Transparent Inquiry (ATI)

Originally published 12 May 2017 on the QDR blog by Sebastian Karcher.

Scholars are increasingly being called on – by journal editors, funders, and each other – to “show their work.” Social science is only fully understandable and evaluable if researchers share the data and analysis that underpin their conclusions. Making qualitative social science transparent poses several knotty problems. The Qualitative Data Repository (QDR) and Hypothesis have partnered to meet this challenge by developing a new way to cite, supplement, and share the data underpinning published work.

The Challenge: Achieving Transparency in Qualitative Research

Three aspects of qualitative inquiry complicate transparency. First, qualitative data are multi-format and non-numeric (text, audio, video, pictures). Second, they are analyzed and used to support claims individually or in small groups: each insight drawn from one or a handful of cited sources (e.g., books, archival documents, interview transcripts, newspaper articles, video clips, etc.) serves as a distinct input to the analysis. Third, data, analysis, and conclusions are typically densely interwoven across the span of a book or article.

Qualitative Research – Individual Pieces of Data

Quantitative social science does not face the same challenges. Quantitative work involves the computational analysis of numeric data arranged in a matrix and approached as an aggregate body of information. The analysis is typically summarized in tabular form in the text or appendix of published work. To make quantitative publications transparent, scholars share the study dataset (and relevant information about its creation) and supplemental materials such as the code used for analysis.

Quantitative Research – Matrix Data

Making qualitative research similarly transparent requires resolving at least two problems: safely sharing non-numeric data that may come in multiple forms, and placing those data adjacent to the claims and conclusions in the text that they support. Traditionally, qualitative researchers showed at least some of their work in extended footnotes in which they cited the data they relied upon, provided supplemental information about how the data were analyzed and how they support their points, and provided extracts from those materials. Traditional footnotes are a sub-optimal solution, however. Tight space constraints severely limit what can be included, a problem made even more acute by the increasing use of in-text citation styles. Moreover, even where extracts of the evidence are included in long-form footnotes, there is no systematic way to ensure that the available underlying sources are held and curated in ways that make them accessible and useful to scholars.

The Solution: Annotation for Transparent Inquiry (ATI)

Annotation for Transparent Inquiry (ATI), developed through a partnership between QDR and Hypothesis, uses author-generated web annotations on academic publications. Annotations provide information about data analysis, excerpts from data sources, and links to underlying sources, housed in a data repository. The approach harnesses the power of open web annotations, displayed by Hypothesis. Authors annotate their work and deposit underlying data sources with QDR. The repository curates these deposits and converts them into a set of web annotations on the published article, and creates a data project (the aggregate of the underlying data sources). The annotations can be viewed alongside the article using the Hypothesis client, and interested readers can access the underlying data sources archived at QDR.

Annotation for Transparent Inquiry

The new collaboration between Hypothesis and QDR is already bearing fruit. You can see an example of scholarship annotated using ATI here. This is a working paper by Sam Handlin (Department of Political Science, University of Utah), “The Politics of Polarization: Governance and Party System Change in Latin America, 1990-2010,” published by the Kellogg Institute at the University of Notre Dame. The annotations you see on the side are served by Hypothesis. QDR curated the annotations and provides access to the underlying files, e.g. for this annotation.

Further, working with the Agile Humanities Agency, QDR has developed the ability to append #annotations:query:<search phrase> to a link so that, when the page is loaded through the Hypothesis proxy service, only a matching subset of annotations is shown. QDR uses this feature to present links to the set of annotations that make up the qualitative data underlying an article by limiting the view to annotations created from QDR’s Hypothesis account. You can see this at work in the link to Sam Handlin’s paper above.
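For readers who want to construct similar links, here is a small sketch of the pattern, assuming the standard via.hypothes.is proxy and the #annotations:query: fragment described above; the article URL and search phrase are placeholders rather than QDR’s actual links.

# Sketch: assemble a Hypothesis proxy link whose fragment filters the sidebar to a
# subset of annotations. The article URL and query below are placeholders.
from urllib.parse import quote

def filtered_proxy_link(article_url: str, query: str) -> str:
    """Load article_url through via.hypothes.is and filter annotations by query."""
    return f"https://via.hypothes.is/{article_url}#annotations:query:{quote(query)}"

print(filtered_proxy_link("https://example.org/handlin-working-paper", "data sources"))
# -> https://via.hypothes.is/https://example.org/handlin-working-paper#annotations:query:data%20sources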

Looking ahead, QDR will hold two workshops in late 2017 and early 2018 focused on evaluating and further developing ATI. The workshops are funded by a grant that the Robert Wood Johnson Foundation has awarded to the Qualitative Data Repository to pilot and promulgate ATI and to encourage its use.

Further, QDR and Hypothesis are hoping to address the challenges created by the fact that a large share of academic literature in the hard and social sciences resides behind a paywall, with access provided only to particular IP ranges known to be associated with institutions that pay for it. Finding user-friendly solutions to allow viewing annotations on paywalled material is therefore high on our agenda. We hope to draw on our partnership with a wide range of academic publishers in the “Annotating All Knowledge” coalition to develop those solutions. While our immediate interest is motivated by rendering qualitative research transparent, the annotation of academic literature will benefit a much broader scholarly community.

QDR and Hypothesis will also work towards facilitating third-party authentication to the Hypothesis platform. For QDR, the ability to authenticate users against its own user base is critical to limit access to sensitive material that may be stored in annotations, e.g. in the form of interview excerpts.
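To give a sense of what such third-party authentication could involve, here is a rough sketch modeled on Hypothesis’s publisher-account approach, in which a partner site signs a short-lived grant token asserting who its logged-in user is, and the embedded client exchanges that token for an annotation session. The claim names, lifetime, credentials, and authority shown here are assumptions for illustration and should be checked against current Hypothesis documentation.

# Sketch: a partner site (e.g. QDR) issuing a short-lived grant token for a user in
# its own authority. Claim names and lifetime are assumptions; verify against the
# current Hypothesis publisher-account documentation before relying on them.
import datetime
import jwt  # PyJWT

CLIENT_ID = "your-hypothesis-client-id"          # issued by Hypothesis (placeholder)
CLIENT_SECRET = "your-hypothesis-client-secret"  # shared signing secret (placeholder)
AUTHORITY = "qdr.example.org"                    # hypothetical third-party authority

def grant_token(username: str) -> str:
    """Sign a token asserting that `username` is logged in on the partner site."""
    now = datetime.datetime.utcnow()
    claims = {
        "aud": "hypothes.is",
        "iss": CLIENT_ID,
        "sub": f"acct:{username}@{AUTHORITY}",
        "nbf": now,
        "exp": now + datetime.timedelta(minutes=10),
    }
    return jwt.encode(claims, CLIENT_SECRET, algorithm="HS256")

# The embedded Hypothesis client would be configured to pass this token when it
# starts up, so annotations are created under the partner's own user accounts.
print(grant_token("researcher42"))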

Weaving the annotated web

In 1997, at the first Perl Conference, which became OSCON the following year, my friend Andrew Schulman and I both gave talks on how the web was becoming a platform not only for publishing, but also for networked software.

Here’s the slide I remember from Andrew’s talk:

http://wwwapps.ups.com/tracking/tracking.cgi?tracknum=1Z742E220310270799

The only thing on it was a UPS tracking URL. Andrew asked us to stare at it for a while and think about what it really meant. “This is amazing!” he kept saying, over and over. “Every UPS package now has its own home page on the world wide web!” Continue reading “Weaving the annotated web”

The GO FAIR Initiative

Originally posted at Pundit.

The diffusion and public endorsement of data FAIRness have been rapid. The FAIR Data Principles were published in late 2014 and early 2015. In 2015 at their summit in Japan, the European Council and the G7 adopted Open Science and the reusability of research data as a priority, thus providing fertile ground for their uptake. Finally, the European Commission, along with Big Data to Knowledge (BD2K), Science Europe, and the G20 at the 2016 Hangzhou summit, all endorsed data FAIRness (Mons et al., 2017).

Despite this, the actual numbers on data FAIRness are still insufficient and disappointing: in his recent LIBER webinar “Are FAIR Data Principles FAIR?”, Alastair Dunning reported data highlighting that practice is still far from theory. Looking across open repositories, 41% of their data are findable and 76% are accessible, but only 38% are interoperable and 18% reusable. In particular, 49% of the repositories do not assign a persistent unique identifier to data sets. So compliance is not high. Some of the principles are easy to measure; others are much more subjective. There are still many open issues, and the definition itself is open to interpretation, as explained by its promoters here.

The Global Open (GO) FAIR Initiative is among the efforts aiming to put Open Science into practice: a bottom-up initiative to start working in a trusted environment where public- and private-sector partners can deposit, find, access, exchange, and reuse each other’s data, workflows, and other research objects.

There are many issues to be addressed, with more social limitations and obstacles than technological ones. In practice, the GO FAIR implementation approach is based on three interactive processes/pillars:

The first pillar is go change: a cultural change is needed, in which open science and the principles of data findability, accessibility, interoperability, and reusability become a common way of conducting science.

Cultural change can be achieved through training focused on locating, creating, maintaining, and sustaining the required core data expertise.

The aim of the second pillar, go train, is to have core certified data experts, and to have in each Member State, for each discipline, at least one certified institute to support the implementation of data stewardship.

The last pillar, go build, deals with the need for interoperable and federated data infrastructures and the harmonization of standards, protocols, and services, which enable all researchers to deposit, access, and analyse scientific data across disciplines.

One of the main goals of the implementation network on annotation will be to make annotations and graphs reusable in a single interoperable environment, independently of the client and the server people are using.

The GO FAIR Initiative is supported by the Annotating All Knowledge Coalition, Pundit and Hypothes.is.

Read more on GO FAIR here:

https://www.dtls.nl/go-fair/

Access the full documentation here:

https://www.dtls.nl/documents/

Fill in the survey and support the initiative:

https://www.dtls.nl/survey/

 

The Pedagogy of Collaborative Annotation

In our first Canvas webinar introducing the Hypothesis app, we didn’t have enough time to discuss the most interesting aspect of collaborative annotation: its pedagogy. On 19 April, we reconvened to focus more on what actually happens when working with this new technology in the classroom, hearing directly from educators currently implementing collaborative annotation in their classrooms, both inside and outside the LMS. Topics included:

  • how to prompt student annotations
  • annotation assignments and rubrics
  • encouraging student collaboration
  • multimedia writing
  • scaffolding annotation practice across a term
  • to grade or not to grade annotations
  • leveraging annotation work in other curricular projects
  • and much more

You may have missed our live webinar, but you can watch the recording and view the slides to learn more about the pedagogical value of collaborative annotation.

Pilot Hypothesis in Canvas


Presenters

Webinar Links

Join us May 3-6 in San Francisco at I Annotate 2017, the fifth annual conference for annotation technologies and practices. This year’s themes are: increasing user engagement in publication, science, and research; empowering fact checking in journalism; and building digital literacy in education.

UKSG 40: The Temple of Change

The sunny but sometimes chill air of Harrogate this week was a good metaphor for the scholarly communications marketplace. Once the worshippers at the shrine of the Big Deal, the librarians and information managers who form the majority of the 950 or so attendees now march to a different tune. From the form of the article to the nature of collaboration, this was a confident organization talking about the future of the sector. And at no point was this a discussion about more of the same. Three sunny days, but for publishers present there was an occasional chill in the wind.

I started the week with a particular purpose in mind, which was all about the current state of collaboration. I was impressed by the Hypothes.is announcement with HighWire (www.highwire.org). There are now some 3,000 journals using open source annotation platforms like the not-for-profit Hypothes.is to encourage discoverable (and private) annotation. Not since Copernicus, when scholars toured monasteries to read and record annotations of observations of the galaxies in copies of his texts, have we had the ability to track scholarly commentary on recent work and work in progress so completely. And no sooner had I begun talking about collaboration as annotation than I met people willing to take the ideas further, into the basis of real community-building activity.

It seems to me that as soon as a journal publisher has imported an annotation interface, it is inviting scholars and researchers into a new relationship with its publishing activity. And for anyone who seeks a defence against the perceived threat of ResearchGate or Academia.edu, the answer must lie in building patterns of collaborative annotation into the articles themselves, and becoming the intermediary in the creation of community dialogue at the level of issues in the scholarly workflow. So it seemed natural that my next conversation was with the ever-inventive Kent Anderson of RedLink, who was able to show me Remarq, in its beta version and due to be formally launched on 1 May. Here discoverable annotations lie at the base of layers of service environments which enable any publisher to create community around annotated discussion and turn it into scholarly exchange and collaboration. We have talked for many years about the publishing role moving beyond selecting, editing, issuing and archiving – increasingly, I suspect, the roles of librarians – and moving towards the active support of scholarly communication. And this, as Remarq makes clear, includes tweets, blogs, posters, theses, books and slide sets as well as articles. Services like Hypothes.is and Remarq are real harbingers of the future of publishing, when articles appear on preprint servers and in repositories or from funder Open Access outlets, where the subject classification of the research is less important than who put up the research investment.

And, of course, the other change factor here is the evolution of the article (often ignored – for some reason we seem to like talking about change but are reluctant to grip the simple truth that when one thing changes – in this case the networked connectivity of researchers – then all the forms around it change as well, and that includes the print heritage research article). Already challenged by digital inclusivity – does it have room for the lab video, the data, the analytics software, the adjustable graphs and replayable modelling? – it now becomes the public and private annotation scratchpad. Can it be read efficiently by a computer and discussed between computers? We heard reports of good progress on machine readability using open science Jupyter notebooks, but can we do all we want to fork or copy papers and manipulate them while still preserving the trust and integrity in the system derived from being able to identify what the original was and always being able to revert to it? We have to be able to use machine analysis to protect ourselves from the global flood of fresh research – if the huge agenda was light anywhere, then it was on how we absorb what is happening in India, China, Brazil and Russia into the scholarly corpus effectively. But how good it was to hear from John Hammersley of Overleaf, now leading the charge in connecting up the disconnected and providing the vital enabling factor to some 600,000 users via F1000 and thus in future the funder-publisher mills of Wellcome and Gates, as well as seeing Martin Roelandse of Springer Nature demonstrating that publishers can potentially join up dots too with their SciGraph application for relating snippets, video, animations, sources and data.

Of course, connectivity has to be based on common referencing, so at every moment we were reminded of the huge importance of Crossref and ORCID. Incontrovertible identity is everything, and I was left hoping that ORCID can fully integrate with the new Crossref Event Data service, using triples in classical mode to relate references to relationships to mentions. Here again, in tracking 2.7 million events since the service’s inception last month, they are already demonstrating the efficacy of the New Publishing – the business of joining up the dots.

So I wish UKSG a happy 40th birthday – they are obviously in rude health. And I thank Charlotte Rouchie, the closing speaker, for reminding me of Robert Estienne, whom I have long revered as the first master of metadata. In 1551 he divided the Bible into verses – and, to better compare Greek with Latin, he numbered them. Always good to recall the revolutionaries of the past!

PS. In my last three blogs I have avoided, I hope, use of the word Platform. Since I no longer know what it means, I have decided to ignore it until usage clarifies it again!

HighWire and Hypothesis Partner to Bring Annotation to Publishers

Today Hypothesis and HighWire Press are announcing a partnership to bring a high-quality, open annotation capability to over 3,000 journals, books, reference works, and proceedings published on HighWire’s JCore platform.

Annotation is a fundamental activity of researchers and scholars everywhere—from taking notes, collaborating with peers, and performing pre-publication reviews, to engaging in conversations with the broader community. Until now, solutions for journals have been limited, proprietary and siloed in ways that significantly constrain their utility. With the advent of a standards-based, open source and interoperable annotation paradigm, that is now changing.

Hypothesis, a non-profit annotation technology organization launched in 2011, is working with publishers, educators, researchers, and journalists to enable annotation across the internet. Within scholarship, use cases include post-publication annotation and community review; authors’ notes over their own work, including updates to previous articles; invited discussions; pre-publication peer review; enhanced footnotes; corrections and errata; and more. More than 70 major publishers, platforms and technology organizations have come together in support of this interoperable vision under the Annotating All Knowledge coalition.

Through this partnership, HighWire publishers will be able to implement and control their own annotation layers: moderated, branded, and visible by default over their publications. Annotations can be made either under existing publisher user accounts or within the Hypothesis namespace.

Dan Whaley, Hypothesis CEO and Founder, will be presenting as part of the Partner Showcase at the HighWire Publisher’s Meeting on 5 April 2017 and will be available for more information at the Partner Reception.

“Hypothesis is excited to work with HighWire to deliver a powerful toolchain across publisher content,” says Whaley. “By making annotation native to scholarly content at the platform level we stand the best chance of fulfilling the vision of an interoperable collaborative layer over all scholarship.”

HighWire publishers that are interested in bringing Hypothesis annotations to their publications should contact Heather Staines, Director of Partnerships.

About Hypothesis

The Hypothes.is Project is a San Francisco-based, non-profit software company focused on enabling humans to reason more effectively together through a shared, collaborative discussion layer over all knowledge. Learn more about Hypothesis online.

About HighWire Press

A leading ePublishing platform, HighWire Press partners with independent scholarly publishers, societies, associations, and university presses to facilitate the digital dissemination of more than 3,000 journals, books, reference works, and proceedings. HighWire also offers a complete manuscript submission, tracking, peer review, and publishing system for journal editors, Bench>Press. HighWire provides outstanding technology and support services, and fosters a dynamic and innovative community, enhancing the strengths of each of its members. For more info, visit highwire.org online.
