PubMed Commons has been a valuable experiment in supporting discussion of published scientific literature. The service was first introduced as a pilot project in the fall of 2013 and was reviewed in 2015. Despite low levels of use at that time, NIH decided to extend the effort for another year or two in hopes that participation would increase. Unfortunately, usage has remained minimal, with comments submitted on only 6,000 of the 28 million articles indexed in PubMed.
“One of the biggest challenges we all face, in an era where everyone has a platform, is figuring out whom to listen to. Open platforms that once seemed radically democratizing now threaten, with the tsunami of false information we all face daily, to undermine democracy. When everyone has a megaphone, no one can be heard. Our hope is that by listening carefully through all the noise, we can find the voices that need to be heard and elevate them for all of you.”
Experiments like WikiTribune, the collaborative news outlet created by Wikipedia founder Jimmy Wales, excite me. I love the idea of professional journalists working alongside members of the audience, sharing skills and knowledge. I love the feedback loop between users and creators, and have seen the productivity and partnership that can shine through in the spaces where these two designations meet. Collaborative projects – where news organizations and audiences tell stories in partnership – are also a potential way to address misinformation and build trust. At the Computation and Journalism Symposium, which took place October 13 and 14 at Northwestern University in Evanston, Ill., I sat down with three co-panelists to talk about the exciting new tools they’re building for collaborative journalism.
Read the full post How Journalists Are Using New Tools for Collaborative Journalism on MediaShift.
This article is taken from The Coral Project’s Community Guides for Journalism, a website filled with strategies and skills to use in your reporting.
Before you make any changes in your community strategy, you need to decide how you’ll know if your changes are working. That’s where metrics come in — numbers that measure some aspect of what is happening on your site.
This article is taken from The Coral Project’s Community Guides for Journalism, a new website filled with strategies and skills you can use in your reporting. This article was written by two team members on the project.
There are basic measures you can take ahead of time to reduce bad behavior in your system. These include:
- Set the first few comments from any new user to go to pre-moderation (if your system allows)
- Encourage users to report/flag bad behavior through clear onboarding and messaging. (Ideally, also have a system like Talk that accounts for unreliable flaggers.)
- Highlight good contributions – this models how users can get the newsroom’s attention through more than just bad behavior
- Create a list of places you can point users to – e.g. Crash Override, Heartmob, Trollbusters – to get support if they are being targeted.
- Respond with empathy to those who cross the line – they might not have understood the community guidelines. If appropriate and possible, consider giving them a time out from posting instead of banning them for life. However, sometimes a user is deliberately abusive, repeatedly targeting one or more people, and a ban alone has no effect because they keep returning.
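Two of the measures above (pre-moderating a new user’s first comments, and accounting for unreliable flaggers) can be sketched as simple rules. Everything here – names and thresholds alike – is a hypothetical illustration, not Talk’s actual logic:

```python
# Hypothetical moderation rules illustrating two measures from the list.

def needs_premoderation(approved_count, threshold=3):
    """Hold a new user's comments for review until a few have been approved."""
    return approved_count < threshold

def should_escalate(flagger_accuracies, cutoff=1.5):
    """Weight each flag by the flagger's past accuracy (the share of their
    reports that moderators upheld, 0.0-1.0), so a couple of reliable
    flaggers outweigh many unreliable ones."""
    return sum(flagger_accuracies) >= cutoff
```

Under these made-up numbers, two flags from highly accurate reporters would escalate a comment, while a pile of flags from users whose reports are rarely upheld would not.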
How you respond depends on the situation, but some themes are common across them all: it is important to work with the person being targeted and, where you can, to reach out to the abuser to try to discover what might have triggered this behavior.
Situation 1: Another user is the target
- Contact the person being targeted and ask what they would like to happen. If involving the police is one of your options, remember that they might have good reasons for being very wary of the police. Work with them on your proposed solutions. Keep them informed of any developments.
- Make a public statement about what’s happening, that it’s not ok, and what you’re doing about it. Also enlist the community to tell you if they come back again.
- If the person keeps coming back with new accounts, try slowing down approvals for new users; at a minimum, set new users’ comments to pre-moderation for their first comment or two.
- Contact the person doing it, and simply ask what is going on with them. There is often a triggering reason that made them first act this way. See if that can be addressed or at least acknowledged. Sometimes just recognizing and validating the existence of the situation that started the behavior can be enough.
Situation 2: A journalist/member of your team is the target
- Contact the journalist being targeted. If it seems to be a specific and genuine threat, make sure they’re safe and know what to do if the person tries to call or come to the office. Inform security at the office. Offer to let them work from home, or to cover hotel costs, if they feel genuinely targeted.
- Work with them on your proposed solutions. Keep them informed of any developments.
- Make sure the journalist isn’t expected to read their own comments at this time. See if you or one of your team can give specific attention to comments on their piece for a period of time.
- Contact the police if the journalist agrees, and you believe there is a genuine threat of harm.
- Again, contact the person doing it, and simply ask what is going on with them. There is often a triggering reason that made them first act this way. See if that can be addressed or at least acknowledged. Sometimes just recognizing and validating the existence of the situation that started the behavior can be enough.
Situation 3: General, repeated, non-targeted abuse
- Set your system to pre-moderate everything
- Look at the abuser’s history of contributions. Look for patterns. If they keep creating new accounts, how can you tell each new account is the same person returning? Is there something you can do to make repetition of these patterns go straight to pre-moderation or somehow be flagged for moderator attention?
- Again, contact the person doing it, and simply ask what is going on with them. There is often a triggering reason that made them first act this way. See if that can be addressed or at least acknowledged. Sometimes just recognizing and validating the existence of the situation that started the behavior can be enough.
- Encourage your trusted community to use Ignore/Mute functions (where available – we have it in Talk), and to contact you privately if the abuser seems to have returned with a new account.
For more articles on community skills and practice, visit The Coral Project’s Community Guides for Journalism
The post The Coral Project’s Guide to Managing Abusive Commenters appeared first on MediaShift.
To start putting a value on those who engage with its platform, The Times of London analyzed comments on its website from May 2016 to April 2017. The News UK title found that those who comment, who amount to about 4 percent of its subscribers, read three times as many articles as those who don’t comment.
For now, only Times subscribers can comment. The Times’ 1.2 million registered users can read comments, but they can’t participate in discussions. In time, the publisher plans to explore using comments to turn registered users into subscribers.
“We’re massively striving to get readers to use their subscription more frequently and drive habitual behavior,” said Ben Whitelaw, head of audience development at the Times and The Sunday Times.
Eventually, further research will help connect commenter behavior to where commenters are in the subscriber life cycle, which will help the Times reduce churn rates. “When we start putting a value on these people, then we can weigh up what weapons we have to engage them,” said Whitelaw, “but we need that data first.”
The majority of the most commented articles are about Brexit and can fetch about 2,000 comments, some of which are 400 words long, but the publisher also sees comment spikes on exclusive content and analysis of U.S. politics, such as the Times’ interview with Donald Trump in January.
“[The value in comments] is where there are multiple views and ideas being challenged, and experts — journalists or readers — get involved in the debate and add something to improve what’s above the line,” Whitelaw said. “We think it’s a clear point of difference and can drive business results.”
To free up resources, the Times switched in November 2016 to moderating comments after they had been published, rather than before. The switch allowed appropriate comments to appear on the site faster, said Whitelaw. Since then, the total number of comments has increased by 25 percent, although the number of commenters has remained the same. Comments on articles covering topics that incite passionate responses and a diverse range of opinion — like immigration, gender or race — are still moderated before publication. Since moving to this system, the Times hasn’t seen an increase in the number of comments it removes for breaching its terms of participation.
The Times’ audience development team has seven people. Five of them, rotating on a daily basis, are responsible for monitoring comment threads, steering conversations away from more contentious issues or involving journalists. They also encourage journalists to respond to readers. Whitelaw said the journalists are often happy to get involved because they have written a column on the subject.
For readers, part of the appeal of commenting is access to experts and the Times journalists. “We’re emphasizing to journalists that this is part of their job,” said Whitelaw. “Readers like it, and you can see the benefits in terms of engaging readers and renewing subscriptions.”
This is the first time the Times has delved into how comments can affect the bottom line of business, according to Whitelaw. Elsewhere, The New York Times is opening up more articles to comments, but it needs to automate part of the process to manage moderation. The Financial Times has also found those who comment are more engaged.
The Times of London uses social platforms for low-risk community-building experiments, like its Red Box bot newsletter and its Brexit Facebook group. The latter now has more than 1,200 members, up from 150 in April, and attracts a mix of Times subscribers and non-subscribers.
A surprising discovery from the research concerned recommended comments, which appear at the top when feeds are sorted by “most recommended.” Popular comments get about 50 recommendations, said Whitelaw. The Times’ research found that those who recommend comments read twice as many articles as those who don’t recommend comments.
“Commenting is what you see; that’s the tip of the iceberg,” Whitelaw said. “But there’s a mass of activity that goes on beneath the surface, which recommendations fuel. Because people only recommend a handful of comments, they pick topics they are most knowledgeable on and spend more time thinking about it. They are like silent Times readers. These could be people we should be gravitating toward or rewarding.”
The post The Times of London finds commenters are most valuable visitors appeared first on Digiday.
The New York Times’ strategy for taming reader comments has for many years been laborious hand curation. Its community desk of moderators examines around 11,000 individual comments each day, across the 10 percent of total published articles that are open to commenting.
For the past few months, the Times has been testing a new tool from Jigsaw — Google parent Alphabet’s tech incubator — that can automate a chunk of the arduous moderation process. On Tuesday, the Times will begin to expand the number of articles open for commenting, opening about a quarter of stories on Tuesday and shooting for 80 percent by the end of this year. (Another partner, Instrument, built the CMS for moderation.)
“The bottom line on this is that the strategy on our end of moderating just about every comment by hand, and then using that process to show readers what kinds of content we’re looking for, has run its course,” Bassey Etim, Times community editor, told me. “From our end, we’ve seen that it’s working to scale comments — to the point where you can have a good large comments section that you’re also moderating very quickly, things that are widely regarded as impossible. But we’ve got a lot left to go.”
These efforts to improve its commenting functions were highlighted in the Times announcement earlier this month about the creation of a reader center, led by Times editor Hanna Ingber, to deal specifically with reader concerns and insights. (In the same announcement, it let go Liz Spayd and eliminated its public editor position.)
Nudging readers towards comments that the Times “is looking for” is no easy task. Its own guidelines, laid out in an internal document and outlining various rules around comments and how to take action on them, have evolved over time. (I took the Times’ moderation quiz — getting only one “correct” — and at my pace, it would’ve taken more than 24 hours to finish tagging 11,000 comments.)
Jigsaw’s tool, called Perspective, has been fed a corpus of Times comments already tagged by human editors. Human editors then trained the algorithm during the testing phase, flagging its moderation mistakes. In the new system, a moderator can evaluate comments based on their likelihood of rejection and check that the algorithm has properly labeled comments that fall into a grayer zone (comments with a 17 to 20 percent likelihood of rejection, for instance). The community desk team can then set a rule to allow all comments that fall between, say, 0 and 20 percent to go through.
“We’re looking at an extract of all the mistakes it’s made, evaluate what the impact of each of those moderating mistakes might be on the community and on the perceptions of our product. Then based on that, we can choose different forms of moderation for each individual section at the Times,” Etim said. Some sections could remain entirely human-moderated; some sections that tend to have a low rate of rejection for comments could be automated.
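The per-section routing Etim describes can be sketched in a few lines. This is an illustrative reconstruction, not the Times’ or Jigsaw’s actual code: the function name, the auto-reject band, and the thresholds are assumptions (the article only mentions a 0–20 percent auto-approve band as an example).

```python
# Hypothetical sketch of routing comments by a model's predicted
# likelihood of rejection (0.0-1.0). Thresholds are illustrative;
# the article suggests each section could tune its own rules, and
# some sections could stay entirely human-moderated.

def route_comment(rejection_likelihood,
                  auto_approve_below=0.20,
                  auto_reject_above=0.80):
    """Decide what happens to a comment based on its predicted score."""
    if rejection_likelihood < auto_approve_below:
        return "approve"        # low risk: publish without human review
    if rejection_likelihood > auto_reject_above:
        return "reject"         # high risk: hold or reject automatically
    return "human_review"       # gray zone: queue for a moderator
```

A fully human-moderated section would simply set the thresholds so that every comment lands in the human-review band.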
Etim’s team will be working closely with Ingber’s Reader Center, “helping out in terms of staffing projects, with advice, and all kinds of things,” though the relationship and roles are not currently codified.
“It used to be when something bubbled up in the comments, maybe we’d hear repeated comments or concerns about coverage. You’d send that off to a desk editor, and they would say, ‘That’s a good point; let’s deal with this.’ But the reporter is out reporting something else, then time expires, and it passes,” Etim said. “Now it’s at the point where when things bubble up, [Ingber] can help us take care of it in the highest levels in the newsroom.”
I asked Etim why the Times hadn’t adopted any of the Coral Project’s new tools around comment moderation, given that Coral was announced years ago as a large collaborative effort between The Washington Post, the Times, and Mozilla. It’s mostly a matter of immediate priorities, according to Etim, and he can see the Times coming back to the Coral Project’s tools down the line.
“The Coral Project is just working on a different problem set at the moment — and the Coral Project was never meant to be creating the New York Times commenting system,” he said. “They are focusing on helping most publishers on the web. Our business priority was, how do we do moderation at scale? And for moderation at our kind of scale, we needed the automation.
“The Coral stuff became a bit secondary, but we’re going to circle back and look at what it has in the open source world, and looking to them as a model for how to deal with things like user reputation,” he added.
It was late April and the staff of the Coral Project was “on tenterhooks” as The Washington Post was conducting its first public test of Talk, the project’s new commenting platform, Andrew Losowsky recalled recently.
The Washington Post — which launched the Coral Project along with The New York Times, Mozilla, and the Knight Foundation to improve communities around journalism — invited about 30 commenters who were active on its Capital Weather Gang blog to try out the platform and offer feedback. The callout attracted more than 130 comments, which included Post staffers probing commenters for more details and specifics, and additional reactions submitted through a form and email.
“We were expecting people to be quite negative,” said Losowsky, the project lead. “Initial change isn’t something that people tend to welcome. It looks a bit different, it has a few different features, and the responses we got were actually very good and very respectful and thoughtful. That’s, of course, what can happen when you openly make clear that you are listening to and engaging with your readership.”
The Post plans to make the Talk platform its primary on-site commenting system, and it’s now working to further integrate it into its site with plans “to launch as soon as is practical,” said Greg Barber, the Post’s director of digital news projects.
The Coral Project, meanwhile, is taking that feedback from the Post’s users and integrating some of the changes they suggested into the platform.
Talk will replace the Post’s current commenting platform, which it calls Reverb internally, Barber said. “It was born of necessity because our commenting vendor went out of business and we needed a solution, so we made one,” he said. “It was created during a time after the Coral Project had been announced but the Coral software wasn’t yet ready, so we needed an interim solution…it was never intended to be a permanent solution to our commenting needs. Coral was.”
The Coral Project launched in 2014 with a three-year, $3.89 million grant from the Knight Foundation that was set to expire this summer. The project has been able to secure additional funding from Mozilla and the Rita Allen Foundation to continue its work, Losowsky said, adding that they’re in conversations with additional funders, including Knight. (Disclosure: Knight also supports Nieman Lab.)
Along with Talk, Coral has also released Ask, a platform that enables newsrooms to ask specific questions of their audience, and it’s planning to release guides to journalism and engagement later this year.
While the Post will be the first news organization to use the Talk system, the Coral Project is in talks with a number of other outlets who didn’t want to be among the earliest adopters, Losowsky said.
Talk was designed with the idea in mind that a commenting platform should be more than just an empty box at the bottom of a story. As journalism business models become increasingly reliant on direct reader revenue — digital subscriptions are all the rage right now — Losowsky said commenting systems should work to proactively engage readers and build community around the news:
Almost everybody online knows how to post something on Facebook or Twitter. The barriers to entry to being able to publish your thoughts online are [low]. As a result of that, news organizations need to think about what is the kind of dialogue they want to host versus the kind of dialogue that will appear elsewhere. I think it’s perfectly fine to say that there are rules here that are different from rules in other spaces, and if you want to do some other form of interaction, you can go and do it over there — but this is the kind of thing we’re looking for here. These are the baseline assumptions that we have here. Here are the things we’re trying to do with it. This is what this space is for versus that space.
Being able to really define that, I think, is going to be really important. On the one hand, news organizations are not going to win in a battle with Facebook to create the best social network. But what news organizations can do is create a space which gives direct access to the journalists, that has the ability to bring the community into the process and be part of the process, manage interaction on the news organizations’ terms rather than Facebook’s terms about what is visible, what moderation tools you have, about the ability to focus and highlight on different conversations and so on. And news organizations can be transparent about how they’re using people’s data and really safeguard the privacy and transparency around the data of every interaction that they’re having with the community.
On every article, news orgs using Talk can set an opening prompt at the top of the box to try to guide the discussion and keep the conversation on topic. Other features meant to facilitate productive discussions include the ability to add context to reports of inappropriate comments, banned words that are automatically rejected and suspect words that are flagged for moderation, and badges for staff members so they’re easily identifiable.
Outlets can also personalize the way users can respond to comments. The default, based on research from the Engaging News Project, is a “respect” button instead of a “like” button.
Moderators are able to ban users directly from the comment, and the moderation dashboard automatically highlights banned and suspect words. They can also pin their favorite comments at the top of the feed, to highlight the best comments and also set the tone for the conversation.
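As a rough sketch of how the banned-word/suspect-word distinction described above might behave: banned words cause automatic rejection, suspect words route the comment to a moderator. The word lists, names, and naive matching here are invented for illustration; Talk’s real implementation is more sophisticated.

```python
# Illustrative banned/suspect word screening, not Talk's actual code.

BANNED = {"slurword"}          # placeholder entries a newsroom would configure
SUSPECT = {"scam", "idiot"}

def screen_comment(text):
    """Return the fate of a comment under a simple word-list policy."""
    words = set(text.lower().split())
    if words & BANNED:
        return "rejected"                  # auto-rejected outright
    if words & SUSPECT:
        return "flagged_for_moderation"    # held for a human moderator
    return "published"
```

In practice a system would also normalize punctuation and catch deliberate misspellings, which this word-boundary sketch does not attempt.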
“There are a lot of things that we’re focusing on — first of all, from the commenter’s perspective, really thinking about how do we indicate what’s wanted in the space and use subtle cues to encourage and improve the behavior,” Losowsky said. “Then, from the moderator or journalist’s side, how do we create tools to make it fast and easy to be able to do the actions that they need to take — remove the bad content, ban users who are abusing the system, suspend those who in some way are perhaps redeemable or having a bad day and give them a time out, and then be able to not only approve but highlight really good comments so that you’re indicating the kind of behavior you want to encourage.”
Talk, like everything The Coral Project has produced, is open source, so outlets can build upon it as they like. The entire system is built around plugins, with the idea that publishers can tailor it to their needs. The Coral Project also offers hosting services, which could be useful for smaller newsrooms.
For its part, the Post has been conducting quality assurance testing and making sure the code in Talk doesn’t interfere with any of the Post’s other services. Talk was also tested on the Post’s development and staging servers to make sure everything worked properly before it was rolled out to users.
Barber said the test in late April was unusual because the Post conducted it before it was able to hook Talk up to its own authentication system, which is one of the ways that it’s customizing the platform to hook up to its infrastructure. The Post is also connecting Talk to the systems it uses to monitor its servers and hooking it into its CMS so comment streams can automatically be created for stories.
“The Coral Project group is continuing to build core features onto Talk and to take some of its features and turn those into plugins that are more accessible to organizations like the Post, so that we can tinker with them, as we’re often wont to do, to customize in different ways,” Barber said. “Other organizations that are interested in customizing in the same way that we are — or in ways that are different from what we want to do — will have that capability as well. The Coral team, of course, is critical to this. They’re building the main software, they’re building the main functionality, they’re giving us the spaces to customize the bits and pieces we want to work in specific ways. But then what The Washington Post team is doing is working on specific plugins that might not fit Coral’s overall strategy, but are things that we want to do here.”
The Coral Project team is also working on adding new features based on the feedback it received from Post commenters who tested out Talk last month. The current Post commenting system, for instance, allows readers to edit comments; Talk didn’t. They’re now working on adding an edit feature to Talk. Outlets will be able to set a time limit (maybe five or 10 seconds) for commenters to read over their post and edit it before it goes live.
“If you change your mind, if you regret it, if you see a typo you still have a window in which you can edit and change it,” Losowsky said. “That was something that was requested by a number of different Washington Post people.”
Barber and Teddy Amenabar, the Post’s comments editor, were active in the comments for the test last month, thanking users for feedback and asking questions such as “Anything missing that you’d like to see in a new system?” They collected that feedback and created a spreadsheet with the information that they then shared with the Coral Project. The Coral Project plans to take that feedback and continue to build out Talk while also looking for ways to help news organizations develop their engagement strategies and define what kind of conversations they want to host on their own platforms.
“What do you want from us in this space from the perspective of the audience? And from the perspective of the journalist, what do we want in this space? What do we want to happen here? By outlining and making clear what your expectations are for the space, you’re already creating a greater likelihood of success.”
Machine learning and natural language processing in play for community moderation https://viafoura.com/blog/automated-moderation-and-community-moderation/ … #inma17
Written by WENCY LEUNG in The Globe and Mail
“…By finding patterns in the messages – such as readability, frequency of swearing and tendency to veer off topic – Cheng thinks there are clues to “who’s behind this bad behaviour.” And Cheng, whose fellowship is sponsored by Microsoft, isn’t the only one who believes our personalities, mental states, and even physical health are reflected in the language we use online.
It turns out, the comments we make online reveal a lot about us. Researchers are now analyzing online comments for a wide array of predictive patterns and signals, using Internet discussions and social media as sources of constant, easy-to-access information about what’s going on in people’s lives.
Their efforts may eventually allow health professionals to monitor patients’ well-being based on their Twitter streams and Facebook entries. Controversially, employers or insurance companies could one day screen job applicants and potential clients based on their social media status updates…”
Lilah Raptopolous, community manager at the FT, explains how the newsroom approaches editorial projects and stories in a way that involves readers from the beginning
“The FT places an emphasis on comments because they are a valuable tool both editorially and commercially, she added. Editorially, they help build trust with readers, become story leads or sources, give direct feedback to FT’s reporting and connect people.
“I like to remind reporters that there are a lot of people commenting, but when you are responding to them you are not just responding to that one person but also to everyone else reading the comments. So the interactions that happen there are valuable to everyone who is quietly paying attention.”
On the commercial side, the FT’s internal audience research and reader surveys have shown “a strong link” between comments and engagement. People who write comments are, on average, seven times more engaged than those who don’t, so they spend more time on the website, read more stories and return more often.”
No matter what we call it, commenting on scholarly publications has a spotty record of success. Despite the mediocre results, journals, databases, and third party sites keep trying to get authors and readers to engage in this way. This post explores different models and the challenges online commenting faces.
Join us May 3-6 in San Francisco at I Annotate 2017, the fifth annual conference for annotation technologies and practices. This year’s themes are: increasing user engagement in publication, science, and research, empowering fact checking in journalism, and building digital literacy in education.
Let’s be honest, discussion forums are a great idea—we all want students to engage more with their assigned readings and with their classmates. But “discussion” forums fail at precisely what they claim to do: cultivate quality conversation.
Collaborative annotation assignments are a better way to encourage students to engage more deeply with course content and with each other. For one, conversations that take place in the margins of readings are more organic, initiated by students themselves about what confuses or intrigues them most. In addition, these annotation discussions are directly connected to texts under study, helping to keep conversation grounded in textual evidence.
Using Hypothesis, instructors can make PDFs and web pages hosted in Canvas annotatable. Students can then annotate course readings collaboratively, sharing comments, and replying to each other’s comments. Instructors can also create annotation assignments using Hypothesis so that students submit their annotation “sets” for feedback and grading in Canvas.
You may have missed our live webinar on 4 April 2017, but you can watch the recording and view the slides to learn more about the pedagogical value of collaborative annotation and get a guided tour of setting up and using the Hypothesis tool in Canvas. Educators currently using Hypothesis with Canvas in their classrooms also shared their experiences with annotation.
- Dr. Jeremy Dean, Director of Education, Hypothesis
- Chris Long, Education Technology Coordinator, Huntington Beach Union High School
- Michelle Sprouse, Doctoral Student in English and Education, University of Michigan
- Dr. Alan Reid, Assistant Professor, English, Coastal Carolina University
- Hypothesis Canvas webinar slide deck
- Contact Jeremy Dean via email or Twitter
- Hypothesis Terms of Service
- Hypothesis account sign up
- Download Hypothesis extension
- Private Hypothesis group for Canvas webinar
- Hypothesis webinar Canvas course home
- Hypothesis Teacher Guide
- Hypothesis Canvas app installation guide
- Register for I Annotate 2017
- Hypothesis Customer Support
This piece originally appeared on Source.
Over the last several months, The New York Times R&D Lab has been thinking about the future of online communities, particularly those communities and conversations that form around news organizations and their journalism. When we think about community discussion, we typically think about comments sections below our articles, or outside forums that link to our content (Twitter, Reddit, etc.). But what comes after free-text comments?
To explore this further, we developed Membrane, an experiment in permeable publishing. By permeable publishing, we mean a new form of reading experience in which readers may “push back” through the medium to ask specific, contextual (and constrained) questions of the author. Membrane empowers readers with two new abilities. First, they can highlight any piece of text within the article, select a question they want to ask (e.g. “Why is this?”, “Who is this?”, “How did this happen?”), and submit that question to the newsroom, asking the reporter to explain further or clarify. Second, they can browse, inline, questions that the reporter has already answered, giving them the benefit of the discussion that has already occurred. When a reader’s question is answered, they are notified, letting them know that the newsroom is paying attention to their feedback. In this way, the article becomes a channel through which questions can be asked, responses can be given, and relationships can be developed.
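The core of this interaction is a question anchored to a highlighted span of text, drawn from a constrained set of prompts. A minimal sketch of the data such a system might store follows; the class and field names are assumptions for illustration, not Membrane’s actual schema.

```python
# Hypothetical data model for a Membrane-style contextual question.

from dataclasses import dataclass
from typing import Optional

# The constrained prompts mentioned in the article.
ALLOWED_PROMPTS = {"Why is this?", "Who is this?", "How did this happen?"}

@dataclass
class ReaderQuestion:
    article_id: str
    highlighted_text: str            # the span the reader selected
    prompt: str                      # must come from the constrained set
    answer: Optional[str] = None     # filled in when the reporter responds

    def __post_init__(self):
        if self.prompt not in ALLOWED_PROMPTS:
            raise ValueError("prompt must be one of the constrained questions")

    @property
    def answered(self):
        """True once the reporter has responded (triggering a reader notification)."""
        return self.answer is not None
```

Constraining the prompt set is what makes the aggregate analysis described below possible: identical questions about the same span can be grouped and answered once.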
While free-text comments allow the reader to express their point in their own words, the individual nature of those comments creates several challenging constraints. First, it means that it’s difficult to understand anything about the aggregate conversation around a piece: what are people most interested in or concerned about, what types of questions are being asked, etc. Second, if journalists want to engage in a comment thread, they have to respond to individual commenters, rather than having any way to respond to an entire topic or an oft-repeated question. Third, they require teams of moderators to keep out off-topic or objectionable content.
Not surprisingly, other publications have been experimenting with alternative forms of incorporating user feedback into the story process and cultivating community. Hearken, a platform from the creators of Curious City, collects questions submitted and voted on by the public, which producers, reporters, and editors then work to answer. Digg Dialog is being developed as a tool for news organizations to make a “two way conversation between writers, editors, and readers.” The Coral Project–a collaboration between the Mozilla Foundation, The New York Times, and The Washington Post, funded by a grant from the John S. and James L. Knight Foundation–is rethinking community engagement around news and is currently working on a project to ascertain which users are most trustworthy. Buzzfeed Reactions are a classic example of a constrained interaction that provides insight into what users find interesting (or OMG-worthy). Several publications have been experimenting with removing comments entirely and shifting the conversation to social media.
Each of the previously mentioned tactics takes a different approach to time (synchronous/asynchronous), constraints on the reader (submitting free text/submitting something more constrained), and juxtaposition with the article (adjacent to the thing being talked about/below or away from the article). We began to wonder what other interactions could exist along these axes. What kinds of interactions might avoid the challenges of free-text comments? How might these interactions shape different kinds of behavior, both among our readers and between readers and journalists? How can we make it easier for newsrooms to understand where people want more context about a particular part of a story, and also provide tools for incorporating that feedback in the article itself? The way we design methods of communication shapes and informs the communities that form around journalism–what alternative kinds of communities can we foster?
How We Made It
Membrane is the result of a series of iterations and prototypes around the ideas of digital text and community. We spent quite a while thinking about how article text is typically treated as a static whole (a larger research project that you can read about here), without interactivity, save for occasional hyperlinks. When we think about adding interactivity, we tend to turn to multimedia, like video, slideshows or interactive graphics. Previous experiments around the potential of digital text in journalism are compelling but relatively rare: ProPublica’s thoughts on sentient articles and their Explore Sources tool, The New York Times’s piece on the best and worst places to grow up, Smarticles, and Tangle being a few examples. We wondered: how could we use the affordances of digital text as the anchor for compelling interactions that elucidate stories and provide context?
One of our first prototypes around this idea was an inline card-based interaction. Certain phrases or words in an article would be highlighted, and when the user clicked or tapped on these highlighted phrases, content would appear below the paragraph giving further context on that person, place, idea, or event. We realized, however, that this assumed we knew every question our readers wanted to ask. What if they were curious about something we hadn’t predicted? How could we allow them to show us where they were curious? And how could we design a system that wouldn’t require extensive moderation to do so?
Questions in the Margin
Our second prototype was a mock-up that visualized the core interaction of what would become Membrane. In this prototype, the user highlighted the text about which they had questions, then selected a question from a predefined drop-down list. We limited these questions to “who, what, when, where, why.” In this prototype, however, the actual Q&A happening in the article was moved off to the sides, centering the article as the core content and treating the responses to readers as ancillary to that primary information. We quickly realized that this thinking was still built on a print-oriented understanding of text.
Reader & Response
Our third prototype is what became the current version of Membrane. In this version, there are only two things: “prompts” and “responses.” A “prompt” is any question or feedback submitted by the reader to the reporter, in the form of highlighted text plus a question. A response is any content contributed by an author: the first piece of writing on the subject (which we refer to as the “opening”), an answer to a question, etc. A Membrane piece starts with an opening of any length, which readers can then mark up with prompts. Reporters can select prompts and respond to those prompts with more responses, which readers can also mark up, and so on, creating a tree-like structure that can be explored at any level of depth the reader likes.
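The prompt-and-response tree described above can be sketched in a few lines. This is a hypothetical rendering of the data model, not Membrane’s actual implementation; the class and field names are assumptions.

```python
# Sketch of the tree of "responses" (author content) and "prompts" (reader questions).
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Response:
    author: str
    text: str
    prompts: List["Prompt"] = field(default_factory=list)  # reader prompts anchored here

@dataclass
class Prompt:
    question: str
    highlight: str                       # the text span the reader selected
    response: Optional[Response] = None  # filled in when an author answers

# The "opening" is simply the root response. Readers mark it up with prompts,
# authors answer with new responses, and those answers can themselves be
# marked up, forming a tree the reader can explore to any depth.
opening = Response("reporter", "City council approved the rezoning plan late Tuesday.")
p = Prompt("Why is this?", "approved the rezoning plan")
opening.prompts.append(p)
p.response = Response("reporter", "The plan passed 6-3 after months of public hearings.")
```

Because every node is either a prompt or a response, the same two types cover the opening, follow-up questions, and answers to answers, which is what keeps the structure uniformly explorable.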
Although Membrane was developed in a journalistic context, we quickly realized that we shouldn’t limit our thinking about the technology to just reader/reporter scenarios. The overall interaction we had designed could support far more types of conversations than the ones we initially imagined. Because of this, we were mindful to keep Membrane relatively agnostic to both technical implementation and subject matter (hence “prompts” and “responses” rather than, e.g., “questions” and “answers”). In this way, the project as a proof-of-concept could demonstrate its use in one of the journalistic scenarios we had imagined, while also pointing to larger questions. For example, Membrane has led us to wonder how such systems could function with:
- Different types of responses. A reporter could respond to prompts with text, or they could respond to prompts with images, videos, audio, etc.
- Different types of subject matter. In addition to news articles about people or events, Membrane could be used for ongoing written pieces that are soliciting ideas/next steps from the community.
- Different lengths of time. The community that forms around an author who uses Membrane very consistently would likely look different from the more ephemeral communities that form around a time-constrained event (elections, Oscars, etc.).
- Different types of questions. A writer could use the default “who, what, when, where, why” list of questions, or customize their list with subject-specific questions (Membrane tailored for an article on cooking, auto repair, etc.).
- Different numbers and types of authors. Membrane supports multiple authors, so a piece could be written by one author or several, and community members could themselves become authors.
- Different forms of interaction. Membrane might be used to get additional detail or source material on a finished piece; to get updates on an ongoing event or situation; to ask for more justification on an opinion piece; to nudge the evolution of an ongoing piece or project, and more.
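The customizable question lists mentioned above could amount to a small per-piece configuration. A minimal sketch, assuming a topic-keyed lookup with a fallback to the default five questions (all names here are illustrative):

```python
# Hypothetical per-piece question configuration; not Membrane's actual config format.
from typing import List

DEFAULT_QUESTIONS = ["Who is this?", "What is this?", "When was this?",
                     "Where is this?", "Why is this?"]

# Subject-specific lists a writer might supply for a tailored piece.
CUSTOM_QUESTIONS = {
    "cooking": ["What can I substitute here?", "How long does this step take?",
                "Why this technique?"],
}

def questions_for(piece_topic: str) -> List[str]:
    """Return the question list for a piece, falling back to the default five."""
    return CUSTOM_QUESTIONS.get(piece_topic, DEFAULT_QUESTIONS)
```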
In this way, Membrane functions as a prototype in its own right while also prompting thought about how systems like it could support new forms of communication, communities, and journalism.
We will be open sourcing Membrane in the coming months, and look forward to other developers playing with the code and exploring some of these questions. Before that, we’ll be launching a live Membrane experiment, asking some of the most interesting voices in journalism and community building to write pieces with Membrane and engage with readers’ questions. Until then, you can see other New York Times R&D Lab projects at nytlabs.com.