You can now use social audio app Anchor to publish podcasts

The social audio app Anchor is on Thursday introducing a new feature that allows users to easily publish podcasts to major podcasting platforms, including Apple Podcasts and Google Play.

Users first set up the podcast through the app by choosing a name, art, and other details; subsequent episodes are then automatically added to the feed.

“They’ll be able to control everything about the podcast that they need to control from Anchor,” cofounder and CEO Michael Mignano told me. “Our hope is that we can remove all of the technical and difficult aspects of the process to the end user. If we had it our way, the user would never even need to know what an RSS feed is. It’s an older piece of technology that we don’t think most creators need to even be aware of.”

Even though users will be able to upload podcasts through the app, they’ll still be subject to the requirements of each of the podcast platforms, and Mignano said podcasts created through Anchor should be available on the various podcast apps within a day or two of the initial upload.

While Anchor wants users to create audio and listen within the app, Mignano said the company was adding the ability to export audio as podcasts because it wants to encourage users to create longer stories that might be better suited to listening to as a podcast than in the app, which was designed for shorter audio.

“For us, anything that removes friction or enables creators to make something is a win for both the creator and for us,” he said. “If we can bring people over to the platform by offering them tools they can’t get anywhere else, then we feel we’ve done our jobs.”

Anchor launched in 2016 and was designed to try and make it easier for users to record and share audio while also fostering discussions. The app was incubated at the New York startup accelerator Betaworks, and it has raised more than $4 million in venture funding.

Anchor has yet to begin monetizing the app, but Mignano said the app will likely introduce advertising or subscription offerings. He declined to offer a timeline, but said the company is committed to eventually sharing revenue with users.

In March, Anchor relaunched the app with an array of new features, including integrations with Spotify and Apple Music that let users import songs, as well as tools that simplify the interview process and enable listeners to call into shows.

At the time, Nick Quah wrote in his Hot Pod newsletter that the additions put Anchor in competition with Bumpers, an audio creation app founded by Ian Ownbey and Jacob Thornton, formerly of Twitter:

In my head, I’ve come to place Anchor and Bumpers in one bucket, given both these apps’ focus on serving as the mediating space between users and other users, while establishing another bucket specifically for short-form audio app 60dB and the AI-oriented Otto Radio which seems, to me at least, primarily occupied with developing a firm grasp on the interface between professional publishers and listeners.

Mignano wouldn’t say how many users Anchor has, and it remains to be seen if social audio can take off when apps such as Facebook and Snapchat already dominate many users’ time and homescreens. Still, a number of outlets, including The Verge and The Outline, are publishing on the platform, and as the app continues to evolve, Anchor wants to ultimately make it easier for users to create and share audio clips.

“People can both create and listen freely, much like open platforms for other mediums like photos, text, or videos,” he said. “We want it to be a conversation, we want it to be multidirectional, not just one-way like broadcast. I think a way for us to get there is by opening up tools, creating utilities and tools that empower creativity.”

The Toronto Star, “surprised by low numbers,” is shutting down Star Touch, its expensive tablet app

The Toronto Star announced on Monday that, “after much research,” it’s shutting down Star Touch, the expensive ($23 million invested!) tablet-only app it launched in 2015. The app’s shutdown is accompanied by layoffs of 29 full-time employees and one part-time employee.

“The overall numbers of readers and advertising volumes are significantly lower than what the company had forecast and than what are required to make it a commercial success,” John Boynton, president and CEO of TorStar and publisher of the Star, wrote in a memo to employees. (The previous publisher, John Cruickshank, stepped down last year after it became clear Star Touch was underperforming.)

A Star spokesman told The Globe and Mail that “the tablet’s monthly audience peaked at 80,000 unique readers, a small percentage of the Star’s monthly online readership, which hovers around 550,000 in the Greater Toronto Area alone.” It had originally aimed to be at 180,000 daily users by the end of 2016; it was at only 26,000 by March of last year.

Star Touch shuts down July 31 and will be replaced by a new universal app that, well, sounds as if it does what any news app should do now and it’s crazy the Touch app didn’t do these things: “operates both on smartphones and tablets…offers more of the features that you, our readers, have told us you want: breaking news, constant updates, more content, easy searches and navigation and the ability to share items much more easily on social media.”

“We need to simplify our business and having three downloadable apps, namely a tablet app, a mobile app and PDF, confuses consumers and is resource intensive, complex and costly. Having just two apps will simplify this,” Boynton wrote in his memo, printed in full at Canadaland along with a memo from the Star’s editor-in-chief, Michael Cooke. (The two apps will be the universal one and this print replica.)

Star Touch was supported by advertising and entirely free to readers. It was modeled on Montreal’s French-language La Presse+, which is digital-only via iPad app (and website) Monday through Friday and has a print edition on Saturdays (though that too is expected to go away later this year).

La Presse “remains, by accounts as recently as last week, a success,” editor-in-chief Cooke noted in his memo. “Throughout the diligent work before, at and after launch, Star executives and managers, and really all of us, knew there was significant risk that the Montreal experience might not translate to the [Greater Toronto Area] — arguably the toughest, most saturated media market in North America.”

Ken Doctor wrote about Star Touch’s “one time a day” model for Nieman Lab in 2015:

Star Touch, like La Presse+, won’t be a breaking news product. Readers get one edition a day, seven days a week. The breaking news function, The Star believes, remains with free smartphone and desktop web; Star Touch will link to The Star’s site for live files. Why? Research showing readers want editions — the old Economist bookends theory — and, in any event, the complexity of tablet presentation would require even more labor for a continuously produced product.

While the Star gradually built other updating features into the Touch app — a “live news” panel for real-time updates; breaking news notifications — it clearly wasn’t enough to convince readers that a tablet app updated once a day was the best way to get their news.

Canada’s Postmedia also made a bet on tablet editions, which it shut down in 2015. It announced last week that it is launching new mobile apps for the National Post and the Financial Post, as well as a new digital replica of the National Post.

Star Touch closing notice from this tweet.

With its new Reader Center, The New York Times wants to forge deeper connections with its readers

When it comes to hearing from readers, The New York Times wants to go a lot further than just letting people chime in at the bottom of some articles.

Last week, the newspaper announced The New York Times Reader Center, a new initiative focused on finding new ways to connect with Times readers and deepening the connections it already has. The team, whose “exact size is still taking shape,” according to a Times spokesperson, will be staffed by a handful of journalists who will work with various Times departments — including interactive news, social, and even marketing and branding — on a range of reader-centered projects.

“Our agenda is not for our little team to make a splash. Our agenda is for The New York Times to have stronger connections with readers,” said Hanna Ingber, an editor on the international desk and the project’s lead. “In order for us to be successful, we need to work with everyone. The Reader Center will be a way to convene all those people to make the most of all the work that’s already happening.”

The Reader Center has already pushed out a handful of projects. Last month, it invited a small group of Times subscribers to receive text messages from White House correspondent Michael Shear as he travelled with President Trump on his first international trip. (The project is similar to a Times effort last summer that let readers receive texts about the Rio Olympics from Times deputy sports editor Sam Manchester.)

In another project, as part of a story by Times reporter Claire Cain Miller about how parents can raise feminist sons, the Reader Center asked readers to share their own experiences raising boys, and included many of their comments in a follow-up piece. That project was modeled in part after an initiative Ingber helped run in conjunction with “Ladies First,” a Times documentary about women in Saudi Arabia who were able to vote and run for office for the first time. After the Times ran the piece, Ingber made a call out to Saudi women, asking for stories about their lives. Over 6,000 of them replied, and the Times used some of their responses in a follow-up story. “That was good journalism,” Ingber said. “We used these voices to better understand what their lives were like and what their hopes were.”

With the Reader Center, the Times is the latest news organization to make deeper reader engagement more core to its editorial processes. Some organizations (The Boston Globe, The Washington Post, the Times itself when it comes to podcasts) have turned to Facebook groups, while other efforts (the Civics 101 podcast, The Texas Tribune’s community editor) are using reader input to influence their editorial decisions. While these efforts go beyond article comments in an effort to engage readers, comments, too, are a big part of the equation. As my colleague Shan Wang reported earlier this week, the Times aims to open 80 percent of its articles up for comments this year, up from 10 percent, using algorithmic tools to help evaluate them in bulk. The Times also plans to amplify reader voices by regularly producing comment roundups.

“We want to do everything we can to hear more of those voices and amplify them,” Ingber said.

It isn’t clear yet how much overlap the new Reader Center will have with the responsibilities formerly assigned to the Times’ public editor, a position the newspaper eliminated this month. Publisher Arthur Sulzberger Jr. said in a memo to Times staffers last week that readers around the world “collectively serve as a modern watchdog,” and are “more vigilant and forceful than one person could ever be.”

Ingber said that while the Reader Center isn’t designed to replace the public editor, the new initiative is similarly built around the mission to create greater transparency into how Times stories are produced, and to hear directly from readers on how it can improve its processes.

“Our goals are to make journalism more transparent and to change the relationship between readers and journalists, by empowering journalists to do more to connect with readers, to respond to readers, and to be more engaged with readers,” Ingber said. “Ultimately this will not just lead to better journalism, but also to more accountability and to the elevating of the position of our readers.”

Photo of a woman reading the New York Times by Eflon used under a Creative Commons license.

Apple’s new analytics for podcasts mean a lot of change (some good, some inconvenient) is on the way

“It may look obscure,” tweeted Gimlet’s Matt Lieber, “but this is the biggest thing to happen to the podcast business since Serial first went nuclear.” Lieber was talking about a major announcement that came out of the podcast session at WWDC, the Apple developer conference, which took place on Friday. It was a piece of business delivered with relatively little fanfare — par for the course, I think, given Apple’s historically chill relationship with podcasts — and Lieber’s right. This is a very big deal, and a lot of change is on the way.

Here’s the headline: Apple is finally opening up in-episode analytics for podcasts. The data will be anonymized, consistent with Apple’s general stance on privacy, and the new analytics layer is scheduled to arrive with the iOS 11 update this fall. This means that podcast publishers will, at long last, receive data that tells them just how much of their episodes are actually being listened to — within the Apple Podcasts app, at least, which is still largely understood to serve the majority of listening. (Estimates, however sampled, tend to range between 60 and 80 percent.) Previously, podcast consumption was chiefly measured in downloads, a black-box metric criticized as lacking the level of granularity that’s table stakes for advertisers buying on digital platforms in 2017. With this announcement, that measurement issue — long articulated as the defining problem of the medium — can finally be meaningfully interrogated, and many believe the main hurdle impeding advertisers from committing more dollars to the space can finally be cleared.

But some are also arguing this change will bring a mixed bag of consequences, and in some ways, the new data puts the space at risk of snuffing out various dynamics that make it special. Which is to say, while there’s a hope that this will finally lead to podcasting realizing its full economic potential, the shadow of Web 2.0 looms large.

The WWDC session also contained a few other useful announcements, including a design overhaul for the Podcasts app and new extensions to feed specifications that would give publishers more control over how they can present episodes within RSS feeds. Among other things, publishers will now have the ability to bundle episodes by season and signal which episodes are actual content versus extras like trailers. Noted Apple writer Jason Snell has a good rundown on this over at his blog, and you can check out the spec document here. And as I mentioned last week, this is probably what the redesign looks like, courtesy of this Reddit thread. (Once again, your mileage may vary with sourcing Reddit.)
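
If you’re curious what those feed-level hints might look like in practice, here is a minimal sketch in Python that assembles a feed item carrying season and episode-type tags. The tag names (itunes:season, itunes:episodeType, with values like “full” and “trailer”) follow the extensions as announced; the surrounding structure is an illustrative assumption on my part, so treat the spec document linked above as the authoritative version.

```python
# Minimal sketch: building a feed <item> with the new season/episode-type hints.
# Tag names follow the announced spec extensions; everything else here is an
# illustrative assumption, not a verified template.
import xml.etree.ElementTree as ET

ITUNES_NS = "http://www.itunes.com/dtds/podcast-1.0.dtd"
ET.register_namespace("itunes", ITUNES_NS)

def make_item(title, season, episode_type="full"):
    """Build one feed item that tells apps which season it belongs to and
    whether it is regular content ("full") or an extra ("trailer", "bonus")."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, f"{{{ITUNES_NS}}}season").text = str(season)
    ET.SubElement(item, f"{{{ITUNES_NS}}}episodeType").text = episode_type
    return item

channel = ET.Element("channel")
channel.append(make_item("Season 2 Trailer", season=2, episode_type="trailer"))
channel.append(make_item("Season 2, Episode 1", season=2))
print(ET.tostring(channel, encoding="unicode"))
```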

But let’s get back to the analytics stuff. Since Friday’s announcement — which you can watch in full at this link, but only on the Safari browser, because Apple — there’s been a ton of writing appraising the matter, and in case you’d like a quick primer, I recommend this write-up by Recode’s Peter Kafka, which also contains screenshots of the upcoming analytics dashboard. (I’m going spelunking in some rabbit holes here, so a primer this is not.)

Here, we’ll attend to wonkier questions: What does this new analytics universe portend? How will the podcast business change? And who wins and who loses?

I wasn’t born a prophet, so I don’t know how exactly this will play out, but I do have some notes and assessments on a bunch of the key issues. This write-up is by no means comprehensive, and I’ll be exploring more questions in future issues as we deal with the consequences of these announcements. For now, let’s jump in, and we’ll move through a bunch of topics.

Just double-checking: Is this really a big deal?

Yep, I’m pretty certain it’s massive, but it’s worth weighing the counter-argument. Even if Apple serves a majority of all listeners, the argument goes, it doesn’t account for the whole listening universe, and as such there might be muted effects to how this ends up moving the way business is being done. I’m not sure I’d put much stock in that view: first, not only does most listening quantitatively happen on Apple, but the company is also qualitatively synonymous with the space. Second, there still doesn’t appear to be a strong alternative to Apple with a big enough consolidated market share that could meaningfully challenge (or avoid) the way Apple defines audience measurement. Which means that, in June 2017, it’s still feasible to think that whenever Apple says jump, most folks are still pretty much going to make like Durant.

How will the new analytics layer change the way we currently understand podcast audiences in the aggregate?

A couple of parts to this:

(1) Many believe that an ecosystem-wide audience resizing is in the cards. Because the vast majority of podcast audience appraisal is conducted based on downloads — and because we don’t actually know what happens to an episode after it’s downloaded — the way podcast audiences are represented, understood, and sold is almost certainly going to change. Just about everyone I spoke to frames this in terms of some form of downsizing, which makes intuitive sense, because there will always be some percentage of downloaded episodes that go unlistened to (with ads left unserved). But the positive spin I’m given is that this change nevertheless comes with a higher level of accountability, and the resulting trust from advertisers will likely lead to much greater gains over the long term.

As Matt Turck, Panoply’s chief revenue officer, puts it, “I’m assuming we will see listener numbers fall short of download numbers; however, the benefit to making analytics far less mysterious should vastly outweigh the concern.”

(2) That said, there remains the possibility that the new in-episode analytics layer might reveal inconvenient truths about audience behavior. I’ve been told there are a few non-Apple tools and platforms (like Spotify and some third-party listening apps) with in-episode analytics already in the market, and while they only support a minority share of listening, the consumption data they’ve been collecting suggests there’s nothing especially revolutionary hiding in those new numbers.

Aaron Lammer, of Longform and Stoner, is among the skeptics. “I would push back against the idea that there is some great insight lurking in these analytics,” he said when we chatted over Twitter. “As people who’ve set up elaborate app-based analytics hooks where you can track everything will tell you — there isn’t that much interesting… I’d rather look [at] it as standardization rather than revolutionary shift.”

That point on standardization, I think, is really important to file away in your head.

(3) Bryan Moffett, the COO of National Public Media, made a good observation on how the proliferation of dynamic ad insertion technology might mean the transition to an in-episode analytics world would still contain tricky imprecision.

To quote him in full:

A dynamic ad server will serve up many different versions of a single episode. They could vary in length by a few minutes or even more. For example, if one user gets an episode of TED Radio Hour with four dynamic :30 sponsorships and a :30 promotion block in its hour of content, but another user for some reason gets the same episode with just two :30 sponsors, the length difference is over a minute and the content is not aligned minute by minute for each episode.

Apple’s analytics rolls up all listening to a given episode and averages, so there is bound to be some imprecision. It’s not a lot, and it’s certainly a better world than the one we live in now.
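
To make that concrete, here’s a small illustrative sketch (the durations are borrowed from Moffett’s example, but the pre-roll setup and the arithmetic are my own simplifying assumptions, not a description of how Apple actually computes anything): two dynamically assembled versions of the same episode put the underlying content at different timestamps, so rolled-up, averaged listening data can’t line up minute by minute.

```python
# Illustrative only: two dynamically assembled versions of the same episode,
# simplified so all the ads run as a single pre-roll block. Made-up framing,
# not Apple's methodology.

AD_BLOCK_A = 2.5   # four :30 sponsorships plus a :30 promo block, in minutes
AD_BLOCK_B = 1.0   # just two :30 sponsorships, in minutes

def content_minute_at(playback_minute, ad_block):
    """Which minute of the underlying show is playing at a given playback time
    (None while the ad block is still running)."""
    if playback_minute < ad_block:
        return None
    return playback_minute - ad_block

# At the 10-minute mark of playback, the two listeners are hearing
# different parts of the show:
print(content_minute_at(10.0, AD_BLOCK_A))  # 7.5
print(content_minute_at(10.0, AD_BLOCK_B))  # 9.0

# So a per-episode average of "listening at minute 10" mixes content minute
# 7.5 with content minute 9.0: the rolled-up curve smears which material
# (and which ad positions) was actually heard. Small, but real, imprecision.
```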

It’s never easy shifting gears.

How will the podcast business be affected?

Time will tell, obviously. But here’s the range of the thinking out there:

(1) As I mentioned, there is a sense from some bigger publishers that this new analytics layer will finally allow them to kick open conversations that may meaningfully unlock long-coveted brand advertising dollars. Unlike direct response advertisers, whose intended outcomes (and measurement methodologies) also revolve around conversions off promo codes, brand advertisers are generally thought to require a higher level of trust in the impressions being reported back to them. Podcasting’s black-box download-oriented measurement universe has long been described as the primary hurdle preventing brand advertisers from allocating more dollars to the medium, and it is believed that Apple’s in-episode analytics are a significant first step forward in opening up conversations between brand advertisers and podcast publishers across the system (conversations that have to do with perception as much as actualities).

(2) But how does this development affect the direct response side of the podcast advertising business? There’s a general belief among the folks I’ve talked to that direct response advertisers, or performance-based advertisers, will likely be stable, though there appears to be suspicion that the new analytics layer presents yet another horizon of opportunities for those advertisers and their respective agencies to haggle more over prices. I’m also being told that there are expectations of some oncoming turbulence/fluctuations in price points, as those advertisers go through the process of figuring out how to integrate this new data layer into their current practices.

(3) There are two versions of the apocalyptic view on the business end. The first takes the shape of some worries about ad-skipping, and what the new analytics layer is going to reveal about the extent of this behavior. (For more background on this, read this Wall Street Journal story from last summer.) The end-times scenario is said to be one where it’s discovered that podcast ads are skipped over at such a volume and intensity as to kill their value. On this front, the responses seem to generally track along the built-in split between brand advertising and performance-based advertising; there is a sense that, even if there is a problem, it would mostly affect the former, while the latter would remain somewhat stable, because conversions are still taken to be more important than impressions. Again, the positive spin I’m served ties back to a sense of greater accountability that the new analytics layer brings into publisher-advertiser interactions: we’ll know who is actually providing value to advertisers, and we’ll know who isn’t doing so as much. As Midroll chief revenue officer Lex Friedman said, “Podcasters who are confident that people are listening to their ads should be very happy about this.”

The second apocalyptic argument presents a scenario where podcast CPMs plummet, ultimately leading to the collapse of the market. This view generally draws on a parallel between podcasts and what happened to blogs once the format started experiencing waves of ad tech development. Personally, I can’t quite see the specifics of how this move by Apple could bring those dynamics to podcasting just yet. My understanding of the plummeting blog CPMs pegs the phenomenon to the continuous structural devaluing of blog advertising real estate brought on by emerging ad technologies that gave advertisers (and ad tech companies) unchecked leverage. And while I think the broader risk of podcasts possibly going down the road of blogs is absolutely real, I don’t have a sense that this new analytics layer alone automatically leads to a devaluing of podcast advertising real estate. If anything, Acast’s recent rollout of a programmatic podcast advertising product is more likely to incur those types of effects, should the tool ever get traction — this development from Apple strikes me as a step forward that’s small enough to stop short of those effects.

Who wins, who loses?

(1) Obviously, publishers who have made a practice of inflating download numbers will get checked — though the counterargument that all metrics, without active third-party verification, can be gamed over time is certainly a prudent one.

(2) An argument can be made that this system-wide shift to a new analytics standard would usher in a weeding-out period. Podcasts delivering strong ad value will get additional data to strengthen their appeal for more advertising dollars, and podcasts not doing so will be flushed out of the ad market. It would mean that high-performing podcasts would be in a better position to extract more value, while not-so-high-performing podcasts would have a harder time accessing advertising dollars.

(3) It should be considered that whatever audience readjustments happen will probably disproportionately and negatively impact smaller podcasters’ ability to derive advertising revenue. Which is to say, just as every publisher will experience the turbulence of discovering that its meaningful listening audience is probably smaller than its download numbers, smaller podcasts will be whipped around harder, and in some (if not most) cases, that could lead to those shows falling beneath a certain threshold for advertising consideration. That’s bad for podcasts with already relatively small but meaningfully engaged audiences. In these cases, there are presumably two available moves: first, lean deeper into a niche that maintains a specific appeal for relevant advertisers, and second, pursue other non-advertising revenue streams.

I suppose, generally speaking, it’s worth keeping in mind that advertisers need to be served value too, and also, advertising isn’t necessarily the only business model available to publishers.

Content considerations. Metrics and measurements have long informed the way programs are created, and we should probably expect to see the dynamic express itself further with the new analytics layer. A couple of threads to consider:

(1) Knowing just how much of an episode is being listened to creates a much better feedback loop for improving not just editorial products, but also advertising products. There is also the likely effect that we’ll see new formats, genres, and show structures blossom as creators play toward what the new metrics tell us.

(2) On the flip side, there should also be room for the more general worry that we’re sliding into a world where metrics outweigh creative decisions. I think there’s always room for that concern, regardless of whatever metrics are available — there will, to some extent, always be operators looking to play to the numbers rather than actually use the numbers to make better work.

(3) I’m pretty drawn to the question, raised here on Twitter by The Atlantic’s Alexis Madrigal, of whether increased data granularity within a medium would lead to the detriment of experimentation within that medium. Instinctively, I feel as if there is some truth to this, but I also suspect experimentation has less to do with the available metric universe and more to do with the ways in which compensation is structured off those metrics. (A quick tangent: I also find myself wondering how “experimental” material is defined; personally, I tend to grade experimental-ness relative to however the medium currently behaves, and think experimental programming will exist in any format regardless of where it is in its life cycle. I think the more interesting question here is about the conditions under which “experimentation” can exist within high-budget and high-scale productions.)

I’m not even close to being done, but I’ll leave it here for now. Obviously, this enormous and complex development contains many, many layers, and I’ll continue to dig around and write about them in future issues. (I mean, that’s why Hot Pod exists, right?)

Here are some of the questions I’ll be thinking about:

  • To what extent will podcasting go down the road of blogs, and what does that even mean? And should podcasting end up experiencing those same dynamics, what are the differences based on audio as a media format?
  • How will the podcast industry change? Will the professionalizing publishers benefit as they hoped for? What will happen to smaller and indie podcasters?
  • How will podcasting change for audiences?
  • Will we see the industry create more jobs for producers, developers, and assorted media folk?
  • How will the development impact what I’ve described as the bifurcation of the space, with podcasts as extension-of-blogging on one side and podcasts as extension-of-radio on the other?

As for my own normative view on all of this, I’m still figuring it out. I do think that the podcast industry is indeed still comparatively tiny, as Recode’s Peter Kafka points out, with podcast ad spending projected to only be about $250 million this year. While it’s growing at a solid and steady rate, it’s still peanuts compared to where radio (about $14.1 billion) is today, and there’s more to be gained and lost from changing how business is being done today. And like Kafka, I do think change was going to happen no matter what.

Also, as I mentioned on Twitter, I find myself skeptical about the nostalgia and privileging of the status quo. But that’s a story for another day.

Roman Mars, Esquire. New Hampshire Public Radio’s Civics 101 has some new competition in the form of a somewhat surprising side project from the 99% Invisible chief: “What Trump Can Teach Us About Con Law” is an explainer podcast that features Mars being taught the basics of constitutional law by UC Davis professor Elizabeth Joh based on ongoing developments in the current iteration of the White House. I’m told that the podcast is officially produced under the Radiotopia banner, which brings the number of Radiotopians with two podcasts up to two (the other is Hrishikesh Hirway, who makes both Song Exploder and the West Wing Weekly for the indie podcast collective). Mars’ new podcast comes mere days before the launch of another new Radiotopia podcast, Ear Hustle. That’s scheduled to roll out later this week.

Career spotlight. Spend enough time in the New York podcast scene — or any major city with a podcast scene, really — and you’re bound to bump into someone who came up through WNYC, which was once the city’s only major institution dealing with narrative radio. In this week’s Career Spotlight, we’re bumping into Leital Molad, who currently leads podcast development for the Pierre Omidyar-backed First Look Media.

Hot Pod: What do you do?
Leital Molad: I’m the executive producer of podcasts at First Look Media. In a nutshell, I develop and produce podcasts for The Intercept (First Look’s investigative news site) and Topic (our entertainment studio). Right now we have two podcasts in production, Politically ReActive and Intercepted. I oversee those shows week to week, working with the producers, giving editorial notes, and liaising with our business team on the marketing side. The other big part of my job is taking pitches for new shows, creating pilots, and bringing projects to launch. Since I got to First Look last October, we launched three shows: Maeve in America, Intercepted and Missing Richard Simmons.
HP: Where did you start, and how did you end up in this position?
Molad: I started as an intern at WNYC in 2000. The next year I got a full time job as a production assistant for Studio 360 with Kurt Andersen, and spent the next 15 years working on that show, ultimately running it as senior producer. My last year at WNYC I launched and EP’ed a health podcast, Only Human. I started thinking about my next career move and figured that this podcast renaissance was a great time to break out of my cozy public radio cocoon and try something new. So I took the leap and went to First Look — a media startup that was just getting into podcasting.
HP: How did you learn to do the job?
Molad: WNYC was an amazing place to learn everything I know about radio and audio. I got to wear many hats, ranging from basic show production — booking guests, writing scripts, cutting tape — to reporting my own stories, producing documentaries, and running live events. And I learned a ton about launching new shows after working on Only Human, which has been very helpful in my new job. Also, having been in the trenches with audio production (which I love), I can be a better manager of producers and engineers. Getting new shows off the ground at a startup often means being able to jump in on production when needed, and that’s been invaluable.
HP: When you started out, what did you think you wanted to do?
Molad: After college, I didn’t land on what I wanted to do until I was brainstorming with a family friend who offered to help with some career advice. He asked me, “If you could have anyone’s job, who would it be?” Right away I said, “Terry Gross.” He said, “Well, that’s what you need to do!” I had been a DJ at my college station and an avid listener of public radio, and those two things just clicked. I wasn’t sure how to become the next Terry Gross; eventually I figured I should go to journalism school. So I came to New York for grad school at NYU, and then, very luckily, landed the internship at Studio 360. My dream of hosting evolved into an appreciation and desire for producing, which I fell in love with.  Maybe I’ll still host a show some day, we’ll see!  (You know, they say anyone can start a podcast with a laptop and a microphone…)

Molad adds that she’s on the lookout for more female voices, and that interested parties should get in touch. You can find Leital on Twitter at @leitalm.

Bites

  • ESPN has rolled out the podcast feed for its upcoming 30 for 30 audio adaptation. The first episode is set to drop on June 27. (website)
  • Malcolm Gladwell’s Revisionist History is coming back on Thursday. (NY Times)
  • WBUR is launching a storytelling podcast aimed at kids. (WBUR)
  • Looks like the Chapo Trap House team has bagged themselves a book deal with the Simon & Schuster imprint Touchstone Books. On a related note, I’m hearing that the podcast channel is increasingly fruitful prospecting ground for book publishers. (Twitter)

When certainties fade: The changing state of academic research into the changing world of news

Innovation everywhere. Innovation in the news business. Innovation in social media. Innovation (and creative destruction!) in presidential political communication. Innovation in the topics and methods of scholarly research. Innovation as a keyword and a buzzword. Innovation as an ideology and a sign of the times.

Things are different, to put it mildly, than they used to be. When we talked over lunch at a 2013 symposium on “Data Crunched Democracy,” organized by Daniel Kreiss and Joseph Turow at the University of Pennsylvania, we could not help but shake our heads at how much had changed since we had started our research into news and journalism. We marveled at how a domain of inquiry that until recently was seen as a somewhat specialized area within the larger field of communication was generating an unprecedented amount of scholarship. All the while, the questions, theories, and methods for studying journalism were also changing, spurred in part by the challenge of the evolving news environment. Yet the frantic pace of knowledge production had somewhat prevented scholars from engaging in a collective process of sensemaking about what had been accomplished and what might lie ahead.

Four years later — while much in academia has changed, what has barely been altered is the time it takes to get a book conceived, written, revised, edited, and printed! — we edited Remaking the News, a book that tries to make sense of the past couple of decades of journalism scholarship and imagine new pathways moving forward. We approached some of the most accomplished people we knew who were researching news and asked each of them to write an essay about an aspect of the changes in journalism and the new scholarly opportunities afforded by these transformations. We also asked them to reflect on why their arguments mattered to news professionals, scholars, and the public at large. In this article, we share three key lessons we learned as a result of this four-year journey:

  • Alternative modes of telling the story often afford novel arguments while rekindling the passion for the craft.
  • Diversity and conflict are a source of strength and innovation for both newspeople and researchers.
  • Nostalgia, in either journalism or the academy, is not productive; the present moment is ripe for reflecting on the past as a way to imagine new futures.

Expanding the storytelling toolkit

For over a decade, media organizations have been experimenting with alternative modes of presenting information and telling stories. From The New York Times’s exemplary “Snow Fall” to Politico’s recent article on media bubbles, taking advantage of the resources available in the digital environment has become a mantra of the news business. It has pushed journalism in some great directions.

One thing that we learned in the process of putting together Remaking the News is that scholars also ought to find new ways to present information and make a case. In particular, we discovered the renewed potential of the essay format that this edited volume embraces. We do not propose that this become the default genre for scholarly communication. But we found out that it fostered intellectual creativity and joy in ways that we do not normally see in the process of writing the dominant genre, namely the journal article.

These types of articles are to academics what the straight news format is to journalists: effective, easy-to-write templates that convey the essence of complex arguments to audiences increasingly swamped with information. But like all good formulas, they run the risk of becoming, well, formulaic, and sapping creativity and enjoyment from the craft. They can become, to use an exercise analogy, the treadmill option for runners.

Living in Brooklyn and Evanston, we are both familiar with the pleasures of winter, and know all too well that during the colder months, in order to stay in shape, you have to take the running inside, into the gym and onto the treadmill. Writing a peer-reviewed journal article, in our experience, has increasingly become the treadmill running of scholarly writing. It is necessary, practical, beneficial, generates valuable information exchange, and often invites a form of argumentation that serves the process of analysis well. Not doing it would leave you incapable of getting off the couch once winter has drawn to an end.

But it is often overdone. The corporatization of the academy, like the increased bottom-line concerns in the news business, has led to an ever-expanding pressure to publish larger and larger numbers of articles. New journals pop up from one season to the next like wild mushrooms in the forest, and the existing ones move from publishing four times a year to doing it eight times a year. Concurrently, search and promotion committees expect longer lists of publications from scholars. All of this has turned a whole lot of academic life into what Dean Starkman called, referring to the news business, the hamster wheel: It keeps you in shape but takes the fun out of exercising the mind.

Which is why, in part, in the process of editing Remaking the News, we found that writing an essay has become more like a long run through the woods, particularly one you take in the sun on one of the first days of spring. Without getting too maudlin about it, we discovered that by virtue of its fewer genre constraints and its implicit openness, essay writing clears the head, generates creative new approaches to old problems, and gives authors the freedom to draw on our earlier exercise regimen — that is, the journal articles that have been put through their disciplinary paces — in order to push scholarship in new directions. As editors, it was remarkable to see the level of enthusiasm, commitment, and risk-taking among our authors — something which is quite different from what we experience and hear about the journal publishing process.

Just as journalists are embracing new ways of telling the story, then, we encourage academics to think about new ways of making a case and communicating their ideas. We encourage hiring and promotion committees to adapt their practices accordingly. The digital age has seen an explosion in different communication modalities and platforms. Much of this work goes beyond the essay format, of course, ranging from social media writing to the interactive visualization work increasingly common in the digital humanities. We would like to see more of all of it.

These alternatives should not be seen as subservient to the journal article genre — in the same way that interactive storytelling is not subservient to straight-news, inverted pyramid storytelling. We are not saying that academics ought to dispense with their treadmill workouts…er, with their journal articles. But we are saying that it is important to take alternative modes of communication seriously and value their contributions in their own right. Different forms of academic, and journalistic, writing complement each other in unique and productive ways. There is much intellectual creativity and personal engagement that can arise from expanding the storytelling toolkit.

Embracing diversity

We live in diverse societies and therefore conflict is to a certain extent unavoidable. This applies to both the academy and journalism. Reporters and editors routinely choose between different stories. Even within a single story, they often hear different sides of it; sometimes the versions are complementary, while other times they can be polar opposites.

Research on newsmaking conducted since the 1970s has documented that, to cope with this diversity, journalists tend to privilege certain stories over others, as well as certain sources and accounts within an article. Social scientists are no different: We have our preferred topics, theories, and methods. We sometimes accept alternative approaches as equally productive, but on other occasions think ours is the best and even that the alternatives are plainly wrong.

To counter the shortcomings of a tendency to narrow down diversity that he observed in his landmark studies of news work, Herbert Gans proposed in 1979 the notion of “multiperspectivism.” Gans offered a very concrete set of proposals back then, and he updated them in a thoughtful essay published in 2011. But beyond the specifics of both texts, Gans’ idea is that journalists would do well to adopt an orientation toward broadening the set of topics and voices represented in the news. By implication, this also meant housing competing viewpoints within the news report in an inclusive rather than agonistic manner. In our approach to the volume, we were inspired by the notion of multiperspectivism and tried to include a broad spectrum of intellectual orientations. We also saw any conflicts and disagreements that might arise as a potential source of intellectual innovation.

Two areas of diversity and disagreement in the book are worth highlighting for this article. The first one has to do with the tensions between disciplinary and interdisciplinary approaches to the study of news — this is mirrored, to a certain degree, in the tensions between journalists and technologists in contemporary newsmaking. The second is between knowledge generated primarily with applied goals in mind, or mainly for scholarly purposes.

Regarding the tension between “disciplinary” and “interdisciplinary” approaches, scholars in the first group (to generalize broadly) often frame their intellectual arguments in relationship to other pieces of scholarship that also focus on journalism. They often attempt to generalize about their findings in ways that allow them to build a common theoretical apparatus and advance the state of knowledge about the news. These scholars are building a discipline while making knowledge; thus they have an investment in the institutional vitality of the news media as a source of legitimation of their scholarly enterprise.

The second group of scholars in the book, conversely, seemed more interested in “studies of journalism” rather than in journalism studies. These writers usually framed their journalism research as a case of something else — new media, political communication, cultural studies, and so on. Often, the chapters addressed other literatures as much as they addressed scholarship on the news. They also tended to include arguments for outward disciplinary connections rather than inward disciplinary growth, using journalism as a way of shedding light on cross-cutting social processes and phenomena.

Instead of fostering confrontation or falling into the trap of adjudication, we favored a stance of welcoming these diverse approaches. We tried to make visible their different assumptions and fostered productive conversations among the various perspectives. In the academy as much as in journalism, the goal of multiperspectivism is to turn what David Stark has called “creative friction” into new ways of seeing the world.

A second area of diversity and conflict present in the volume is between scholars who produce action-oriented media research and thinkers who conduct what some philosophers of science call “basic research.” This area cuts across the professional worlds of academics and journalists, since the former type of research is sometimes done either in part to engage professionals or wholly within industry and think tanks. It is also an old area of disagreement among both social scientists and journalists.

In our book, it is addressed primarily in the chapters by Talia Stroud and Matt Hindman. Stroud focuses on the distinction between studies that help the bottom line and those that help the quality of democratic life. She argues that the tension between “democratically-useful and industry-useful research is often overdrawn, and even when it exists, that this conflict can be productive,” and concludes by offering alternatives that satisfy both research aims. In a related vein, rather than bemoaning journalists’ use of reader metrics or claiming that this use somehow debases or diminishes journalism, Hindman accepts metric deployment as a given and tries to discover an ethical use for them. Both Hindman and Stroud problematize critical and practical approaches to the study of news, thereby showing how diversity becomes a source of conceptual innovation.

Dispensing with nostalgia

Social scientists, like journalists, are in the business of sensemaking: finding out information about important phenomena and accounting for what happens in ways that are truthful and relevant to our publics. Academics, unlike journalists, study these phenomena but also build theories trying to find the logic behind them. The topics we choose and how we explain them tend to be shaped by the times we live in. So during the third quarter of the 20th century, when the industrialized mass media system was at its peak, scholars focused on issues such as the ability of the press to tell citizens which news stories to talk about, and the commingling of mass and interpersonal communication in shaping the effects of media on society. What emerged from that scholarly focus were both knowledge about media, culture, and politics and theoretical notions like agenda setting and the two-step model of influence.

In the social sciences, theories tend to have an inertia of their own by helping frame the process of inquiry long after the historical conditions that led to their development change. During periods of historical discontinuity, and especially at the beginning of them, this leads to a nostalgic reflex that is both scholarly and normative: The new phenomena are made sense of with theoretical approaches from the past — they are the only ones we have at our disposal at the time, after all — and their implications are assessed, often negatively, in comparison to what was the norm before. Thus a sizeable portion of the scholarship on online news has applied notions like agenda setting and the two-step model to the current environment and has found that it is much more difficult for the press to set the agenda now than before, and that the ascent of social media to the pinnacle of power in the new media ecology has added layers of complexity to the relatively simple two-stage process of influence. This has been tied to common normative assessments yearning for the glorious Watergate days, when the press could supposedly focus people’s attention on what was important and Facebook and Twitter did not pollute the public sphere with a tsunami of fake news.

The problem with this kind of nostalgic stance is that it obliterates both theoretical imagination and practical possibilities. Overcoming nostalgia does not mean doing away with the conceptual tools and normative ideals of the past. It means not taking them for granted, and instead revisiting them in ways that do justice to the unique characteristics and potentials of the contemporary moment. For instance, how does the fact that most people access digital news from social media platforms and search engines affect the power of agenda setting by news organizations? Does the rise of personal publics on Facebook, Instagram and Twitter affect the influence exerted by co-located interpersonal networks and, if so, shouldn’t we think about a three-step flow, instead of the two-step process outlined by Katz and Lazarsfeld 60 years ago?

Yes, the golden days of the industrialized mass media system played a part in Watergate. But would that system have contributed to the Black Lives Matter movement with the same efficacy that the use of social media platforms by activists and the public at large did? And while it is possible that the contemporary mix of news and social media contributed to a rise in the volume of false information during the 2016 electoral cycle in the United States, this mix has also been credited with helping to loosen oppressive information regimes, as in the case of the Arab Spring. We need to assess both sides of the coin concurrently.

Nostalgia provides reassurance and self-gratification, but it is also intellectually and socially stultifying. It is time to move on and make sense of the present by learning from history, not by clinging to it, in order to help shape more productive futures.

When certainties fade

If there is a common thread that cuts across these lessons about the value of diversity, the vitality of expanded storytelling options, and the importance of dispensing with a nostalgic stance, it is that they all challenge the certainties associated with homogeneous viewpoints, writing genres, explanatory models, and normative ideals. There is nothing inherently wrong with certainty; it can be quite productive, in particular during a period of historical stability.

But, going back to the opening of this article, the contemporary context is marked by rapid and widespread innovation, including in the research about, and practice of, journalism. In the words that Michel Foucault penned for The Order of Things and that anchored the introduction of our volume, this context “restor[es] to our silent and apparently immobile soil its rifts, its instability, its flaws; and it is the same ground that is once more stirring under our feet.” This feeling of great transformations can be unsettling and paralyzing, yet also exhilarating and liberating.

Above all, it reminds us that we are in the driver’s seat, and that perhaps we might not have the luxury of relying a whole lot on the routines and institutions that served us so well during the second half of the twentieth century. A renewed sense of agency might actually be the ultimate beauty of writing about our digital age.

C.W. Anderson is an associate professor at the College of Staten Island (CUNY) and the CUNY Graduate Center. Pablo J. Boczkowski is a professor in the School of Communication at Northwestern University.

Photo by HAT Triathlon used under a Creative Commons license.

Membership programs are paying off for news outlets — and so is helping them set up their programs

If you want readers to donate, you have to ask — often. It sounds obvious, but it’s a strategy many news organizations have been forced to become more comfortable with, and one that takes a lot of resources to really get right.

Before Hawaii’s Honolulu Civil Beat went nonprofit in late June of last year, it was charging $4.99 a month for access to its paywalled site, already significantly lowered from the $19.99 price point it tried out at launch. It had 1,100 recurring subscribers.

Since going nonprofit and starting a membership drive, average recurring monthly donations to the Pierre Omidyar-backed online news site rose to $12 — that’s $144 a year versus $60, even with all the stories free to read. It now has around 1,500 members.

“We haven’t reached our ceiling in terms of the number of donors, which continues to go up month after month,” Ben Nishimoto, director of philanthropy for the Civil Beat, said. “We’re seeing all the metrics of a healthy membership program, and a lot of that has to do with the structure and advice that the News Revenue Hub has offered.”

Civil Beat was among the first five organizations to join the News Revenue Hub, which for a fee takes on the heavy lifting of setting up membership programs — the software, the recruitment and retention, the messaging and maintenance — and facilitates an exchange of insights among participating outlets. The Hub, the brainchild of Mary Walter-Brown, first began last fall as an initiative of the nonprofit news site Voice of San Diego, an exemplar of sustainable digital news membership.

After helping the five pilot news organizations together raise more than a million dollars in half a year, News Revenue Hub has spun off into its own standalone organization, led by Walter-Brown (who’s now its CEO) with digital manager Tristan Loper (also previously of Voice of San Diego). It launched at a time when many news organizations were trying to rethink their mission and messaging post-U.S. election. Walter-Brown is now hoping to continue its early successes, adding five new organizations. (We’ve written separately about The Marshall Project and The Intercept’s challenges; the full list: InsideClimate News, NJ Spotlight, Honolulu Civil Beat, The Lens, PolitiFact, The Marshall Project, The Intercept, CalMatters, Youth Radio, and Rivard Report.) The Democracy Fund will continue to support some of the overhead costs for participating outlets.

“It’s been great from a couple of perspectives: The first, basically learning how to do this kind of fundraising, and then also, realizing there’s a definite benefit to asking often of people, more often than I think as organizations we previously felt comfortable doing, and that’s been gratifying,” Beth Daley, director of strategic development at InsideClimate News, said. “The support we have from individual members is dwarfed by support we get from foundations, for instance, but foundations also like to see that we’re diversifying our funding.” (In addition to money from memberships, InsideClimate News is also looking to raise $25,000 from readers to fund one reporting trip exploring the impacts of climate change across the U.S.)

“At first, I thought, gosh, I hope this wasn’t just this weird enigma. It’s starting to become more predictable, which is what I’m excited about,” Walter-Brown said. “We’re starting to see that membership can work for a PolitiFact, an InsideClimate News — organizations with national and global readers — that it can work for The Lens, NJ Spotlight, Civil Beat. And we’re now seeing the same thing for The Intercept — an international site focused on privacy and surveillance issues — when we weren’t sure how that was going to resonate with readers.” There’s already a bit of a waiting list of organizations eager to join, Walter-Brown said, and the Hub is looking to bring several more into the fold this year.

“We evaluate each client based on whether they have a base big enough and diverse enough to make it worth their while, whether they have the internal staff willing to dedicate the amount of time and energy to this that’s needed, whether they have the buy-in from the top, whether they support the principle of building a true relationship with your audience,” Walter-Brown said. “It’s clear when you go through these conversations. There are some where I just say, ‘You’re not ready, but here are some tools you can use to build your audience, and let’s talk in six months.’ We don’t turn people away with a blanket ‘no.’”

While most of the organizations currently in the Hub are nonprofits, it’s not a requirement: PolitiFact, one of the five original pilot outlets, isn’t a nonprofit. It launched its program just before Donald Trump’s inauguration in January, and has since raised $200,000 in contributions and pledged donations. Three quarters of its members are part of the “informed” or “involved” tiers ($50 to $150 and $151-$500); only “a few dozen individuals have contributed $500 or more,” according to Emily Wilkinson, PolitiFact’s then-business development director.

Nor, of course, is a large national following a requirement. NJ Spotlight has added 470 members since it started its membership program, raising $86,000, most of it at the “engaged” or “informed” levels (a minimum of $35 up to $100; then $101 to $500), Paula Saha, who oversees events, audience, and donor development at the New Jersey state-focused nonprofit news service, told me.

“Most of them came in during our winter drive, but we’ve had a steady trickle since,” Saha said, crediting a well-tailored email campaign that starts with soft asks that get stronger as readers become increasingly familiar with the work NJ Spotlight does. “It’s been really nice to see the steady trickle; with recurring donors, it’s obviously a gift that keeps on giving, quite literally.” NJ Spotlight has worked to impress upon readers the value of memberships: happy hours, coffees, an intimate event for members with Senator Cory Booker (from the dedicated Slack group for News Revenue Hub organizations, it’s also gotten some ideas for events like trivia nights).

The News Revenue Hub, especially by facilitating technical setup and helping organizations understand and use better metrics (syncing Eventbrite with Salesforce, for instance), has freed up outlets to actually get to know their most dedicated readers. Honolulu Civil Beat has been able to host regular community events, such as taking groups out to neighboring islands or continuing its storyteller series, Mariko Chang, the Civil Beat’s membership and events manager, told me.

“Ben [Nishimoto] and I try to call all of our new donors as well, which takes people by surprise,” she said. “We let them know they can come to us if they have concerns. We ask them ways we can do our jobs better. It’s been helpful to have the time and resources now to make those personal connections while we can.”

The Hub itself will keep a small staff through the rest of 2017, but it receives additional foundation support and is looking to raise another half a million dollars to help with expansion.

“We’re hoping to be able to scale accordingly — eventually there may be different tiers of service, maybe a full-service option where they only need someone part-time and we’re providing more of the copywriting and execution. Then others may want to bring on a full staff like we had at Voice of San Diego, with an events manager, a digital manager,” Walter-Brown said. “I’m excited to explore what the service will look like in the long term, whether it’s an incubator for some organizations, a centralized place where they outsource tasks, for organizations that only want to focus on editorial.”

Chairs of different colors, by Steve, used under a Creative Commons license.

The Wall Street Journal is killing its What’s News app (but bringing lessons from it to its main app)

The Wall Street Journal on Thursday said it was shutting down its standalone What’s News digest app — one of the few survivors of a period when top publishers were launching secondary mobile apps aimed at reaching different audiences and incubating innovations harder to execute behind the outlet’s primary homescreen icon. The Journal is currently in the process of revamping its main news app, and it plans to introduce features it developed for What’s News into the main app.

The What’s News app — named for the Journal’s daily front-page briefs — launched in the summer of 2015 as the paper’s first mobile-only product. The app features a swipe-heavy design with a select set of 10 news stories at a time (plus some opinion). It’s updated regularly throughout each weekday, puts stories in quick summary form, uses custom headlines distinct from those on WSJ.com, and allows users to follow specific news topics. Access to the app was included as part of a subscription to the Journal. The Journal said the app had been downloaded more than 110,000 times; it will cease publishing on June 30.

Prior to the app’s launch, deputy editor-in-chief Matt Murray told my colleague Shan Wang that the What’s News app was the result of a concerted effort from the Journal’s news desk to become mobile-first.

“We were simply doing what all journalists are now doing, which is thinking about digital journalism, what our readers want, and how you experience news on your phone,” he said at the time. “Somewhere we made the connection to the news digest already in our papers, What’s News.”

Now the Journal is incorporating those lessons into its main app as part of a larger overhaul. In an interview earlier this month, before the paper announced its plans to shut the What’s News app, mobile editor Phil Izzo said it was looking toward the What’s News app for inspiration as the Journal thought about introducing more flexible ways to indicate story hierarchy and package stories in the app.

Other news organizations, such as The New York Times, also introduced secondary news apps only to pare them back. (It’s a common strategy in businesses seeking to stoke innovation — separate and reintegrate.) In 2014, the Times launched the millennial-seeking NYT Now and a standalone Opinion app. It quickly shuttered the Opinion app before moving NYT Now to a free model, eventually eliminating it altogether last summer.

The Times of London also last year closed its secondary app aimed at international audiences after only 10 months of operation. The Washington Post still maintains two separate mobile apps (one “Classic” app, with the usual list of headlines, and one with a more swipe-friendly, forward-looking interface).

The Journal still operates a handful of other standalone apps, including the WSJ Live video app (though it hasn’t been updated since 2015) and WSJ City, an app that shares the same design as What’s News, but exclusively covers London-based business news.

But the Journal’s core focus now is its main mobile app — starting with iOS. Earlier this month, the Journal introduced rich push alerts and added the functionality to follow specific reporters in the app and receive a push notification every time they publish a story — useful for readers who want to pay close attention to a reporter covering their industry. (The paper had introduced a following feature — for topics and companies, not reporters — in the What’s News app last year. The Journal said those who used it spent 20 percent more time in the app than those who didn’t.)

In addition to redesigning the main feed to add more flexibility, the Journal would like to add increased personalization to the app, product director Jordan Sudy said.

“It’s already personalized with content that you save and the push alerts you’re receiving by following certain authors, but we want to be able to actually have some sort of feed or what have you in the app that will surface that content to you in a digestible way,” Sudy said. “Everybody sees the content that’s been chosen by the editors, but [we want to] also make the app for you — but not doing it in a binary way. Right now, it’s all the same app for everybody — the Times sort of does the same thing — or you have these aggregators where it’s the same app for everybody, but aggregated personalized content. We want to make sure we do both.”

The Journal’s personalization capabilities aren’t quite there yet to enable that, but Izzo said it is in the process of laying the groundwork by introducing better metadata through improved article tagging.

While previous redesigns were introduced as big wholesale changes (I wrote about WSJ.com’s 2015 redesign, for example), the Journal is now focusing on a more iterative process that will see lots of smaller incremental changes, Izzo said. He declined to provide a timeline for when the Journal would introduce additional features or when its current cycle would finish.

“We’re thinking of multiple iterations for as long as the phone is the primary delivery system for news, and then whatever comes next, then that’s going to be the thing that we’re thinking of,” Izzo said. “The whole point of making it an iterative process is that we don’t just focus on this intensely for a year and then we go back to doing something else. That’s going to create the same problem we had in the past. What we’re trying to do is set up a place where we can make changes. We’re never going to be a tech company. We’re never going to be Google or Facebook. But what we can do is have more control over our product and more control over what we put out.”

The scariest chart in Mary Meeker’s slide deck for newspapers has gotten even a smidge scarier

It’s an annual moment of print realism here at Nieman Lab: The posting of the attention/advertising slide from Mary Meeker’s state-of-the-Internet slide deck. It’s enough of a tradition that I can now copy-and-paste from multiple versions of this post. Here’s a sentence from the 2013 version:

For those who don’t know it, Meeker — formerly of Morgan Stanley, at VC firm Kleiner Perkins since late 2010 — each year produces a curated set of data reflecting what she sees as the major trends in Internet usage and growth. It may be the only slide deck that qualifies as an event unto itself.

And a chunk from the 2014 version:

What’s useful about Meeker’s deck is that its core data serves as a punctuation mark on some big, ongoing trends. The kind of trends we all know are happening, but whose annual rate of progress can be hard to judge. Like, say, the continued demise of print.

The Meeker slide that always interests me most is the one where she shows how American attention is divided among various forms of media — and how that division lines up with where advertising dollars go. How much of our attention goes to television, say, versus how much of our advertising goes there?

It’s not absolute dogma that the two — audience attention and advertising dollars — will always be equal. But it makes sense that they would tend toward parity. More people listening to the radio should lead to more companies advertising on the radio, or vice versa.

So let’s travel back in time. Here’s Meeker’s chart for 2011:

[Slide: Meeker’s share of time spent vs. share of ad spending by medium, 2011]

The two things that jump out at me: Print gets a lot more advertising than it gets attention. And mobile is the opposite. You’d think that would equalize with time.

Here’s 2012:

[Slide: Meeker’s share of time spent vs. share of ad spending by medium, 2012]

Equalization! Or at least the path to equalization, proportionately. Print loses attention, but loses ad dollars a bit more quickly; mobile gains attention, but gains ad dollars a bit more quickly. (Sizable margin of error here, it’s worth saying.)

Here’s 2013:

[Slide: Meeker’s share of time spent vs. share of ad spending by medium, 2013]

The print story remains the same: down in attention and in ad dollars. But note there is still a wide gap between the two — print still gets far more ad dollars than its hold on the American attention would seem to “deserve.”

Here’s 2014:

[Slide: Meeker’s share of time spent vs. share of ad spending by medium, 2014]

The mobile growth everyone anticipated is happening — moving from 4 percent to 8 percent in 12 months’ time. And print continues to lose both time spent and revenue.

Here’s 2015:

[Slide: Meeker’s share of time spent vs. share of ad spending by medium, 2015]

On the positive side, print’s share of attention remained steady at 4 percent. You’ll note, though, that when the numbers get that small, you’d need roughly a 25 percent decline in attention share to drop from 4 percent to 3 percent. So steady doesn’t necessarily mean steady — it just means a pace of decline less than that. (And of course we don’t know if that 4 percent is really 3.51 percent or 4.49 percent either.)

The ad-side trend, though, is unchanged — down another two points from 18 percent to 16 percent. And, of course, there’s still a long way to fall from there. Note, too, that mobile advertising had another huge jump, from 8 percent to 12 percent.

Finally, here’s the newest slide for 2016 (it’s slide 13):

Take a look at mobile! Up from 12 to 21 percent of ad revenue in one year. We now spend 40 percent more time looking at media on our phones than on our laptops and desktops. And there’s still plenty of room for growth, both in ad spend and in time spent. But given that the vast majority of new digital ad revenue goes to Google or Facebook — see Meeker’s slide 15 for more on that — that money isn’t exactly a boon to publishers.

But then there’s print — staying steady at 4 percent of time spent for the third straight year, but another big drop in ad dollars, from 16 to 12 percent. That lines up with the evidence that the decline in newspaper print advertising accelerated last year in a big way.

Let me wrap up by copying what I wrote four years ago, since the overarching trends haven’t really changed since then:

Print advertising is not coming back. It will fall further. Substantially further. All newspaper planning for the coming few years needs to reckon with that basic fact.

Mobile continues its rocket rise, and there’s still lots of room for ad revenue growth. And now it’s even eating away at the Great American Time Suck, television. Mobile is eating the world, and most news organizations make only a pittance off it.

Lots more interesting stuff in Meeker’s complete deck.

The Boston Globe is getting smarter about digital subscriptions — and tightening up its paywall

Earlier this month, BostonGlobe.com readers were surprised to find that a popular way of skirting the site’s paywall had been quietly closed. By visiting the site in their browser’s private mode, readers were able to circumvent the site’s free article limit, letting them read more articles than they would otherwise. And that wasn’t the only major change to the site’s paywall recently: At the end of April, the Globe also cut back on the number of articles it let visitors read for free every 45 days — from five articles to a mere two.

These changes, while jarring to many readers, are part of the Globe’s ongoing strategy to “strike the right balance between giving users the opportunity to sample content and getting them to subscribe,” said Peter Doucette, chief consumer revenue officer at Boston Globe Media. “We’ve been constantly experimenting with finding that balance, because fundamentally we believe the Globe’s journalism is worth paying for.” A lot of people agree: The Globe’s digital subscriber count currently sits at roughly 84,000, up from around 65,000 a year ago. That’s the most of any local newspaper in the country.

Doucette said the Globe is happy with the early results of its latest tweaks, and has “no specific plans to restrict the paywall further.” But further changes seem inevitable considering the many changes BostonGlobe.com has gone through over the years in an effort to get more people to pay. The site debuted with a hard paywall in 2011, targeting its most committed, regular web readers and offering its print subscribers an exclusive extra. (The free, ad-supported Boston.com, in contrast, was aimed at a more casual audience.) The Globe switched gears just three years later when it introduced a leakier metered paywall, which The New York Times had by then shown could be a successful approach. Over time, the newspaper has continually tweaked and refined its approach, opening and closing exceptions to its paywall meter. In one recent change, for example, the Globe stopped counting Google AMP links toward the meter.
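To make the mechanics concrete: a meter like the one described above is, at its core, a counter keyed to a visitor that resets on a schedule and skips certain kinds of visits. Here is a minimal, hypothetical sketch (not the Globe's actual implementation; the names and thresholds are illustrative only):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical values: the article describes a two-article limit per 45 days,
# but the real meter logic lives server-side and is more sophisticated.
FREE_ARTICLE_LIMIT = 2
METER_WINDOW = timedelta(days=45)

@dataclass
class MeterState:
    """What a site would need to remember about one visitor."""
    window_start: datetime = field(default_factory=datetime.utcnow)
    articles_read: int = 0

def should_show_paywall(state: MeterState, is_amp_visit: bool,
                        now: Optional[datetime] = None) -> bool:
    """Return True if this page view should be blocked by the paywall."""
    now = now or datetime.utcnow()

    # Reset the meter once the 45-day window has elapsed.
    if now - state.window_start >= METER_WINDOW:
        state.window_start = now
        state.articles_read = 0

    # Visits arriving via Google AMP don't count toward the meter (one of
    # the exceptions described above), so they always pass through.
    if is_amp_visit:
        return False

    # Under the limit: count the view and let the reader in.
    if state.articles_read < FREE_ARTICLE_LIMIT:
        state.articles_read += 1
        return False

    return True  # over the limit: show the paywall
```

Real meters keep that state server-side against a persistent identifier, which is presumably why a counter stored only in a browser cookie could be reset by opening a private window, the loophole the Globe just closed.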

Tim Griggs, an independent media consultant and the former publisher of The Texas Tribune, said that the beauty of the metered paywall is that it gives news organizations plenty of flexibility to tweak how many free articles to offer readers, how often to reset the meter, what factors affect the meter, and how to message all of this to potential subscribers. If a hard paywall is a hammer, the metered paywall is a scalpel. “It’s an elegant solution when done with the right data rigor and the right user experience,” Griggs said in an email. “Many news sites aren’t so great at either of those things.” Indeed, former Globe executive and Nieman Fellow David Skok wrote for us last year about the importance of using reader data and predictive analytics to determine optimal times to raise or lower the paywall:

Imagine a reader browsing the web on their smartphone while on a train heading into work. They click on a link through Reddit and arrive on your news site where they are served a paywall. Using predictive analytics, we are quite certain that this Reddit mobile reader will not subscribe to your website. In fact, the reader may even post on Reddit just how much she despises your paywall. So, instead of wasting our time trying to get that reader to subscribe, what other kinds of value can you exchange with her that could be of mutual benefit? Perhaps it’s an email newsletter signup form that could begin an inbound marketing relationship? Perhaps it’s a video preroll ad with a high CPM to generate maximum ad revenue? Perhaps it’s a prompt for the reader to “like” you on Facebook so that they can help expand your reach?
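Skok's scenario reduces to a simple decision rule: estimate a visitor's propensity to subscribe, then choose what to serve based on that estimate. Here is a toy sketch of that logic; the scoring weights, thresholds, and offer names are all invented for illustration, and a production system would rely on a trained model and live behavioral data.

```python
# A toy illustration of the propensity-to-subscribe idea Skok describes.
# Weights, thresholds, and offer names are made up for illustration.

def subscribe_propensity(referrer: str, device: str, visits_last_30_days: int) -> float:
    """Return a crude 0-1 estimate of how likely this visitor is to subscribe."""
    score = min(visits_last_30_days / 30, 1.0) * 0.7      # loyalty dominates
    if referrer in ("direct", "newsletter"):
        score += 0.2
    elif referrer in ("reddit", "facebook", "twitter"):
        score -= 0.1                                       # drive-by social traffic
    if device == "desktop":
        score += 0.1
    return max(0.0, min(score, 1.0))

def choose_offer(propensity: float) -> str:
    """Pick what to show instead of (or alongside) the paywall."""
    if propensity >= 0.6:
        return "paywall_with_subscription_offer"
    if propensity >= 0.3:
        return "newsletter_signup_prompt"
    return "preroll_ad_or_social_follow_prompt"

# The Reddit mobile reader from Skok's example lands in the last bucket:
print(choose_offer(subscribe_propensity("reddit", "mobile", visits_last_30_days=1)))
```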

There’s a lot of evidence news organizations are getting more nuanced with their approaches — and that over time has resulted in paywalls with fewer holes, not more. The Wall Street Journal, which has put most stories behind a hard paywall since the 1990s, recently closed a feature that let visitors skirt restrictions by pasting a story’s headline in Google. It also recently killed off a secret (yet surprisingly well-known) free login popular among those in media circles. With the moves, it joined The Washington Post, which has been testing efforts to close loopholes that let visitors access its content for free.

Premium news organizations in 2017 are in a constant process of opening and closing paywall loopholes, depending on their goals. Griggs illustrated how the early parts of this process worked at The New York Times, which from 2011 to 2013 evaluated how to handle and respond to “avoidance behaviors” such as cookie deletion. One finding was that people who deleted cookies to avoid the Times’ paywall were also more likely to subscribe than people who did not delete cookies — likely because those cookie deleters were also some of the most frequent readers. When news organizations recognize this kind of behavior, they’re able to try new ways of reaching those readers, such as targeting them with specific soft messaging. News organizations can repeat this process for each of the various workarounds, all of which necessitate their own specific approaches.

Griggs agreed that, as publishers get smarter about understanding reader behavior, their paywalls tend to get less leaky. “When you’re talking about a relatively new line of business, there’s a lot to study and learn, there’s a lot to test, and there’s a marketplace shift happening at the same time,” he said. “So you can understand and act on things you previously didn’t know.” (The Times, for example, designed its paywall to be comparatively porous at first in an effort to collect as much data as possible.)

Doucette said that publishers are “now entering a second generation of digital models.” In the first generation, publishers were just trying to prove out the concept that consumers would pay up. With that accomplished, many are now focused on optimizing and building on that model. “We’ve learned a lot in five years about what the levers are, how we can pull them and what are the tradeoffs,” Doucette said. “We understand a lot of things better than we used to. The changes are the natural evolution of that understanding.”

Photo of a brick wall by Kingy used under a Creative Commons license.

How The Washington Post plans to use Talk, The Coral Project’s new commenting platform

It was late April and the staff of the Coral Project was “on tenterhooks” as The Washington Post was conducting its first public test of Talk, the project’s new commenting platform, Andrew Losowsky recalled recently.

The Washington Post — which launched the Coral Project along with The New York Times, Mozilla, and the Knight Foundation to improve communities around journalism — invited about 30 commenters who were active on its Capital Weather Gang blog to try out the platform and offer feedback. The callout attracted more than 130 comments, which included Post staffers probing commenters for more details and specifics, and additional reactions submitted through a form and email.

“We were expecting people to be quite negative,” said Losowsky, the project lead. “Initial change isn’t something that people tend to welcome. It looks a bit different, it has a few different features, and the responses we got were actually very good and very respectful and thoughtful. That’s, of course, what can happen when you openly make clear that you are listening to and engaging with your readership.”

The Post plans to make the Talk platform its primary on-site commenting system, and it’s now working to further integrate it into its site with plans “to launch as soon as is practical,” said Greg Barber, the Post’s director of digital news projects.

The Coral Project, meanwhile, is taking that feedback from the Post’s users and integrating some of the changes they suggested into the platform.

Talk will replace the Post’s current commenting platform, which it calls Reverb internally, Barber said. “It was born of necessity because our commenting vendor went out of business and we needed a solution, so we made one,” he said. “It was created during a time after the Coral Project had been announced but the Coral software wasn’t yet ready, so we needed an interim solution…it was never intended to be a permanent solution to our commenting needs. Coral was.”

The Coral Project launched in 2014 with a three-year, $3.89 million grant from the Knight Foundation that was set to expire this summer. The project has been able to secure additional funding from Mozilla and the Rita Allen Foundation to continue its work, Losowsky said, adding that they’re in conversations with additional funders, including Knight. (Disclosure: Knight also supports Nieman Lab.)

Along with Talk, Coral has also released Ask, a platform that enables newsrooms to ask specific questions of their audience, and it’s planning to release guides to journalism and engagement later this year.

While the Post will be the first news organization to use the Talk system, the Coral Project is in talks with a number of other outlets who didn’t want to be among the earliest adopters, Losowsky said.

Talk was designed with the idea in mind that a commenting platform should be more than just an empty box at the bottom of a story. As journalism business models become increasingly reliant on direct reader revenue — digital subscriptions are all the rage right now — Losowsky said commenting systems should work to proactively engage readers and build community around the news:

Almost everybody online knows how to post something on Facebook or Twitter. The barriers to entry to being able to publish your thoughts online is [low]. As a result of that, news organizations need to think about what is the kind of dialogue they want to host versus the kind of dialogue that will appear elsewhere. I think it’s perfectly fine to say that there are rules here that are different from rules in other spaces, and if you want to do some other form of interaction, you can go and do it over there — but this is the kind of thing we’re looking for here. These are the baseline assumptions that we have here. Here are the things we’re trying to do with it. This is what this space is for versus that space.

Being able to really define that, I think, is going to be really important. On the one hand, news organizations are not going to win in a battle with Facebook to create the best social network. But what news organizations can do is create a space which gives direct access to the journalists, that has the ability to bring the community into the process and be part of the process, manage interaction on the news organizations’ terms rather than Facebook’s terms about what is visible, what moderation tools you have, about the ability to focus and highlight on different conversations and so on. And news organizations can be transparent about how they’re using people’s data and really safeguard the privacy and transparency around the data of every interaction that they’re having with the community.

On every article, news orgs using Talk can set an opening prompt at the top of the box to try to guide the discussion and keep the conversation on-topic. Other features meant to facilitate productive discussions include the ability to add context to reports of inappropriate comments, banned words that trigger automatic rejection and suspect words that are flagged for moderation, and badges for staff members so they’re easily identifiable.
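To picture how that banned/suspect distinction plays out, here is a bare-bones sketch of the triage step. It is not Talk's actual code or configuration, just the behavior as described, with made-up word lists:

```python
# Hypothetical word lists; a real deployment would configure its own.
BANNED_WORDS = {"slur1", "slur2"}
SUSPECT_WORDS = {"idiot", "scam"}

def triage_comment(text: str) -> str:
    """Return 'rejected', 'held_for_review', or 'published' for a new comment."""
    words = {w.strip(".,!?").lower() for w in text.split()}

    if words & BANNED_WORDS:
        return "rejected"           # automatically rejected, never shown
    if words & SUSPECT_WORDS:
        return "held_for_review"    # flagged for a human moderator
    return "published"              # goes live immediately

assert triage_comment("This looks like a scam to me") == "held_for_review"
assert triage_comment("Great reporting, thank you!") == "published"
```

The real pipeline is richer than this, but the reject/hold/publish decision is the part those word lists drive.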

Outlets can also personalize the way users can respond to comments. The default, based on research from the Engaging News Project, is a “respect” button instead of a “like” button.

Moderators are able to ban users directly from the comment, and the moderation dashboard automatically highlights banned and suspect words. They can also pin their favorite comments at the top of the feed, to highlight the best comments and also set the tone for the conversation.

“There are a lot of things that we’re focusing on — first of all, from the commenter’s perspective, really thinking about how do we indicate what’s wanted in the space and use subtle cues to encourage and improve the behavior,” Losowsky said. “Then, from the moderator or journalist’s side, how do we create tools to make it fast and easy to be able to do the actions that they need to take — remove the bad content, ban users who are abusing the system, suspend those who in some way are perhaps redeemable or having a bad day and give them a time out, and then be able to not only approve but highlight really good comments so that you’re indicating the kind of behavior you want to encourage.”

Talk, like everything The Coral Project has produced, is open source, so outlets can build upon it as they like. The entire system is built around plugins, with the idea that publishers can tailor it to their needs. The Coral Project also offers hosting services, which could be useful for smaller newsrooms.

For its part, the Post has been conducting quality assurance testing and making sure the code in Talk doesn’t interfere with any of the Post’s other services. Talk was also tested on the Post’s development and staging servers to make sure everything worked properly before it was rolled out to users.

Barber said the test in late April was unusual because the Post conducted it before it was able to hook Talk up to its own authentication system, which is one of the ways that it’s customizing the platform to hook up to its infrastructure. The Post is also connecting Talk to the systems it uses to monitor its servers and hooking it into its CMS so comment streams can automatically be created for stories.

“The Coral Project group is continuing to build core features onto Talk and to take some of its features and turn those into plugins that are more accessible to organizations like the Post, so that we can tinker with them, as we’re often wont to do, to customize in different ways,” Barber said. “Other organizations that are interested in customizing in the same way that we are — or in ways that are different from what we want to do — will have that capability as well. The Coral team, of course, is critical to this. They’re building the main software, they’re building the main functionality, they’re giving us the spaces to customize the bits and pieces we want to work in specific ways. But then what The Washington Post team is doing is working on specific plugins that might not fit Coral’s overall strategy, but are things that we want to do here.”

The Coral Project team is also working on adding new features based on the feedback it received from Post commenters who tested out Talk last month. The current Post commenting system, for instance, allows readers to edit comments; Talk didn’t. They’re now working on adding an edit feature to Talk. Outlets will be able to set a time limit — maybe five or 10 seconds — for commenters to read over their post and edit it before it goes live.

“If you change your mind, if you regret it, if you see a typo you still have a window in which you can edit and change it,” Losowsky said. “That was something that was requested by a number of different Washington Post people.”
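The edit window Losowsky describes is essentially a short publish delay with a cutoff. A minimal sketch of the idea, with a hypothetical window length:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

EDIT_WINDOW = timedelta(seconds=10)   # hypothetical; outlets set their own limit

@dataclass
class PendingComment:
    body: str
    submitted_at: datetime

def try_edit(comment: PendingComment, new_body: str,
             now: Optional[datetime] = None) -> bool:
    """Allow an edit only while the comment is still inside its pre-publish window."""
    now = now or datetime.utcnow()
    if now - comment.submitted_at <= EDIT_WINDOW:
        comment.body = new_body
        return True
    return False   # window closed; the comment has gone live as submitted
```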

Barber and Teddy Amenabar, the Post’s comments editor, were active in the comments for the test last month, thanking users for feedback and asking questions such as “Anything missing that you’d like to see in a new system?” They collected that feedback and created a spreadsheet with the information that they then shared with the Coral Project. The Coral Project plans to take that feedback and continue to build out Talk while also looking for ways to help news organizations develop their engagement strategies and define what kind of conversations they want to host on their own platforms.

“What do you want from us in this space from the perspective of the audience? And from the perspective of the journalist, what do we want in this space? What do we want to happen here? By outlining and making clear what your expectations are for the space, you’re already creating a greater likelihood of success,” Losowsky said.

Photo by Philipp used under a Creative Commons license.

Scribd says it has over 500,000 subscribers paying $8.99/month for ebooks, audiobooks, and now news

Scribd’s $8.99/month subscription service started out with only ebooks. Over time, it’s expanded to audiobooks, sheet music, documents, magazines — and, as of Tuesday, newspapers. “Select articles” from The New York Times, The Wall Street Journal, and The Guardian, as well as some archival content from the Financial Times, will now be available to Scribd subscribers.

And Scribd says there are quite a lot of subscribers: The service now has more than half a million people paying $8.99 a month, and the company is profitable. I was so surprised by the subscriber number that I asked CEO Trip Adler to repeat himself; it’s true, he said: “We have a $50 million revenue run rate.” The San Francisco–based company now has more than 110 employees.

Newspaper content was a “natural addition” for Scribd, Adler said. The most popular forms of the content on the service are, in order, ebooks, audiobooks, and documents. Magazines were added last fall. Scribd used to also include comic books and graphic novels in its service, but stopped including them because there wasn’t enough reader interest. It also switched from a completely unlimited content model to one that offers access to three ebooks and one audiobook per month. (Documents, magazines, and newspapers are unlimited.)

Judging by Scribd’s stated membership numbers, the switch in business model appears to have worked. The numbers seem impressive and are not something that I would have predicted a couple years ago when the ebook subscription site Oyster shut down — especially considering that Amazon keeps adding more reading offerings to Prime.

Scribd won’t be focusing on breaking news from the papers it partners with. Instead, it’s looking for longer, more evergreen content that “fits in with a book kind of experience,” Adler said. “We’re going for the longer-form content that might actually take a few minutes to read, has a longer shelf life, and will be interesting beyond the first day it comes out.” The newspaper content — along with Scribd’s other content — is organized by interest.

Each of the newspapers is making a fixed number of articles available to Scribd; Scribd editors choose which ones to include on the service. Some of the publishers are being paid a flat licensing fee; others are paid by the read.

“People have been talking for a long time about how to monetize journalism and we think we’ve come up with a really interesting answer,” Adler said. The newspapers included for now are the big names that aren’t having as much trouble monetizing as smaller papers, but Scribd may include more papers in the future. “We think, if we can offer all these different newspapers together for one subscription price, we can return more money to journalists that way.”

How NPR considers what new platforms — from smartwatches to fridges — will get its programming

Here is a (far from complete) list of places where you can listen to NPR programming: Your old school radio. Your car radio. Your smartphone. Your smartwatch. Your Amazon Echo. Your Google Home. Your refrigerator?

If you own a Samsung Family Hub fridge (which features a giant screen on one of its doors), you can get a bulletin briefing of your calendar for the day, as well as an hourly news update, via NPR. (That’s in the United States. In Europe, the news partner is Upday; in Korea, it’s Kakao.)

“Folks in the building have the same questions. I heard somebody talking about the fridge the other day — ‘Is that true, we’re on a fridge?’ I said, yeah,” Ha-Hoa Hamano, NPR’s senior product manager, told me, amused at my excitement. (Full disclosure: I have an inexplicable obsession with this fridge thing, which we first wrote about here five long years ago. I fixated on it when writing about Upday’s expansion across Europe, as well.) “But we take into consideration a lot when approaching these — the level of effort it takes on our part, whether the audience there makes sense for us, whether our audience is there already, whether we’re going to gain new audience from it. Generally, we try to get to ‘yes’ faster than we try to get to ‘no.’”

Samsung is already a technology partner for NPR, and it approached NPR with a list of Samsung devices it wanted to see offer NPR programming — “the pitch for the fridge was that the kitchen is the new hub for family and entertainment interaction early in the morning.” NPR One is available on the newer versions of the Samsung Gear smartwatch; the fridge integration was an easy extension.

“As opportunities like these come up, we can talk monthly, or weekly at times. For a lot of upcoming things, sometimes it works out for us to collaborate super heavily on a project, but sometimes it’s a little more far-reaching,” Hamano said. “We have a super lean team, so sometimes it has to be, ‘yes, but not right now,’ or ‘yes, but this may take a lot more time.’” The core team that works with new platform projects is indeed lean: Hamano, the same legal team that looks over contracts with partners, a designer — “the same few people working on a dozen of these projects at once.”

The NPR One API facilitates some of these partnerships. Through its API service, advertising (er, sponsorship) is built in for devices with or without screens.

Its developer center is open, and developers working on projects of any scale can first dip their toes into what having NPR on a certain device might look like. Then there’s a formal queue (mostly of inbound requests) that ranges from kickstarted startups designing a quirky little device that allows for hands-free control of a phone to larger projects, like fridges and Lexus cars. The kickstarted device, for instance, “went out and ran a beta with users on personal keys for as long as they could, and when they were ready, came to us for full certification,” Hamano said.

Some projects, like an NPR One radio made using a Raspberry Pi, don’t require any commercial licensing, and any interested tinkerer is free to create their own little radio for personal use, a project that doesn’t require tech review and legal certification on NPR’s part.
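For that personal-use case, the whole project can be as small as a script that launches a stream player at boot. Here is a minimal sketch, assuming mpg123 is installed on the Pi and that you swap in the stream URL for your local member station (the URL below is a placeholder, not a real endpoint):

```python
import subprocess

# Placeholder URL: replace with the MP3 stream your local NPR member
# station publishes for personal listening.
STREAM_URL = "https://example.org/your-member-station-stream.mp3"

def play_stream(url: str) -> None:
    """Play an MP3 stream with mpg123, relaunching it if the connection drops."""
    while True:
        # mpg123 blocks until the stream ends or the connection fails,
        # so the loop simply restarts playback.
        subprocess.run(["mpg123", "--quiet", url])

if __name__ == "__main__":
    play_stream(STREAM_URL)
```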

On its end, NPR has to be cautious about maintaining technical standards for partner platforms, since “we definitely don’t want to be out there on a platform where users are super frustrated and think issues are coming from NPR when it’s a problem with the device,” Hamano said. Sometimes tech partners will pass along bug reports from users. (So far, there hasn’t been a technical failing serious enough to cause NPR to pull out of a partnership.) With projects like NPR on Amazon’s Alexa, keeping up with the evolving features of the platform itself is a challenge on its own.

Diehard NPR One or NPR app users, for instance, want specific features — being able to binge-listen to podcasts in a preferred order, say — or want the exact same experience in their NPR app as on their Amazon Echo. A few listening habits have emerged there, according to Hamano, such as heavy usage on weekday mornings (halved on weekday evenings) and most usage shifting an hour or more later on Saturday and Sunday mornings.

“With all these platforms, it is a challenge for us to figure out: Is it the platform? Or is it the product?” Hamano said. “We always try to keep our baseline metrics about audience size and listening hours, and stay lean in terms of reacting to what audiences tell us needs to be addressed.”

A Santa Claus decor item and a vintage radio on top of the refrigerator in the Lustron Home at the 1950s exhibit at the Ohio History Center Museum in Columbus, Ohio. Photograph by Sam Howzit, used under a Creative Commons license.

Mixed reality, computer vision, and brain–machine interfaces: Here’s the future The New York Times’ reborn R&D lab sees

When it comes to emerging technologies, there’s a lot to keep newsrooms busy. Virtual reality has promise, and so do chatbots. Ditto, too, for connected cars and connected homes. In fact, the challenge for most newsrooms isn’t figuring out potential new platforms for experimentation but rather determining which new technologies are worth prioritizing most.

At The New York Times, anticipating and preparing for the future is a job that falls to Story[X], the newsroom-based research and development group it launched last May. A “rebirth” of the R&D Lab the Times launched in 2006, Story[X] was created to look beyond current product cycles to how the Times can get ahead of developments in new technology. (The previous R&D Lab’s Alexis Lloyd and Matt Boggie landed just fine, taking high positions at Axios.)

Heading up the unit, which will total six people when fully staffed, is Marc Lavallee, the Times’ former head of interactive news. Lavallee said that while Story[X] will always ask how practical new technologies are, the group is more likely to err on the side of the speculative rather than the safe — a mental model that isn’t always easy to adopt in newsrooms full of people trained to ask pointed questions like: “Is this even real?”

While that skeptical lens is helpful, “we want to have a sense that, even if we feel like something isn’t ready today, if we feel like there’s some sense of inevitability, we want to be thinking about it and experimenting,” Lavallee said. A lot of these developments will be outside the Times’ control, but if Story[X] does its job properly, there will be “fewer fire drills induced by some keynote from some tech company that changes the game and requires us to play catchup.”

In a wide-ranging conversation, Lavallee and I spoke about how the Times evaluates new technologies, which areas he believes are most ripe for experimentation, and what technology news organizations aren’t paying enough attention to. Here’s an edited and condensed transcript of our conversation.

Lavallee: With VR, we’re doing a lot of experimenting around telling individual stories that bring readers to another place. There’s a rich vein of exploration to be done there. I don’t think we’ve even fully explored that. That’s a good place to be working while we wait to see what the adoption of the current and next generation of these devices is.

The reason why I’m fixated on whether there are non-linear stories for us in VR is that that, to me, feels like the thing that’s actually going to drive wider adoption. Social experiences and gaming to me feel like the way that we’ll see this become more of a mass experience, I think. And I’m not sure that there’s necessarily a thing for us to do there. We have to keep doing what we’re doing and wait for other parts of the ecosystem to flesh out.

I do think that there is a tremendous potential for us in the AR space. That’s where we can do things that are more utility-driven, which is where we’re seeing today’s pickup through, for example, being able to place a virtual IKEA couch in your living room to see if it would fit.

Bilton: We haven’t talked too much about advertising, which is also a part of the mandate at Story[x]. What are the potential innovations there?

Lavallee: There are a couple of ways we’re working through it. I’m of the opinion that the full scope of the potential for The New York Times in the 21st century is incredibly broad, because we have this brand flexibility. It’s something that is basically with you all day every day and helping guide every decision you make and being that trusted ally in your life.

We’re not going to do that alone. It does require a different kind of partnership with a bunch of different kinds of companies. The tech space is the easiest to find those kinds of opportunities. I would say the partnership with Samsung is the first of a genre of partnership that we’ll see much more of over the next couple of years, where neither of us would be able to do something like that 360 video of that scale alone. But together, we can each play our part in speeding the evolution of the technology and content in parallel, as opposed to waiting for one to happen and then doing the other.

Bilton: We’ve talked about a lot of different potential areas of innovation. Is there one that you don’t hear as much chatter about that you think has a lot of promise?

Lavallee: There’s a cluster of ideas that combine what’s happening in the quantified self movement and what’s happening in your brain at any point in time. That leads to the kind of brain-machine interface stuff that Facebook was demoing last month. They’re saying that within two years they’ll have a skull cap that will let you think at 100 words per minute.

I see that as technology that will let us understand how much attention you’re paying while reading or listening to something, what you retained, what you perk up at, and how the content experience can adapt and understand what kind of learner you are. I think there is tremendous potential to do that so the content is more tailored to your level of interest. That’s something that I’m not aware of media organizations diving into yet, but I think it’s a huge frontier for us. Over the next few years we’re going to be thinking a lot more about what’s going on inside readers’ heads.

Photo of Google Cardboard VR by Othree used under a Creative Commons license.

The Christian Science Monitor’s new paid, daily product is aiming for 10,000 subscribers in a year

“If the Monitor were to vanish, what would the world lose, really?”

That’s the first line of a column that will appear in next week’s print issue of The Christian Science Monitor Weekly, the 109-year-old print magazine. The question is what has driven the team at the Monitor to think over the last 18 months about how to replace its website with a different kind of core digital product that would (a) make clearer the publication’s focus on “a completely different way of seeing the news,” (b) get readers to pay for that news, and (c) abandon outmoded ways of arranging content.

“We had felt like we were trying to do three different things at the same time,” said Mark Sappenfield, the Christian Science Monitor’s editor, who just replaced Marshall Ingwerson in March. “We were trying to keep our core product going, build our verticals, and play the pageview game.”

“We’d been marching through these business lines and not really feeling that successful in any one of them,” said David Grant, associate publisher. “The long-term sustainability wasn’t there. What we landed on was trying to build a much deeper relationship with our readers. That’s landed us on this journey of going all in on digital subscription. The Daily is really in the service of this mission of making people more thoughtful, less neurotic, more calm, and seeing the world through the lens of progress.”

On Monday, the Monitor launched Monitor Daily, a daily news digest of five pieces of content (stories, videos, graphics), plus one editorial and “one clearly labeled religious article offering spiritual insight often related to the news,” that will be emailed to subscribers each weekday at 6 p.m. Boston time. Each article can either be read in “30 Sec. Read” form — a summary that still has a clear beginning, middle, and end — or expanded to the full version; the complete edition is estimated to take about 50 minutes to read. The Daily is also on the Monitor’s website and available as audio (read by Monitor staff) to stream or download.

“We’re bringing the whole organization into focus around this task,” said Sappenfield.

The Monitor Daily, which has no ads, is free for a month. After that, it will cost $11 per month or $110 a year, or $9 a month/$90 a year for subscribers to the print weekly. That price, the team believes, is competitive. While the Monitor’s website will still run some free content — namely evergreen Monitor stories that are one or two months old — “our mission is really to get you to sample this thing, and eventually get you to become one of our subscribers,” said Grant. The homepage has been completely redesigned around the daily. In general, Sappenfield said, when it comes to the Daily coverage, users will get to see the package for free one time before they’re required to subscribe.

In months of “constant sprints,” the Monitor’s team tested beta editions of editorial products. “The key question is always, ‘How disappointed would you be if this didn’t exist, on a scale of 1 to 10?’” said Dave Scott, chief product manager. “If we weren’t getting 8 and above, we’d go back in and do some more work. We got to the point where we were getting 8 and above and decided to go forward.”

In the last two weeks, of the 4,000-plus unique readers who looked at a version of the package, 74 percent read past its fourth item and 44 percent made it to the bottom of the package. Twenty-six percent of beta readers expanded an article while they were reading.

“The fact that we’re not sending readers someplace else to read the content, and letting them read only what matters to them, is paying dividends in letting them read more,” Grant said.

The team’s goal is to reach 10,000 paying Daily subscribers by a year from now, which seems extremely ambitious. Slate Plus, which is $5 a month or $49 a year, was able to draw 9,000 paying subscribers in its first year, but Slate is bigger than the Monitor. “It’s not an easy goal,” Grant acknowledged. “But when we show up to work every day, that’s what we folks on the business team are thinking about: How do we get to 10,000 subscribers?”

Photo of The Christian Science Monitor building in Boston by Sarah Nichols used under a Creative Commons license.

This site publishes high-touch, time-intensive data visualizations (and has a business that sustains it)

Over 7,000 artists played in the New York City area in 2013. Only 21 of those later made it, really made it, headlining at a venue with a capacity of more than 3,000 people — among them, bigger names like Chance the Rapper, X Ambassadors, Sam Smith, and Sylvan Esso.

I learned this sort of random but fascinating tidbit from a data visualization titled “The Unlikely Odds of Making it Big,” from the site The Pudding.

The Pudding is home to high-touch, painstakingly crafted data visualizations — what the site calls “visual essays” — that are distinctive for the obsessive detail they bring to points of cultural curiosity. Most pieces stand wholly apart from the U.S. news cycle; no anxiety-inducing interactives around budgets, taxes, or health care. Want to see everywhere jazz legend Miles Davis is mentioned across Wikipedia, and how he’s connected to other people, recordings, and places? Here you go.

(Other things I’ve discovered browsing The Pudding’s interactives: that the town where I live is probably not the microbrew capital of the U.S., that there’s pretty strong evidence that NBA refs favor the home team, that the song “No Diggity” by Blackstreet is irrefutably timeless, at least based on Spotify play counts, compared to its 1990s peers.)

The Pudding is the newly partitioned-off editorial arm of Polygraph (polygraph.cool!), a three-person data visualization company started two years ago by Matt Daniels, a consultant with a digital marketing background. Daniels and his partners Russell Goldenberg and Ilia Blinderman publish sumptuous visualizations that scratch personal itches. The Pudding also works closely with freelancers on pretty much whatever questions they’re interested in exploring visually, as long as the pieces are based on data. Freelancers are paid a flat rate of $5,000 for each piece.

“We’re all over the map. But basically, every individual picks their idea, we vet it ourselves and make sure the data’s there, that it’s interesting, and we just go off and do it,” Goldenberg told me. (The ideas backlog for The Pudding is listed out in this public Google Doc.) “Our goal is for The Pudding to be a weekly journal. We specifically seek out stories that aren’t news related, because we don’t want to compete in that space. The Washington Post, The New York Times, FiveThirtyEight, lots of places are doing interactive graphics well, doing multiple data journalism pieces per day. That doesn’t jive with what we want to be.”

Goldenberg previously worked at The Boston Globe as an interactive news developer, and Blinderman is a science and culture writer who studied data journalism at Columbia. Despite those journalistic credentials, The Pudding (and Polygraph) isn’t aiming to be a journalistic enterprise. In the course of developing a visualization, the team might call up a few people to run questions by them, or have to create its own data source (this freelancer’s exploration of the Hamilton musical libretto, for instance), but most of the data it builds interactives on is already available (no FOIAing needed).

Work gets promoted on The Pudding site, and through the Polygraph and Pudding newsletters, which will eventually merge into one. Polygraph’s newsletter sharing the latest visualizations has about 10,000 subscribers; The Pudding’s has about 1,000 after launching this year. Otherwise, promotion is largely word of mouth — and some pieces have been able to spread widely that way. They’re definitely open to collaborating with “more visible partners,” Goldenberg told me, though “we’re not being aggressive about our outreach.”

(A similar project popped up last year called The Thrust, which wanted to serve as a home for data visualization projects that didn’t fit with traditional news organizations or into their news cycles. The creators left for full-time jobs at ProPublica and The New York Times and the site has stopped updating.)

The moneymaking side of Polygraph functions like a digital agency, with Daniels, Goldenberg, and Blinderman pushing out projects for large clients like YouTube, Google News Lab, and Kickstarter. Goldenberg wouldn’t disclose how much they charge for these sponsored pieces, but revenue generated from a handful of client projects funds the entire editorial side, including paying for freelancers’ pieces and the three current full-time staffers’ salaries.

“We try to take on client work to just support our staff and basically to sustain The Pudding, with about three to six freelancers each quarter — what we’re doing is maybe kind of backwards,” Goldenberg said. “The thing about our editorial work is that it also essentially serves as marketing for us. Generally, when we publish a new project on The Pudding, we get a few business inquiries. It’s a nice symbiotic relationship.”

Polygraph is also hiring for two more full-time positions — a “maker” and an editor — both at competitive salaries, which suggests that its client-side business is going quite well. Its ambitions looking forward, though, are straightforward: publish more interesting data-driven visualizations.

“We want to push forward the craft of visual storytelling, and these are not things you do on a daily basis,” Goldenberg said. “We still want to take our time and spend a couple of weeks, maybe a month or more, on a project. Unless we have dozens of people working with us, we wouldn’t really be able to publish more than once a week or so. We’re mostly just trying to establish that rhythm, and keep pushing out good pieces.”
