European news sites are among the worst offenders when it comes to third-party cookies and content

The General Data Protection Regulation, which takes effect May 25, is pushing publishers to take a hard look at just how dependent their outlets have become on cookies, the third-party trackers they load on their own sites in order to collect data from their visitors.

News sites actually load more third-party content and set more third-party cookies than other top websites, according to a new study of websites across seven European countries from the Reuters Institute.

Here’s what you need to know to build successful paid newsletters, popup newsletters, morning digests, and community newsletters

Thinking about starting your own email newsletter? A panel at ISOJ 2018 contains a wealth of advice for launching all types of editorial newsletters, from paywalled offerings to limited-run recaps tied to popular television shows to indispensable morning digests to community-creating newsletters.

Google announces a $300M ‘Google News Initiative’ (though this isn’t about giving out grants directly to newsrooms, like it does in Europe)

Google said Tuesday it’s committing $300 million over three years towards various products and initiatives intended to help news publishers and sweeten Google’s relationships with them, as part of an umbrella initiative it’s calling the Google News Initiative.


The Washington Post on Reddit surprises users with its non-promotional, ultra helpful presence

Democracy dies in dankness.

That’s not a typo in the Washington Post’s Reddit profile: The Washington Post account is an avid poster of some pretty good memes and gifs. It’s got jokes. It’s also a sharer of everything from polling stories to breaking national security stories to lifestyle columns to geeky features to fact-checks, and a facilitator of, and participant in, AMAs. The official publisher account has been live since April of this year, shortly after the platform began allowing public profiles, and appears to have broken through Reddit’s tough anti-brand, anti-paywall shell.


“Exceedingly generous”: Google will split revenue with publishers who use its new subscription tools

Google, an advertising giant, has been making nice with news publishers by developing a series of tools they can use to more precisely attract and target paid subscribers. (It also ended the first-click-free policy this month, allowing subscription-based publishers to choose how many articles to show to readers for free without search-ranking consequence.)

Google’s niceness comes at a price for publishers who might want to use the planned subscription tools: Google will take a share of the revenue, though the details are still being ironed out with publishers.

“It will obviously come down to what we think that business relationship should be, but bottom line, I think [revenue sharing] will be exceedingly generous [to news publishers],” Google’s head of news Richard Gingras told the Financial Times on Sunday. “In our ad environment, the rev shares are 70 per cent-plus. The rev shares [for publishers] will be significantly more generous than that.” (Google’s AdSense offers around a 70-30 split for publishers who use it to place ads on their sites.)

Gingras made sure to distinguish Google’s tack from Facebook’s “walled garden” approach, telling the FT that “unlike other participants in the environment, we’re not trying to own the publisher. If there are cases where we do cause the subscription to happen, we don’t want to own the customer. None of this changes the marketplace economics, people will pay for what they value.”

That “other participant in the environment” on Friday formally announced its test of news subscription models within its Instant Articles format, through which it won’t take any cut of the revenue from subscription signups (the subscription transaction and payment processing will take place entirely on the publishers’ site). Facebook’s subscription tests are Android-only, as it’s been wrestling with Apple over the past few months over Apple’s default 30 percent cut of “in-app sales,” Recode reported.

The Wall Street Journal tested live push notifications, with some help from the Guardian’s Mobile Lab

When the Bureau of Labor Statistics released its jobs report at the beginning of the month, news organizations unleashed their push notifications.

On Friday morning, the Wall Street Journal tested live mobile push alerts for its jobs coverage, working closely with the Guardian Mobile Innovation Lab, which has spent the past year tirelessly testing a range of ideas for distributing news that make the most of how people read on their phones.

Readers who arrived at the Journal’s mobile site or its Android or iOS apps were able to read its live coverage of the jobs numbers for July — but were also alerted with preview push notifications on updates as they read the existing analysis on the page (readers could dismiss and keep reading, or jump to the update from the push alert).

Journal developers built the infrastructure for the live notifications, and its markets team reported on the event and sent the pushes. The Mobile Innovation Lab provided guidance — based on learnings from its own past experiments and user testing — throughout the process, from evaluating design prototypes for the alerts to crafting an effective survey for users who encountered the Journal’s experiment.

The Journal has its own internal live coverage tool, built ahead of the Iowa Caucuses coverage in time for last year’s elections, but hadn’t dealt with live push notifications, according to Jennifer Hicks, deputy managing editor of digital at the Journal.

“We had a highlights feature where we could pin key posts, but we couldn’t notify readers within the live reading experience,” she said.

The Guardian’s Mobile Innovation Lab had been hosting get-togethers and roundtables on news notifications with various news organizations after the November 2016 election, and the Journal expressed interest in trying out an experiment with the Lab. Work on this project started in June.

“There were lots of experiments the Guardian group was doing, so we talked about what we could bite off and pull off in a short amount of time,” Hicks said. “For us, it was also an opportunity to change our culture and talk directly to readers about testing a new feature.” (The Journal and Mobile Lab teams had a joint Slack channel going the morning of the live notifications project for potential troubleshooting in implementation.)

The Journal plans to use the live notifications feature in future live coverage (with tweaks as necessary), according to Journal mobile editor Phil Izzo: “From jobs reports to the Olympics to terrorist attacks, we use live coverage a lot, and that’s one of the reasons we really wanted to build this out, since we knew there were so many use cases for it,” he said.

Both the Guardian and Journal teams emphasized the project’s experimental nature; it’s the first partnership of this kind for both organizations. The Lab is welcoming similar partnerships with other interested outlets.

“In the Lab we’re working for the industry and not just for ourselves — if we were to experiment in silence for two years and not share tips and tricks that we’ve experimented with, that wouldn’t be fulfilling the mission of the Lab,” Sarah Schmalbach, the Lab’s senior product manager, said. “We have been flexing our notifications muscle, then when we felt more confident in what we’d learned, we began to host events to ask other organizations what they were doing, where we’d then make a point to say, please come talk to us if there’s anything we can do to help, any data we can provide. Maybe we can launch something together.”

“We really relied on Sarah and [Mobile Innovation Lab editor] Sasha Koren to provide expertise in terms of, how do you talk to your audience directly, how do you conduct a real-time experiment, how do you offer a survey to audiences that gets you useful and actionable feedback,” Hicks said. “We had a lot of guidance on how to set up an experiment, which is not something we’ve done regularly at the Journal.”

Data points the Journal will evaluate for this jobs report experiment center around engagement, and include time spent on the live coverage, whether readers dismissed the notifications or clicked into the post, and bounce rate during the live event.

“Another thing we’re thinking about is, does this tell us anything about experimentation at the Journal?” Izzo said. “Did we make the job reports live blog better, because we put more attention to it, and should we push to do more things like this in the newsroom in general?”

People have trouble A) detecting faked images and B) identifying where they’ve been changed

I shamefully fell for a photo plastered all over my timeline last week of Vladimir Putin sitting in a chair at the G-20 as other world leaders, including Donald Trump, leaned in for what appeared to be an intense, whispered discussion. The photo was, as Gizmodo put it gently, totally fake.

Fake headlines of the Pope-endorsing-Trump variety are just one part of the ecosystem of fakery online. There’s faked audio to worry about. Faked video. And of course, faked images.

It turns out people aren’t very good at identifying manipulated images, according to new research published Tuesday in the journal Cognitive Research by researchers Sophie J. Nightingale, Kimberley A. Wade, and Derrick G. Watson from the University of Warwick.

Participants were slightly better than random at picking out untouched versus manipulated photos, classifying 62 percent of the images in the study correctly. Participants also weren’t great at picking out where exactly a photo had been changed, even when they did accurately identify a photo as manipulated: they were able to identify an average of 45 percent of manipulations presented.

The study first tested participants on whether or not they could identify a manipulated image by showing them images of people in real-world scenes taken from a Google search, and manipulated versions of those images. In a second experiment, the researchers tested whether participants could pinpoint the region of the photo that had been changed.

People don’t necessarily appear to be better at pinpointing “implausible” manipulations (such as a shadow in the wrong place) than “plausible” ones (such as removal or addition of something into the photo), the researchers found:

Recall that we looked at two categories of manipulations — implausible and plausible — and we predicted that people would perform better on implausible manipulations because these scenes provide additional evidence that people can use to determine if a photo has been manipulated. Yet the story was not so simple. In Experiment 1, subjects correctly detected more of the implausible photo manipulations than the plausible photo manipulations, but in Experiment 2, the opposite was true. Further, even when subjects correctly identified the implausible photo manipulations, they did not necessarily go on to accurately locate the manipulation. It is clear that people find it difficult to detect and locate manipulations in real-world photos, regardless of whether those manipulations lead to physically plausible or implausible scenes.

They concluded:

Future research might also investigate potential ways to improve people’s ability to spot manipulated photos. However, our findings suggest that this is not going to be a straightforward task. We did not find any strong evidence to suggest there are individual factors that improve people’s ability to detect or locate manipulations. That said, our findings do highlight various possibilities that warrant further consideration, such as training people to make better use of the physical laws of the world, varying how long people have to judge the veracity of a photo, and encouraging a more careful and considered approach to detecting manipulations. What our findings have shown is that a more careful search of a scene, at the very least, may encourage people to be skeptical about the veracity of photos. Of course, increased skepticism is not perfect because it comes with an associated cost: a loss of faith in authentic photos. Yet, until we know more about how to improve people’s ability to distinguish between real and fake photos, a skeptical approach might be wise, especially in contexts such as law, scientific publication, and photojournalism where even a small manipulation can have ethically significant consequences.

In its own writeup of the study, the Washington Post made a fun quiz based on the images used in Nightingale’s experiment, which, if you’re curious about your own abilities, you can take here.

Trying to write a killer headline for social? Here are some of the most (and least) effective phrases

Jostling for readers for your listicle on Facebook? Aim for the number “10” in your headline.

Trying to promote a story on Twitter? Emotion-based appeals popular on Facebook don’t translate to Twitter.

Findings from a BuzzSumo trigram analysis of 100 million headlines published between March and May of this year confirm a lot about the clickbait-y, competitive publishing environment of social media.

The analysis reveals nothing particularly surprising, for instance, about the headline phrases that generated the most likes, shares, and comments: “Will make you” was by far the most successful phrase, and emotion-based appeals like “melt your heart” and “make you cry” also did well. (Also, we reported that 10 was the most common number for a BuzzFeed list way back in 2013.)
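The mechanics of a trigram analysis like BuzzSumo’s are straightforward: break each headline into runs of three consecutive words, then rank those phrases by the engagement of the headlines containing them. A minimal sketch (the headlines and engagement numbers below are made up for illustration, not data from the study):

```python
from collections import defaultdict

def trigrams(headline):
    """Yield every run of three consecutive words in a headline."""
    words = headline.lower().split()
    for i in range(len(words) - 2):
        yield " ".join(words[i:i + 3])

def top_trigrams(posts, n=3):
    """Rank trigrams by total engagements of the headlines containing them.

    `posts` is a list of (headline, engagements) pairs; in a study like
    BuzzSumo's, 'engagements' would be likes, shares, and comments combined.
    """
    totals = defaultdict(int)
    for headline, engagements in posts:
        for phrase in set(trigrams(headline)):  # count each phrase once per headline
            totals[phrase] += engagements
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]

# Toy data, not from the study:
posts = [
    ("These 10 tips will make you a better cook", 5000),
    ("This story will make you cry", 3000),
    ("What we know about the new budget", 800),
]
print(top_trigrams(posts))  # "will make you" tops the toy list
```

At the scale of 100 million headlines the counting would be distributed rather than a single dictionary, but the ranking logic is the same.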

Publishers beware though: Facebook says its algorithm is cracking down again on clickbait in its News Feed.

Phrases that performed poorly on Facebook? “Control of your,” “work for you,” or “on a budget” — which apparently works well on Pinterest. Phrases that performed well on Facebook don’t work as well in Twitter headlines, where phrases that emphasize immediacy and analysis do best — “what we know,” “things to know,” “this is what.”

On Facebook, it’s also important to hit just the right headline length. Super short or super long headlines don’t appear to be effective. Posts between 12 and 18 words — and between 80 and 95 characters — get the most shares on Facebook.

This particular study draws its insights from some of the most shared stories on Facebook and Twitter, which include articles from major publishers like HuffPost and BuzzFeed; BuzzSumo will release a separate headline analysis for business-to-business stories later in the year. You can read the entire post here.

The New York Times, with a little help from automation, is aiming to open up most articles to comments

The New York Times’ strategy for taming reader comments has for many years been laborious hand curation. Its community desk of moderators examines around 11,000 individual comments each day, across the 10 percent of total published articles that are open to commenting.

For the past few months, the Times has been testing a new tool from Jigsaw — Google parent Alphabet’s tech incubator — that can automate a chunk of the arduous moderation process. On Tuesday, the Times will begin to expand the number of articles open for commenting, opening about a quarter of stories on Tuesday and shooting for 80 percent by the end of this year. (Another partner, Instrument, built the CMS for moderation.)

“The bottom line on this is that the strategy on our end of moderating just about every comment by hand, and then using that process to show readers what kinds of content we’re looking for, has run its course,” Bassey Etim, Times community editor, told me. “From our end, we’ve seen that it’s working to scale comments — to the point where you can have a good large comments section that you’re also moderating very quickly, things that are widely regarded as impossible. But we’ve got a lot left to go.”

These efforts to improve its commenting functions were highlighted in the Times announcement earlier this month about the creation of a reader center, led by Times editor Hanna Ingber, to deal specifically with reader concerns and insights. (In the same announcement, it let go Liz Spayd and eliminated its public editor position.)

Nudging readers towards comments that the Times “is looking for” is no easy task. Its own guidelines, laid out in an internal document and outlining various rules around comments and how to take action on them, have evolved over time. (I took the Times’ moderation quiz — getting only one “correct” — and at my pace, it would’ve taken more than 24 hours to finish tagging 11,000 comments.)

Jigsaw’s tool, called Perspective, has been fed a corpus of Times comments already tagged by human editors. Human editors then trained the algorithm over the testing phase, flagging moderation mistakes it made. In the new system, a moderator can evaluate comments based on their likelihood of rejection and check that the algorithm has properly labeled comments that fall into a grayer zone (comments with a 17 to 20 percent likelihood of rejection, for instance). The community desk team can then set a rule to allow all comments that fall between, say, 0 and 20 percent to go through.
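The routing rule described here — auto-approve low-risk comments, spot-check a gray zone, queue the rest — can be sketched in a few lines. The thresholds and function names below are illustrative, not the Times’ actual system:

```python
def route_comment(rejection_likelihood, approve_below=0.20, gray_zone=(0.17, 0.20)):
    """Route a comment based on a model's predicted likelihood of rejection.

    The model (Perspective, in the Times' case) returns a score between 0 and 1.
    Comments scoring inside the gray zone get a human spot-check to verify the
    model's labels; comments safely below the approval threshold pass through
    automatically; everything else goes to the moderation queue.
    """
    low, high = gray_zone
    if low <= rejection_likelihood <= high:
        return "human review"        # gray zone: verify the model's labeling
    if rejection_likelihood < approve_below:
        return "auto-approve"        # confidently benign
    return "moderation queue"        # likely rejection: a human decides

print(route_comment(0.05))   # auto-approve
print(route_comment(0.18))   # human review
print(route_comment(0.50))   # moderation queue
```

Per the article, those thresholds could be tuned per section — stricter (or fully human) for sensitive desks, looser for sections with historically low rejection rates.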

“We’re looking at an extract of all the mistakes it’s made, evaluate what the impact of each of those moderating mistakes might be on the community and on the perceptions of our product. Then based on that, we can choose different forms of moderation for each individual section at the Times,” Etim said. Some sections could remain entirely human-moderated; some sections that tend to have a low rate of rejection for comments could be automated.

Etim’s team will be working closely with Ingber’s Reader Center, “helping out in terms of staffing projects, with advice, and all kinds of things,” though the relationship and roles are not currently codified.

“It used to be when something bubbled up in the comments, maybe we’d hear repeated comments or concerns about coverage. You’d send that off to a desk editor, and they would say, ‘That’s a good point; let’s deal with this.’ But the reporter is out reporting something else, then time expires, and it passes,” Etim said. “Now it’s at the point where when things bubble up, [Ingber] can help us take care of it in the highest levels in the newsroom.”

I asked Etim why the Times hadn’t adopted any of the Coral Project’s new tools around comment moderation, given that Coral was announced years ago as a large collaborative effort between The Washington Post, the Times, and Mozilla. It’s mostly a matter of immediate priorities, according to Etim, and he can see the Times coming back to the Coral Project’s tools down the line.

“The Coral Project is just working on a different problem set at the moment — and the Coral Project was never meant to be creating the New York Times commenting system,” he said. “They are focusing on helping most publishers on the web. Our business priority was, how do we do moderation at scale? And for moderation at our kind of scale, we needed the automation.

“The Coral stuff became a bit secondary, but we’re going to circle back and look at what it has in the open source world, and looking to them as a model for how to deal with things like user reputation,” he added.

Photo of chattering teeth toy by Wendy used under a Creative Commons license.

Membership programs are paying off for news outlets — and so is helping them set up their programs

If you want readers to donate, you have to ask — often. It sounds obvious, but it’s a strategy many news organizations have been forced to become more comfortable with, and one that takes a lot of resources to really get right.

Before Hawaii’s Honolulu Civil Beat went nonprofit in late June of last year, it was charging $4.99 a month for access to its paywalled site, already significantly lowered from the $19.99 price point it tried out at launch. It had 1,100 recurring subscribers.

Since going nonprofit and starting a membership drive, average recurring monthly donations to the Pierre Omidyar-backed online news site rose to $12 — that’s $144 a year versus $60, even with all the stories free to read. It now has around 1,500 members.

“We haven’t reached our ceiling in terms of the number of donors, which continues to go up month after month,” Ben Nishimoto, director of philanthropy for the Civil Beat, said. “We’re seeing all the metrics of a healthy membership program, and a lot of that has to do with the structure and advice that the News Revenue Hub has offered.”

Civil Beat was among the first five organizations to join the News Revenue Hub, which for a fee takes on the heavy lifting of setting up membership programs — the software, the recruitment and retention, the messaging and maintenance — and facilitates an exchange of insights among participating outlets. The Hub, the brainchild of Mary Walter-Brown, first began last fall as an initiative of the nonprofit news site Voice of San Diego, an exemplar of sustainable digital news membership.

After helping the five pilot news organizations together raise more than a million dollars in half a year, News Revenue Hub has spun off into its own standalone organization, led by Walter-Brown (who’s now its CEO) with digital manager Tristan Loper (also previously of Voice of San Diego). It launched at a time when many news organizations were trying to rethink their mission and messaging post-U.S. election. Walter-Brown is now hoping to continue its early successes, adding five new organizations. (We’ve written separately about The Marshall Project and The Intercept’s challenges; the full list: InsideClimate News, NJ Spotlight, Honolulu Civil Beat, The Lens, PolitiFact, The Marshall Project, The Intercept, CalMatters, Youth Radio, and Rivard Report.) The Democracy Fund will continue to support some of the overhead costs for participating outlets.

“It’s been great from a couple of perspectives: The first, basically learning how to do this kind of fundraising, and then also, realizing there’s a definite benefit to asking often of people, more often than I think as organizations we previously felt comfortable doing, and that’s been gratifying,” Beth Daley, director of strategic development at InsideClimate News, said. “The support we have from individual members is dwarfed by support we get from foundations, for instance, but foundations also like to see that we’re diversifying our funding.” (In addition to money from memberships, InsideClimate News is also looking to raise $25,000 from readers to fund one reporting trip exploring the impacts of climate change across the U.S.)

“At first, I thought, gosh, I hope this wasn’t just this weird enigma. It’s starting to become more predictable, which is what I’m excited about,” Walter-Brown said. “We’re starting to see that membership can work for a PolitiFact, an InsideClimate News — organizations with national and global readers — that it can work for The Lens, NJ Spotlight, Civil Beat. And we’re now seeing the same thing for The Intercept — an international site focused on privacy and surveillance issues — when we weren’t sure how that was going to resonate with readers.” There’s already a bit of a waiting list of organizations eager to join, Walter-Brown said, and the Hub is looking to bring several more into the fold this year.

“We evaluate each client based on whether they have a base big enough and diverse enough to make it worth their while, whether they have the internal staff willing to dedicate the amount of time and energy to this that’s needed, whether they have the buy-in from the top, whether they support the principle of building a true relationship with your audience,” Walter-Brown said. “It’s clear when you go through these conversations. There are some where I just say, ‘You’re not ready, but here are some tools you can use to build your audience, and let’s talk in six months.’ We don’t turn people away with a blanket ‘no.’”

While most of the organizations currently in the Hub are nonprofits, it’s not a requirement: PolitiFact, one of the five original pilot outlets, isn’t a nonprofit. It launched its program just before Donald Trump’s inauguration in January, and has since raised $200,000 in contributions and pledged donations. Three quarters of its members are part of the “informed” or “involved” tiers ($50-$150 and $151-$500); only “a few dozen individuals have contributed $500 or more,” according to Emily Wilkinson, PolitiFact’s then-business development director.

Nor, of course, is a large national following a requirement. NJ Spotlight has added 470 members since it started its membership program, raising $86,000, most at the “engaged” or “informed” levels (a minimum of $35 to $100, then $101-$500), Paula Saha, who oversees events, audience, and donor development at the New Jersey-focused nonprofit news service, told me.

“Most of them came in during our winter drive, but we’ve had a steady trickle since,” Saha said, crediting a well-tailored email campaign that starts with soft asks that get stronger as readers become increasingly familiar with the work NJ Spotlight does. “It’s been really nice to see the steady trickle; with recurring donors, it’s obviously a gift that keeps on giving, quite literally.” NJ Spotlight has worked to impress upon readers the value of memberships: happy hours, coffees, an intimate event for members with Senator Cory Booker (from the dedicated Slack group for News Revenue Hub organizations, it’s also gotten some ideas for events like trivia nights).

The News Revenue Hub, especially by facilitating technical setup and helping organizations understand and use better metrics (syncing Eventbrite with Salesforce, for instance), has freed up outlets to actually get to know their most dedicated readers. Honolulu Civil Beat has been able to host regular community events, such as taking groups out to neighboring islands or continuing its storyteller series, Mariko Chang, the Civil Beat’s membership and events manager, told me.

“Ben [Nishimoto] and I try to call all of our new donors as well, which takes people by surprise,” she said. “We let them know they can come to us if they have concerns. We ask them ways we can do our jobs better. It’s been helpful to have the time and resources now to make those personal connections while we can.”

The Hub itself will remain a small staff through the rest of 2017, but it also receives additional foundation support and is looking to raise half a million more to help with expansion.

“We’re hoping to be able to scale accordingly — eventually there may be different tiers of service, maybe a full-service option where they only need someone part-time and we’re providing more of the copywriting and execution. Then others may want to bring on a full staff like we had at Voice of San Diego, with an events manager, a digital manager,” Walter-Brown said. “I’m excited to explore what the service will look like in the long term, whether it’s an incubator for some organizations, a centralized place where they outsource tasks, for organizations that only want to focus on editorial.”

Chairs of different colors, by Steve, used under a Creative Commons license.

Who’s really driving traffic to articles? Depends on the subject: Facebook (lifestyle, entertainment) or Google (tech, business, sports)

When you’re publishing to Facebook, or tweaking a headline to align with some carefully honed SEO strategy, how closely do you take note of story topic?

New research suggests that news organizations trying to make the most of Facebook referrals and Google search traffic need to be extra discerning about story topic, as some — like lifestyle or entertainment — see the majority of their referral traffic coming from Facebook, while others — like tech, sports, and business — see the lion’s share of their traffic coming through Google search. (The findings were based on one analytics company’s examination of more than 10 million articles published last year by outlets within its network.)

Lifestyle articles, for instance, get more than 87 percent of their external traffic from Facebook, and just 7 percent from Google search. (63 percent of that traffic also came from a mobile device.) On the extremely Google-reliant end are job postings, which get 84 percent of their traffic through Google search versus 12 percent from Facebook. (There were significantly fewer job-related posts among the 10 million stories analyzed, 2,700 posts, compared to 110,000 lifestyle articles or 210,000 sports articles.)

Across the millions of articles analyzed, Facebook referrals accounted for 39 percent of external traffic, Google 35 percent. Other sources, such as Bing or Pinterest or Reddit, often made up less than a percentage point of referral traffic.
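Percentages like these fall out of a simple aggregation: count external referrals per source, then normalize by the total. A tiny sketch — the raw counts below are invented to echo the lifestyle split, not the study’s actual data:

```python
def referral_shares(counts):
    """Convert raw referral counts per source into percentage shares."""
    total = sum(counts.values())
    return {source: round(100 * n / total, 1) for source, n in counts.items()}

# Illustrative numbers only, chosen to mirror the reported lifestyle split:
lifestyle = {"facebook": 8700, "google_search": 700, "other": 600}
print(referral_shares(lifestyle))  # facebook ~87%, google_search ~7%
```

The interesting editorial work is in the per-topic breakdown — running this same normalization within each story category is what surfaces the Facebook-versus-Google divide the report describes.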

The report also breaks down what words often appear in stories from each topic (word cloud alert). Here, for instance, is U.S. presidential politics (hello, Drudge Report):

Here’s sports (where Twitter, at 10 percent, is actually a not insignificant source of referral traffic):

You can read the full report here.

This contest is looking for more ideas on innovative ways to present factchecks (grand prize: $10,000)

If most fact-checking as it’s presented to readers today bores you, now’s your chance to figure out more exciting formats — and maybe win a big cash prize doing it.

The International Center for Journalists is running a contest called TruthBuzz that seeks ideas to make fact-checking and debunking stories more appealing to readers, and to increase their chances of going viral.

From the contest description:

We want your creative solutions for taking fact-checking beyond long-form explanations and bullet points. We’re looking for ideas — from everyone, not just journalists — that turn fact-checking into engaging, visual and interactive stories that are instantly understandable and shareable.

A successful entry to TruthBuzz will refute or clarify a false or misleading report or statement in an engaging, entertaining way that convinces audiences of its veracity and encourages them to share it.

Any digital format in any language is welcome, from individuals or teams (though the application form itself must be filled out in English). What’s more, the grand prize winner will get $10,000 in cash ($5,000 and $2,500 for the second and third-place awardees), sponsored by the Craig Newmark Foundation.

Curious what the TruthBuzz judges are looking for, exactly? ICFJ recorded a webinar with a few of those judges, detailing their thinking. Some important criteria:

— “We want to be surprised. We want rich information, but conveyed in a way that surprises us,” Aimee Rinehart of First Draft News said. “How are you telling the story? Is how you’re telling it authoritative? Are you professionally developing it? Can we see the mic in the frame? Can we see sloppy code? Those are things I think that would prevent someone from winning the contest.”

— “Content is going to be at least 60, 70 percent for me, when it comes to the entries. And then we’ll come to the presentation of things,” Shaheryar Popalzai, an ICFJ Knight fellow, said. There are also existing tools that can help you improve the presentation of a fact-checking item, if you don’t have the resources yourself.

More FAQs are answered at the end of the video.

The contest ends June 30. You can enter it here.

How NPR considers what new platforms — from smartwatches to fridges — will get its programming

Here is a (far from complete) list of places where you can listen to NPR programming: Your old-school radio. Your car radio. Your smartphone. Your smartwatch. Your Amazon Echo. Your Google Home. Your refrigerator?

If you own a Samsung Family Hub fridge (which features a giant screen on one of its doors), you can get a bulletin briefing of your calendar for the day, as well as an hourly news update, via NPR. (That’s in the United States. In Europe, the news partner is Upday; in Korea, it’s Kakao.)

“Folks in the building have the same questions. I heard somebody talking about the fridge the other day — ‘Is that true, we’re on a fridge?’ I said, yeah,” Ha-Hoa Hamano, NPR’s senior product manager, told me, amused at my excitement. (Full disclosure: I have an inexplicable obsession with this fridge thing, which we first wrote about here five long years ago. I fixated on it when writing about Upday’s expansion across Europe, as well.) “But we take into consideration a lot when approaching these — the level of effort it takes on our part, whether the audience there makes sense for us, whether our audience is there already, whether we’re going to gain new audience from it. Generally, we try to get to ‘yes’ faster than we try to get to ‘no.’”

Samsung is already a technology partner for NPR, and approached NPR with a list of Samsung devices it wanted to see carry NPR — “the pitch for the fridge was that the kitchen is the new hub for family and entertainment interaction early in the morning.” NPR One is available on the newer versions of the Samsung Gear smartwatch; the fridge integration was an easy extension.

“As opportunities like these come up, we can talk monthly, or weekly at times. For a lot of upcoming things, sometimes it works out for us to collaborate super heavily on a project, but sometimes it’s a little more far-reaching,” Hamano said. “We have a super lean team, so sometimes it has to be, ‘yes, but not right now,’ or ‘yes, but this may take a lot more time.’” The core team that works with new platform projects is indeed lean: Hamano, a legal team that reviews contracts with partners, and a designer — “the same few people working on a dozen of these projects at once.”

The NPR One API facilitates some of these partnerships. Through its API service, advertising (er, sponsorship) is built in for devices with or without screens.

Its developer center is open, and developers working on projects of any scale can first dip their toes into what having NPR on a certain device might look like. Then there’s a formal queue (mostly of inbound requests) that ranges from Kickstarter-backed startups designing a quirky little device that allows for hands-free control of a phone to larger projects, like fridges and Lexus cars. The Kickstarter-backed device, for instance, “went out and ran a beta with users on personal keys for as long as they could, and when they were ready, came to us for full certification,” Hamano said.
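To make the flow concrete, here’s a minimal sketch of what a device integration’s first call against a listening API might look like. The host, path, `channel` parameter, and token handling are illustrative assumptions, not the documented NPR One API surface — a real integration would pull all of these from NPR’s developer center after OAuth registration.

```python
import json
import urllib.request

# Hypothetical values for illustration only -- a real device would use the
# API host, endpoint paths, and OAuth credentials issued via NPR's
# developer center.
API_BASE = "https://listening.example.npr.org/v2"  # assumed, not real
ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"                  # obtained via OAuth

def build_recommendations_request(channel: str = "npr") -> urllib.request.Request:
    """Build (but don't send) an authenticated request for a listening queue."""
    url = f"{API_BASE}/recommendations?channel={channel}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
    )

def fetch_recommendations(channel: str = "npr") -> list:
    """Send the request. In this sketch, each returned item would carry
    audio URLs plus the built-in sponsorship slots mentioned above."""
    with urllib.request.urlopen(build_recommendations_request(channel)) as resp:
        return json.loads(resp.read())["items"]
```

The point of the personal-key beta stage Hamano describes is that a developer can run exactly this kind of authenticated polling loop on their own credentials before ever entering the certification queue.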

Some projects, like an NPR One radio made using a Raspberry Pi, don’t require any commercial licensing, and any interested tinkerer is free to create their own little radio for personal use, a project that doesn’t require tech review and legal certification on NPR’s part.
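For the hobbyist case, the simplest version of a Pi radio is just a headless stream player. A rough sketch, where the stream URL is a placeholder and the choice of `mpv` as the player is my assumption, not part of NPR’s project:

```python
import shutil
import subprocess

# Placeholder: a tinkerer would substitute their local member station's
# public stream URL (or a feed obtained with a personal developer key).
STREAM_URL = "https://example.org/member-station/live.mp3"  # hypothetical

def build_player_command(url: str, player: str = "mpv") -> list:
    """Assemble the command line for a headless audio player."""
    return [player, "--no-video", "--really-quiet", url]

def play(url: str = STREAM_URL) -> None:
    """Launch the stream, e.g. at boot on a Raspberry Pi."""
    cmd = build_player_command(url)
    if shutil.which(cmd[0]) is None:
        raise RuntimeError(f"{cmd[0]} not found; install it first")
    subprocess.run(cmd, check=True)
```

Wiring `play()` to a boot-time service (and a physical volume knob via GPIO) is left to the tinkerer; nothing here touches NPR’s commercial licensing or certification path, which is the point of the personal-use carve-out.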

On its end, NPR has to be cautious about maintaining technical standards for partner platforms, since “we definitely don’t want to be out there on a platform where users are super frustrated and think issues are coming from NPR when it’s a problem with the device,” Hamano said. Sometimes tech partners will pass along bug reports from users. (So far, there hasn’t been a technical failing serious enough to cause NPR to pull out of a partnership.) With projects like NPR on Amazon’s Alexa, keeping up with the evolving features of the platform itself is a challenge on its own.

Diehard NPR One or NPR app users, for instance, want specific features — being able to binge-listen podcasts in a preferred order, for instance — or want the exact same experience in their NPR app as on their Amazon Echo. A few listening habits have emerged there, according to Hamano, such as heavy usage on weekday mornings (halved on weekday evenings) and most usage shifting an hour or more later on Saturday and Sunday mornings.

“With all these platforms, it is a challenge for us to figure out: Is it the platform? Or is it the product?” Hamano said. “We always try to keep our baseline metrics about audience size and listening hours, and stay lean in terms of reacting to what audiences tell us needs to be addressed.”

A Santa Claus decor item and a vintage radio on top of the refrigerator in the Lustron Home at the 1950s exhibit at the Ohio History Center Museum in Columbus, Ohio. Photograph by Sam Howzit, used under a Creative Commons license.

This site publishes high-touch, time-intensive data visualizations (and has a business that sustains it)

Over 7,000 artists played in the New York City area in 2013. Only 21 of those later made it, really made it, headlining at a venue with an over 3,000-person capacity — among them, bigger names like Chance the Rapper, X Ambassadors, Sam Smith, and Sylvan Esso.

I learned this sort of random but fascinating tidbit from a data visualization titled “The Unlikely Odds of Making it Big,” from the site The Pudding.

The Pudding is the home to high-touch, painstakingly crafted data visualizations — what the site calls “visual essays” — that are distinct in their obsessive complexity over points of cultural curiosity. Most pieces stand wholly apart from the U.S. news cycle; no anxiety-inducing interactives around budget, taxes, health care. Want to see everywhere jazz legend Miles Davis is mentioned across Wikipedia, and how he’s connected to other people, recordings, and places? Here you go.

(Other things I’ve discovered browsing The Pudding’s interactives: that the town where I live is probably not the microbrew capital of the U.S., that there’s pretty strong evidence that NBA refs favor the home team, that the song “No Diggity” by Blackstreet is irrefutably timeless, at least based on Spotify play counts, compared to its 1990s peers.)

The Pudding is the newly partitioned-off editorial arm of Polygraph (!), a three-person data visualization company started two years ago by Matt Daniels, a consultant with a digital marketing background. Daniels and his partners Russell Goldenberg and Ilia Blinderman publish sumptuous visualizations that scratch personal itches. The Pudding also works closely with freelancers on pretty much whatever questions they’re interested in exploring visually, as long as it’s based on data. Freelancers are paid a flat rate of $5,000 for each piece.

“We’re all over the map. But basically, every individual picks their idea, we vet it ourselves and make sure the data’s there, that it’s interesting, and we just go off and do it,” Goldenberg told me. (The ideas backlog for The Pudding is listed out in this public Google Doc.) “Our goal is for The Pudding to be a weekly journal. We specifically seek out stories that aren’t news related, because we don’t want to compete in that space. The Washington Post, The New York Times, FiveThirtyEight, lots of places are doing interactive graphics well, doing multiple data journalism pieces per day. That doesn’t jive with what we want to be.”

Goldenberg previously worked at The Boston Globe as an interactive news developer and Blinderman’s a science and culture writer who studied data journalism at Columbia. Despite journalistic credentials, The Pudding (and Polygraph) isn’t aiming to be a journalistic enterprise. The team might in the course of developing a visualization call up a few people to run questions by them, or have to create their own data source (this freelancer’s exploration of the Hamilton musical libretto, for instance), but most of the data it builds interactives on is already available (no FOIAing needed).

Work gets promoted on The Pudding site, and through the Polygraph and Pudding newsletters, which will eventually merge into one. Polygraph’s newsletter sharing the latest visualizations has about 10,000 subscribers; The Pudding’s has about 1,000 after launching this year. Otherwise, promotion is largely word of mouth — and some pieces have been able to spread widely that way. They’re definitely open to collaborating with “more visible partners,” Goldenberg told me, though “we’re not being aggressive about our outreach.”

(A similar project popped up last year called The Thrust, which wanted to serve as a home for data visualization projects that didn’t fit with traditional news organizations or into their news cycles. The creators left for full-time jobs at ProPublica and The New York Times and the site has stopped updating.)

The moneymaking side of Polygraph functions like a digital agency, with Daniels, Goldenberg, and Blinderman pushing out projects for large clients like YouTube, Google News Lab, and Kickstarter. Goldenberg wouldn’t disclose how much they charge for these sponsored pieces, but revenue generated from a handful of client projects funds the entire editorial side, including paying for freelancers’ pieces and the three current full-time staffers’ salaries.

“We try to take on client work to just support our staff and basically to sustain The Pudding, with about three to six freelancers each quarter — what we’re doing is maybe kind of backwards,” Goldenberg said. “The thing about our editorial work is that it also essentially serves as marketing for us. Generally, when we publish a new project on The Pudding, we get a few business inquiries. It’s a nice symbiotic relationship.”

Polygraph is also hiring for two more full-time positions — a “maker” and an editor — both at competitive salaries, which suggests that its client-side business is going quite well. Its ambitions looking forward, though, are straightforward: publish more interesting data-driven visualizations.

“We want to push forward the craft of visual storytelling, and these are not things you do on a daily basis,” Goldenberg said. “We still want to take our time and spend a couple of weeks, maybe a month or more, on a project. Unless we have dozens of people working with us, we wouldn’t really be able to publish more than once a week or so. We’re mostly just trying to establish that rhythm, and keep pushing out good pieces.”

Bloomberg is launching a 24-hour, 7-days-a-week news channel that streams exclusively on Twitter

If you’re glued to Twitter 24 hours a day, 7 days a week anyway, Bloomberg will soon be there for you at all hours of the day, every day of the week.

Twitter and Bloomberg have partnered to create a 24-hour streaming news service, currently unnamed and launching sometime in the fall, according to The Wall Street Journal. More details will be announced during Monday’s NewFronts, an Interactive Advertising Bureau-organized extravaganza where digital media companies pitch themselves to advertisers. (Bloomberg is up at 3 p.m.)

Bloomberg, which already has 24-hour programming on plenty of other platforms from YouTube to Roku, will be producing original material for Twitter exclusively, and will also apparently incorporate video taken by Twitter users. From the Journal:

The channel, which has yet to be named and is expected to begin operating this fall, won’t simply rebroadcast footage from Bloomberg’s existing television operation, but will be made up of live news reporting from the news outlet’s bureaus around the world, as well as a curated and verified mix of video posted on Twitter by the social-media platform’s users.

The financial terms of the deal were not disclosed, though the streaming service will be ad-supported. (I asked a spokesperson to clarify and was told to wait for the 3 p.m. presentations.)

Bloomberg and Twitter have been working together since last year, when the two partnered to livestream the presidential debates. The continued partnership falls in line with Twitter’s doubling down on its value as a place for breaking news and live video.

In its first quarter earnings last week, Twitter reported some much-needed good news for the company on that front, saying that its “total ad engagement grew 139 percent year-on-year in the first quarter” (while the average cost per engagement “fell 63 percent year-on-year”). It also claimed to have reached more than 45 million unique viewers for more than 800 hours of live video through its various content partnerships, such as with the NFL for its Thursday night games (though that meaty partnership has since been snatched up by Amazon).
