Mixed reality, computer vision, and brain–machine interfaces: Here’s the future The New York Times’ reborn R&D lab sees


When it comes to emerging technologies, there’s a lot to keep newsrooms busy. Virtual reality has promise, and so do chatbots. Ditto, too, for connected cars and connected homes. In fact, the challenge for most newsrooms isn’t figuring out potential new platforms for experimentation but rather determining which new technologies are worth prioritizing most.

At The New York Times, anticipating and preparing for the future is a job that falls to Story[X], the newsroom-based research and development group it launched last May. A “rebirth” of the R&D Lab the Times launched in 2006, Story[X] was created to look beyond current product cycles to how the Times can get ahead of developments in new technology. (The previous R&D Lab’s Alexis Lloyd and Matt Boggie landed just fine, taking senior positions at Axios.)

Heading up the unit, which will total six people when fully staffed, is Marc Lavallee, the Times’ former head of interactive news. Lavallee said that while Story[X] will always ask how practical new technologies are, the group is more likely to err on the side of the speculative than the safe — a mental model that isn’t always easy to adopt in newsrooms full of people trained to ask pointed questions like: “Is this even real?”

While that skeptical lens is helpful, “we want to have a sense that, even if we feel like something isn’t ready today, if we feel like there’s some sense of inevitability, we want to be thinking about it and experimenting,” Lavallee said. A lot of these developments will be outside the Times’ control, but if Story[X] does its job properly, there will be “fewer fire drills induced by some keynote from some tech company that changes the game and requires us to play catch-up.”

In a wide-ranging conversation, Lavallee and I spoke about how the Times evaluates new technologies, which areas he believes are most ripe for experimentation, and what technology news organizations aren’t paying enough attention to. Here’s an edited and condensed transcript of our conversation.

Lavallee: With VR, we’re doing a lot of experimenting around telling individual stories that bring readers to another place. There’s a rich vein of exploration to be done there. I don’t think we’ve even fully explored that. That’s a good place to be working while we wait to see what the adoption of the current and next generation of these devices is.

The reason I’m fixated on whether there are non-linear stories for us in VR is that non-linear storytelling, to me, feels like the thing that’s actually going to drive wider adoption. Social experiences and gaming feel like the way we’ll see this become more of a mass experience. And I’m not sure that there’s necessarily a thing for us to do there. We have to keep doing what we’re doing and wait for other parts of the ecosystem to flesh out.

I do think that there is tremendous potential for us in the AR space. That’s where we can do things that are more utility-driven, which is where we’re seeing pickup today through, for example, being able to place a virtual IKEA couch in your living room to see whether it would fit.

Bilton: We haven’t talked too much about advertising, which is also a part of the mandate at Story[X]. What are the potential innovations there?

Lavallee: There are a couple of ways we’re working through it. I’m of the opinion that the full scope of the potential for The New York Times in the 21st century is incredibly broad, because we have this brand flexibility. It’s something that is basically with you all day every day, helping guide every decision you make and being that trusted ally in your life.

We’re not going to do that alone. It does require a different kind of partnership with a bunch of different kinds of companies. The tech space is the easiest to find those kinds of opportunities. I would say the partnership with Samsung is the first of a genre of partnership that we’ll see much more of over the next couple of years, where neither of us would be able to do something like that 360 video of that scale alone. But together, we can each play our part in speeding the evolution of the technology and content in parallel, as opposed to waiting for one to happen and then doing the other.

Bilton: We’ve talked about a lot of different potential areas of innovation. Is there one that you don’t hear as much chatter about that you think has a lot of promise?

Lavallee: There’s a cluster of ideas that combine what’s happening in the quantified self movement and what’s happening in your brain at any point in time. That leads to the kind of brain-machine interface stuff that Facebook was demoing last month. They’re saying that within two years they’ll have a skull cap that will let you think at 100 words per minute.

I see that as technology that will let us understand how much attention you’re paying while reading or listening to something, what you retained, what you perk up at, and how the content experience can adapt and understand what kind of learner you are. I think there is tremendous potential to do that so the content is more tailored to your level of interest. That’s something that I’m not aware of media organizations diving into yet, but I think it’s a huge frontier for us. Over the next few years we’re going to be thinking a lot more about what’s going on inside readers’ heads.

Photo of Google Cardboard VR by Othree used under a Creative Commons license.
