Tech giants are using open source frameworks to dominate the AI community


Tech giants such as Google and Baidu spent $20 billion to $30 billion on AI last year, according to a recent McKinsey Global Institute study. Of that spending, 90 percent fueled R&D and deployment, and 10 percent went toward AI acquisitions.

Research plays a crucial role in the AI movement, and tech giants have to do everything in their power to appear credible to the AI community. AI is driven by research advances and state-of-the-art technology that move very quickly, so there is little business case for closed infrastructure solutions: within a few months, everything will be totally different.

In such a situation, the only winning strategy for tech giants is to offer open source solutions to attract members of the AI community and eventually become part of the AI community themselves. This is a relatively new model in the tech industry.

SRCCON Recap: Developing New Live Coverage Story Formats

By TIFF FEHR

Six generations of New York Times live coverage story forms.

The Times uses many different page layouts and media in our report. Of course this includes articles, our essential story form, but we also spend significant resources and time supporting other important forms: photos, slideshows, video, audio, interactives, story collections and much more.

ABRA: An enterprise framework for experimentation at The Times

By JOSH ARAK and KENTARO KAJI

Systematic experimentation — in the form of A/B and multivariate testing — has fast become embedded in the workflow and culture of teams across The New York Times: Product teams test new features; newsroom editors test the framing of individual stories; and marketing tests to learn what it takes to turn casual visitors into subscribers.

React, Relay and GraphQL: Under the Hood of the Times Website Redesign

The New York Times website is changing, and the technology we use to run it is changing too.

As the new site rolls out over the next several months, a look under the hood will reveal a number of modern technologies designed to make the site faster and easier to use — for readers, most importantly, but also for our developers.

At the center of this has been our adoption of React, Relay and GraphQL.

Take a look under the hood …

The problem we’re solving

More than a year ago, when we first started talking about the technology that would power our new website, simplifying our stack was one of our biggest priorities.

Our current desktop and mobile websites are written in entirely different languages: the desktop is predominantly PHP; mobile runs on Node. Other products, such as our foreign-language sites (Español, 中文), run on their own unique codebases, and some do not even rely on our standard CMS, Scoop. All these sites read data from different origins and endpoints in different ways. It is hard to find a common denominator between them all.

If I want to make a new app tomorrow, chances are I need to:

  • Obtain credentials for multiple web services
  • Write an HTTP client (for the umpteenth time) to talk to said services
  • Create my view layer, probably from scratch, because there is no real central repository for NYT components

We thought it would be nice if there was one place to add and retrieve data and one way to authenticate against it. It would also be helpful if there was a common language and repository for creating and reusing components. If a new developer joins our team, I want to point them at a single page of documentation that explains how to get up and running — and preferably start building apps the same day.

This is not just a dream scenario. We are moving towards this reality, and that future is Relay and GraphQL.

GraphQL and Relay

Relay is an open source project by Facebook that exposes a framework they have been using internally for years. It is the glue that binds components written in React to data fetched from a GraphQL server. Relay is written in JavaScript, and we are using it as the basis for our new website’s codebase to power our desktop and mobile versions as one on Node.

GraphQL is “a query language for APIs,” with a reference implementation written in JavaScript for Node. Facebook developed it to provide a data source that can evolve without breaking existing code and to favor speed on low-powered devices and low-quality mobile connections. The schema can evolve, but should never break. Products are described in graphs and queries, instead of the REST notion of endpoints.

It works like this: GraphQL queries contain nodes, and only the nodes that are requested are returned in a given response. GraphQL nodes do not have to represent a flat data structure — each node can be resolved in a custom manner. Here is a simple example of a GraphQL query:

{
  me {
    name
    age 
    friends {
      id
      name  
    } 
  }
}

It doesn’t matter how the query is resolved. The hard initial work is designing it in a way that will survive redesigns, backend migrations and framework changes.

A query might be resolved by multiple data sources: REST APIs, a database, a flat JSON file. A product might begin by returning data from a simple CSV file, and later grow to return data from a cluster of databases or remote storage like BigTable.
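For illustration, here is a minimal graphql-js sketch of a node resolved in a custom manner, with one field pulled from a REST endpoint. The type, the field and the URL are hypothetical (not The Times's schema), and it assumes a fetch implementation such as node-fetch:

const fetch = require('node-fetch');
const { GraphQLObjectType, GraphQLString, GraphQLInt } = require('graphql');

// A hypothetical type whose fields come from different places.
const PersonType = new GraphQLObjectType({
  name: 'Person',
  fields: {
    name: { type: GraphQLString },   // resolved from the parent object itself
    age: {
      type: GraphQLInt,
      // resolved from a (hypothetical) REST endpoint; the client never sees the difference
      resolve: person =>
        fetch(`https://api.example.com/people/${person.id}`)
          .then(res => res.json())
          .then(data => data.age),
    },
  },
});
// PersonType would then be wired into the schema's query type.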

GraphQL is simply a clearinghouse for queries. It also comes with a tool called GraphiQL that allows you to view and debug your queries visually. And Facebook has open-sourced a library, called DataLoader, that makes it easy to query multiple backends asynchronously without having to write custom Promise logic that ends up in callback hell.
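Along the same lines, a minimal DataLoader sketch (the batch endpoint is hypothetical) shows how per-item lookups are coalesced into one request without hand-rolled Promise bookkeeping:

const DataLoader = require('dataloader');
const fetch = require('node-fetch');

// Batch function: receives every id requested in one tick and returns values in the same order.
const personLoader = new DataLoader(ids =>
  fetch(`https://api.example.com/people?ids=${ids.join(',')}`)   // hypothetical batch endpoint
    .then(res => res.json())
);

// Inside resolvers, individual loads are coalesced and cached per request:
// personLoader.load('1') and personLoader.load('2') become a single HTTP call.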

Relay acts as a partner to GraphQL and React. A top-level query usually happens on a route — a URL pattern that loads a component when it is matched.

// queries/Page.js

import { graphql } from 'react-relay';

const PageQuery = graphql`
  query Page_Query($slug: String!) {
    viewer {
      ...Page_viewer
    }
  }
`;

// routes/index.js
import React from 'react';
import Route from 'found/lib/Route';
import Page from 'routes/Page';
import PageQuery from 'queries/Page';
<Route
  path=":slug"
  Component={Page}
  query={PageQuery}
  render={renderProp}
/>

GraphQL “fragments” are co-located with your React components. A component describes what slices of data it needs on certain types. Relay queries “spread” the fragments of other components. In this particular case, the “slug” is extracted from the URL path and passed to our GraphQL query. The Page component will be populated with a “viewer” prop that contains the data specified below:

// routes/Page/index.js
import React from 'react';
import { graphql, createFragmentContainer } from 'react-relay';
import styles from './Page.scss';
// (Error and Media are local presentational components)
const Page = ({ viewer: { page } }) => {
  if (!page) {
    return <Error />;
  }

  const { title, content, featuredMedia } = page;

  return (
    <article className={styles.content}>
      <header>
        <h1 className={styles.title}>{title}</h1>
      </header>
      {featuredMedia && <Media media={featuredMedia} />}
      <section dangerouslySetInnerHTML={{ __html: content }} />
    </article>
  );
};

export default createFragmentContainer(
  Page,
  graphql`
    fragment Page_viewer on Viewer {
      page(slug: $slug) {
        title
        content
        featuredMedia {
          ... on Image {
            source_url
          }
          ...Media_media
        }
      }
    }
  `
);

As React components become more nested, queries can become increasingly complex. In Relay Classic, all of the query-parsing logic happened at runtime. As of Relay Modern, queries are now parsed at build time and are static at runtime. This is great for performance.

One Caveat

Migrating from Classic to Modern can be a big lift. The project has provided a compatibility guide to allow your code to incrementally adopt new features, but the fragmented nature of the Node ecosystem can make this complex. Your codebase might be on the latest version, but some of your dependencies might be pinned to an earlier version.

We handle a lot of the complexity around upgrades and dependencies using our open source project, kyt. Relay Modern is such a massive improvement that it requires a “cutover” of old to new code.

However, the benefits are exciting. By default, GraphQL queries are sent to the server by Relay as an HTTP POST body containing the text of a query and the variables needed to fulfill it. Queries compiled by Relay Modern at build time can be persisted to a datastore, and IDs can be sent to the GraphQL server instead. We look forward to taking advantage of this optimization.
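As a rough sketch of that default behavior with relay-runtime, the network layer below simply POSTs the compiled query text and variables to a placeholder /graphql endpoint; with persisted queries, a stored ID would take the place of operation.text:

import { Environment, Network, RecordSource, Store } from 'relay-runtime';

// POST the compiled query text and its variables to the GraphQL server.
function fetchQuery(operation, variables) {
  return fetch('/graphql', {                 // placeholder endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: operation.text,                 // with persisted queries, send an id here instead
      variables,
    }),
  }).then(response => response.json());
}

export default new Environment({
  network: Network.create(fetchQuery),
  store: new Store(new RecordSource()),
});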

It has been exciting moving our codebase to React, and leaning on the great features kyt provides out of the box, such as CSS Modules. We are finally creating the central component repository we’ve longed for.

As we transition away from using REST APIs, we no longer have to query the canonical representation of an article when all we really need in some display modes is five to seven fields.

When we want to update our design across all products, we will no longer have to make changes across several codebases. This is the reality we are moving towards. We think Relay and GraphQL are the perfect tools to take us there.

Scott Taylor is a senior software engineer on the Web Frameworks team.



Designing a Faster, Simpler Workflow to Build and Share Analytical Insights

By EDWARD PODOJIL, JOSH ARAK and SHANE MURRAY

Data is critical to decision-making at The New York Times. Every day, teams of analysts pore over fine-grained details of user behavior to understand how our readers are interacting with The Times online.

Digging into that data hasn’t always been simple. Our data and insights team has created a new set of tools that allows analysts to query, share and communicate findings from their data faster and more easily than ever before.

One is a home-grown query scheduling tool that we call BQQS — short for BigQuery Query Scheduler. The other is the adoption of Chartio, which our analysts use to visualize and share their results.

The result has been more analysts from more teams deriving insights from our user data more easily. At least 30 analysts across three teams now have almost 600 queries running on a regular cadence on BQQS, anywhere from once a month to every five minutes. These queries support more than 200 custom dashboards in Chartio. Both represent substantial improvements over our previous model.

What problems were we trying to solve?

This effort began when we migrated our data warehousing system from Hadoop to Google’s BigQuery. Before we built new tools, we worked with analysts to come up with several core questions we wanted to answer:

  • What patterns and processes did the analysts use to do their work?
  • Which of those processes could we automate, in order to make the process more hands-off?
  • How could we make it easier for our growing list of data-hungry stakeholders to access data directly, without having to go through an analyst?
  • How could we make it easy to move between business intelligence products, to avoid becoming attached to eventual legacy software?

Until the migration to BigQuery, analysts primarily queried data using Hive. Although this allowed them to work in a familiar SQL-like language, it also required them to confront uncomfortable distractions like resource usage and Java errors.

We also realized that much of their work was very ad-hoc. Regular monitoring of experiments and analyses was often discarded to make way for new analyses. It was also hard for them to share queries and results. Most queries were stored as .sql files on Google Drive. Attempts to solve this using Github never took off because it didn’t fit with analysts’ habits.

The act of automating queries was also unfamiliar to the analysts. Although the switch to BigQuery made queries much faster, analysts still manually initiated queries each morning. We wanted to see if there were ways to help them automate their work.

Query Scheduling with BQQS

Before we considered building a scheduling system in-house, we considered two existing tools: RunDeck and AirFlow. Although both of these systems were good for engineers, neither really provided the ideal UI for analysts who, at the end of the day, just wanted to run the same query every night.

Out of this came BQQS: our BigQuery Query Scheduler. BQQS is built on top of a Python Flask stack. The application stores queries, along with their metadata, in a Postgres database, and uses Redis to enqueue queries appropriately. It started out only able to run data pulls going forward in time, but we eventually added backfilling capabilities to make it easier to build larger historical datasets.

A testing dashboard in BQQS

This solution addressed many of our pain points:

  • Analysts could now “set it and forget it,” barring errors that came up, effectively removing the middleman.
  • The system stores analysts’ actual work without version control being a barrier: the app records every change to a query, so it’s easy to see how and when something changed.
  • Queries would no longer be written directly into other business intelligence tools or accidentally deleted on individual analysts’ computers.

Dashboards with Chartio

Under our old analytics system, “living” dashboards were uncommon. Many required the analyst to update data by hand, were prone to breaking, or required tools like Excel and Tableau to read. They took time to build, and many required workarounds to access the variety of data sources we use.

BigQuery changed a lot of that by allowing us to centralize data into one place. And while we explored several business intelligence tools, Chartio provided the most straightforward way to connect with BigQuery. It also provided a clean, interactive way to build and take down charts and dashboards as necessary.

One example of a dashboard generated by Chartio

Chartio also supported team structures, which meant security could be handled effectively. To some degree, we could make sure that users had access to the right data in BigQuery and dashboards in Chartio.

Developing new processes

Along with new tools, we also developed a new set of processes and guidelines for how analysts should use them.

For instance, we established a process to condense each day’s collection of user events — which could be between 10 and 40 gigabytes in size — into smaller sets of aggregations that analysts can use to build dashboards and reports.

Building aggregations represents a significant progression in our analytical data environment, which previously relied too heavily on querying raw data. It allows us to speed queries up and keep costs down.

In addition, being able to see our analysts’ queries in one place has allowed our developers to spot opportunities to reduce redundancies and create new features to make their lives easier.

Moving forward

There’s much more work to do. Looking ahead, we’d like to explore:

  • How to make it easier to group work together. Many queries end up being nearly identical, with slightly different variables and thus slightly different results. Are there ways to centralize aggregations further, so that there are more common data sets, and to better ensure data quality?
  • Where it makes sense to design custom dashboard solutions, for specific use cases and audiences. Although Chartio has worked well as a solution for us with a smaller set of end-users, we’ve identified constraints with dashboards that could have 100+ users. This would be an excellent opportunity to identify new data tools and products that require the hands of an engineer.

Shane Murray is the VP of the Data Insights Group. Within that group, Josh Arak is the Director of Optimization and Ed Podojil is Senior Manager of Data Products.



Headline Balancing Act

By ANDREI KALLAUR and MICHAEL BESWETHERICK

The New York Times can be read on your phone, tablet, laptop, and many other networked screens, and it’s impossible to know in advance how every headline will appear on every display. Sometimes a headline wraps just fine. But many times it doesn’t, introducing unsightly widows. Even when there aren’t strict widows, instances where one line is dramatically shorter than the others can still hurt legibility and reading flow.

These blemishes are easily fixed in print. On a fixed canvas, we can fit copy to fill a space, and designers can work with editors to get the text to behave just right. On the web, where space is dynamic, we can’t adjust layouts by hand. But that doesn’t mean we have to just accept bad typography; we just have to use a different approach: translate and codify good design guidelines (which can be intuitive and circumstantial) into a concrete, reusable set of instructions.

We have made several attempts to tackle this problem. For a while, we were relying on Adobe’s balance-text jQuery plugin on special feature articles. While the result looked great, performance was not ideal: sharp-eyed readers would see the headline update after the page’s fonts loaded. And since the headline is one of the first things someone will look at, this was not great.

The previous jQuery plugin (from Adobe) in action.

So during our Maker Week last summer, I suggested coming up with a more robust headline balancer that could be used anywhere — not just special features. After seeing some examples of bad wrapping, a few engineers agreed to search for a better solution. The winning idea came from one of our interns, Harrison Liddiard. He came up with a lightweight implementation (without a jQuery dependency, even!) that gave us what we were looking for.

Michael Beswetherick proceeded to make this script ready for production. Combing through hundreds of headlines of varying lengths, we measured the effectiveness and efficiency of our balancer, adjusting based on what we saw. You can see the before/after for just a few of the headlines:

And now … it can be yours

We’re more than a little excited to release our work on Github. Now, we realize that a piece of code that only works on headlines might not be very useful, so we’ve abstracted our solution and named it text-balancer:

https://github.com/NYTimes/text-balancer (and also, an npm module)

(Now, remember: moderation is best in all things. You should apply this selectively, not to everything you can get your hands on. We suggest headlines, blockquotes, and other places where you’re using large display type. We do not recommend using this on body type, buttons, or navigational links.)

Wondering how text-balancer actually works? It’s a binary search algorithm applied to a text element’s max-width: we repeatedly adjust the max-width until the element is squeezed right up to the point where making it any narrower would spill the text onto another line.

Here it is in action: (slowed down so you can see how it works)

We calculate each candidate max-width as the midpoint between a lower bound (which starts at 0) and an upper bound, then move one of the bounds depending on whether the updated max-width makes the text fall onto another line.

One of the more subtle aspects of text-balancer is that the text element always keeps the same number of lines it had before. It also re-adjusts when the browser is resized; all you need to do is set it up once and you can rest assured that the text will always be balanced.
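A simplified sketch of that idea (not the actual text-balancer source) might look like the following, with a hypothetical .balance-text class marking the elements to balance:

// Binary-search the tightest max-width that keeps the element's original line count.
function balanceText(element) {
  element.style.maxWidth = '';                 // reset to the natural width
  const naturalHeight = element.offsetHeight;  // height at the original number of lines
  let bottom = 0;                              // known to be too narrow (or untested)
  let top = element.offsetWidth;               // the full width always fits

  while (top - bottom > 1) {
    const mid = (bottom + top) / 2;
    element.style.maxWidth = `${mid}px`;
    if (element.offsetHeight > naturalHeight) {
      bottom = mid;                            // the text spilled onto another line: go wider
    } else {
      top = mid;                               // same line count: try squeezing further
    }
  }
  element.style.maxWidth = `${top}px`;         // tightest width that preserves the line count
}

// Balance once, and re-balance whenever the browser is resized.
const targets = document.querySelectorAll('.balance-text');   // hypothetical selector
targets.forEach(balanceText);
window.addEventListener('resize', () => targets.forEach(balanceText));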

One Last Thing

When we were first testing it, we kept noticing that our text…well, didn’t actually look balanced. Finally, we figured out that we were running text-balancer before the headline font had finished loading. So: you should wait to run text-balancer until after your fonts have loaded.

We looked for a way to detect when our fonts had loaded and came across Bram Stein’s Font Face Observer (https://fontfaceobserver.com/). Calling the observer’s load method returns a promise that tells us the right time to balance our text.

const chelt = new FontFaceObserver('cheltenham');

chelt.load().then(() => {
  console.log('fonts have loaded yay');
  textBalancer.balanceText();
});

What Else?

In the future, we’d like to be able to place line breaks with awareness of style-guide conventions: not splitting within names and phrases, not splitting after a lowercase word, and so on. (If someone wants to add this and send in a pull request, we won’t say no.)

In the meantime, give text-balancer a try and let us know what you like or don’t like!

A balanced headline on an NYTimes.com article.



Building a Cross Platform 360-degree Video Experience at The New York Times

By THIAGO PONTES and MAXWELL DA SILVA

Over the past few months, 360-degree videos have gained a lot of traction on the modern web as a new immersive storytelling medium. The New York Times has continuously aimed to bring readers as close to stories as possible. Last year we released the NYT VR app with premium content on iOS and Android. We believe VR storytelling allows for a deeper understanding of a place, a person, or an event.

This month, we added support for 360-degree videos into our core news products across web, mobile web, Android, and iOS platforms to deliver an additional immersive experience. Times journalists around the world are bringing you one new 360 video every day: a feature we call The Daily 360.

The current state of 360 videos on the Web

We’ve been using VHS, our New York Times Video Player, for playback of our content on both Web and Mobile Web platforms for the last few years. Building support for 360 videos on those platforms was a huge challenge. Even though the support for WebGL is relatively mature nowadays, there are still some issues and edge cases depending on platform and browser implementation.

To circumvent some of those issues, we had to implement a few different techniques. The first was the use of a “canvas-in-between”: We draw the video frames into a canvas and then use the canvas to create a texture. However, some versions of Microsoft Internet Explorer and Microsoft Edge are not able to draw content to the canvas if the content is delivered from different domains (as happens with a content delivery network, or CDN), even if you have the proper cross-origin resource sharing (CORS) headers set. We investigated this issue and found out that we could leverage the use of HTTP Live Streaming through the use of an external library called hls.js to avoid this problem.
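As an illustration of that workaround (the element and manifest URL below are placeholders), hls.js fetches the HLS segments over XHR and feeds them to the video element through Media Source Extensions, so the element's effective source stays same-origin and drawing its frames to a canvas no longer taints it:

import Hls from 'hls.js';

const video = document.getElementById('video-360');                // placeholder element
if (Hls.isSupported()) {
  const hls = new Hls();
  hls.loadSource('https://cdn.example.com/video/master.m3u8');     // placeholder manifest URL
  hls.attachMedia(video);
  hls.on(Hls.Events.MANIFEST_PARSED, () => video.play());
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  video.src = 'https://cdn.example.com/video/master.m3u8';         // native HLS playback (e.g. Safari)
}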

Safari also has the same limitation regarding CORS. It seems to have been an issue in the underlying media framework for years and for this scenario, the hls.js workaround doesn’t solve the problem. We tackled this issue with the combination of two techniques:

  • The creation of an iframe with the video player embedded in it.
  • The use of progressive download renditions such as MP4 or WebM on the embedded player.

By doing this, we avoid the CORS video-texture bug, since the content and the iframe live on the same domain as the CDN, while the player is shown on the parent domain and the content plays inside the iframe.

Many of our users see our videos from within social media apps on their phones. On iOS, almost all of these social network applications load off-site content in their own in-app browsers instead of using the native browser, which raises a longstanding technical issue: the lack of support for inline video playback, even on iOS 10. This happens because inline playback support is still disabled by default in web views.

The technical problems listed above aside, the basic theory on how we should view 360 content is pretty straightforward. There are basically four steps to implement a simple 360 content view solution:

  1. Have an equirectangular panoramic image or video to be used as a source.
  2. Create a sphere and apply the equirectangular video or image as its texture.
  3. Create a camera and place it on the center of the sphere.
  4. Bind all the user interactions and device motion to control the camera.

These four steps could be implemented using the WebGL API alone, but 3D libraries like three.js provide easier-to-use renderers for canvas, SVG, CSS3D and WebGL. The example below shows how one could implement the four steps described above to render 360 videos or images:

CodePen Embed – sphere 360
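For reference, here is a minimal three.js sketch of those four steps; it handles mouse dragging only, leaves out device motion, and uses a placeholder video file:

import * as THREE from 'three';

// 1. An equirectangular video as the source (placeholder asset).
const video = document.createElement('video');
video.src = 'example-equirectangular.mp4';
video.crossOrigin = 'anonymous';
video.loop = true;
video.muted = true;
video.play();

// 2. A sphere with the video as its texture, viewed from the inside.
const texture = new THREE.VideoTexture(video);
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1); // flip the sphere so the texture faces inward
const sphere = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));

const scene = new THREE.Scene();
scene.add(sphere);

// 3. A camera placed at the center of the sphere.
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 1, 1100);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// 4. Bind user interaction to the camera (drag to look around).
let lon = 0, lat = 0, dragging = null;
renderer.domElement.addEventListener('mousedown', e => {
  dragging = { x: e.clientX, y: e.clientY, lon, lat };
});
window.addEventListener('mousemove', e => {
  if (!dragging) return;
  lon = dragging.lon + (dragging.x - e.clientX) * 0.1;
  lat = dragging.lat + (e.clientY - dragging.y) * 0.1;
});
window.addEventListener('mouseup', () => { dragging = null; });

(function animate() {
  requestAnimationFrame(animate);
  lat = Math.max(-85, Math.min(85, lat));
  const phi = (90 - lat) * Math.PI / 180;
  const theta = lon * Math.PI / 180;
  camera.lookAt(new THREE.Vector3(
    Math.sin(phi) * Math.cos(theta),
    Math.cos(phi),
    Math.sin(phi) * Math.sin(theta)
  ));
  renderer.render(scene, camera);
})();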

When we first started to work on supporting 360 video playback on VHS, we researched a few projects and decided to use a JavaScript library called Kaleidoscope. Kaleidoscope supports equirectangular videos and images in all versions of modern browsers. The library is lightweight at 60kb gzipped, simple to use and easy to embed into the player when compared with other solutions.

The 360 video mobile native app experience on iOS and Android

Solving 360 video playback on iOS and Android was interesting and challenging since there wasn’t a video library that satisfied our requirements on both platforms. As a result, we decided to go with a different approach for each platform.

For the iOS core app, we created a small Objective-C framework that uses the same approach as Kaleidoscope. Initially we considered starting development with Metal or OpenGL, but those are lower-level frameworks that require significant development work to create scenes and manipulate 3D objects.

Luckily, there’s another option: SceneKit is a higher-level framework that allows manipulation and rendering of 3D assets in native iOS apps. Investigation revealed that SceneKit provided adequate playback performance, so we chose to use it to render the sphere and camera required for 360-degree video playback.

We also needed to extract video frame buffers into a 2D texture to be applied as a material for the sphere, and to do that we decided to use SpriteKit. SpriteKit is a powerful 2D graphics framework commonly used in 2D iOS games. Our playback framework uses a standard iOS AVPlayer instance for video playback and uses SpriteKit to render its video onto the sphere.

Finally, we bind user interactions and device movements to control the camera’s motion using standard iOS gesture recognizers and device motion APIs.

By using these tools we were able to create a 360 video framework that is very similar to Kaleidoscope. We call it NYT360-Video, and we are happy to announce that we are open sourcing the framework.

On the Android platform we did a deep evaluation of some open source libraries that support 360 video and images, and after an initial prototyping, the Android team decided to use the Google VR SDK. The NYTimes Android app works on various devices and Android OS versions, and Google VR SDK has the features and capabilities that we needed and a straightforward API that allowed a relatively easy integration.

The Google VR SDK has evolved quite a lot since the day we started to work on the integration, and the Google VR team has invested a lot of time improving the project. Along the way, we worked together with Google on feature requests and bug fixes, and that collaboration gave us confidence that we had made the right decision in adopting it. The integration worked as we expected, and now we have an immersive 360 video experience on Android.

The future of 360 video encoding and playback at The New York Times

We are investigating new ways of encoding and playing 360 videos to increase performance and improve the user experience. We are excited to explore other interesting features such as spatial audio, stereo images and video.

On the video transcoding side, we are exploring the use of cube map projections, which avoid the use of equirectangular layouts for a more space efficient approach. In theory, we can reduce the bitrate applied to the video by approximately 20 percent while keeping the same quality.

Below is a very basic example of how we could support playback of 360 videos encoded with a cube map:

CodePen Embed – cubemap 360

The use of cube map projections is a more complex approach than using equirectangular projections since it would not only require changing our video player but also the way we transcode our videos. Earlier this year Facebook released a project called Transform, an FFmpeg filter that converts a 360 video in equirectangular projection into a cube map projection. We are investigating ways to integrate this into our video pipeline. We are also open sourcing the video encoding presets that we use to transcode all of our 360 video outputs.

We hope to receive your crucial feedback and generate contributions from the open source community at large. Feel free to ask questions via GitHub Issues in each project.

Check them out:
github.com/NYTimes/ios-360-videos
github.com/NYTimes/video-presets



Using Microservices to Encode and Publish Videos at The New York Times

By FLAVIO RIBEIRO, FRANCISCO SOUZA, MAXWELL DA SILVA and THOMPSON MARZAGÃO

For the past 10 years, the video publishing lifecycle at The New York Times has relied on vendors and in-house hardware solutions. With our growing investment in video journalism over the past couple of years, we’ve found ourselves producing more video content every month, along with supporting new initiatives such as 360-degree video and Virtual Reality. This growth has created the need to migrate to a video publishing platform that could adapt to, and keep up with, the fast pace that our newsroom demands and the continued evolution of our production process. Along with this, we needed a system that could continuously scale in both capacity and features while not compromising on either quality or reliability.

A solution

At the beginning of this year, we created a group inside our video engineering team to implement a new solution for the ingestion, encoding, publishing and syndication of our growing library of video content. The main goal of the team was to implement a job processing pipeline that was vendor-agnostic and cloud-based, along with being highly efficient, elastic, and, of course, reliable. Another goal was to make the system as easy to use as possible, removing any hurdles that might get in the way of our video producers publishing their work and distributing it to our platforms and third-party partners. To do that, we decided to leverage the power of a microservices architecture combined with the benefits of the Go programming language. We named this team Media Factory.

The setup

The first version of our Media Factory encoding pipeline is being used in production by a select group of beta users at The New York Times, and we are actively working with other teams to fully integrate it within our media publishing system. The minimum viable product consists of these three different parts:

Acquisition: After clipping and editing the videos, our video producers, editors, and partners export a final, high-resolution asset, usually in ProRes 422 format. Our producers then upload the asset to an AWS S3 bucket to get it ready for the transcoding process. We implemented two different upload approaches:

  1. An internal API that supports multipart uploads, called video-acquisition-api, used from server-side clients, like small scripts or jobs.
  2. A JavaScript wrapper that uses EvaporateJS to upload files directly from the browser, which is integrated with our internal Content Management System (CMS), Scoop.

Transcoding: After the acquisition step is complete, we use another microservice called video-transcoding-api to create multiple outputs based on the source file. Currently, we create a single HLS output with six resolutions and bitrates to support adaptive streaming, four different H.264/MP4 outputs, and one VP8/WebM for the benefit of the 1 percent of our users on the Mozilla Firefox browser running on Microsoft Windows XP.

The transcoding service is by far the most crucial part of our workflow. In order to integrate with cloud-based transcoding providers, we decided to design a tiny wrapper containing provider-specific logic. This design gives us great flexibility. We can schedule and trigger jobs based on a set of parameters such as speed, reliability, current availability, or even the price of the encoding operation for a specific provider. For instance, we can transcode news clips (which are time-sensitive) on the fastest, most expensive encoding service, while simultaneously transcoding live action videos, documentaries, and animations (which are not time-sensitive) using lower-cost providers.

Distribution: The transcoding step transfers the final renditions into another AWS S3 bucket. Since we use a content delivery network (CDN) to deliver the video to our end users, we need a final step to transfer the files from S3 to the CDN (leveraging Aspera’s FASP protocol to do so). Once the files are on the CDN, our video journalists are able to publish their content on The New York Times.

Giving back to the community

Today, we are open sourcing the video-transcoding-api and the video encoding presets that we use to generate all of our outputs. We are also open sourcing the encoding-wrapper, which contains a set of Go clients for the services we support and that are used by the video-transcoding-api.

We believe the format we’ve created will be of particular interest to the open source community. By leveraging the abstractions found in the video-transcoding-api, any developer can write the code necessary to send jobs to any transcoding provider we support without having to rewrite the base preset or the job specification. Sending a job to a different provider is as simple as changing a parameter.

We currently support three popular transcoding providers and plan to add support for more. See a sample preset below, in JSON format:

{
  "providers": ["encodingcom", "elementalconductor", "elastictranscoder"],
  "preset": {
    "name": "1080p_hls",
    "description": "1080p HLS",
    "container": "mp4",
    "profile": "Main",
    "profileLevel": "3.1",
    "rateControl": "VBR",
    "video": {
      "height": "1080",
      "width": "",
      "codec": "h264",
      "bitrate": "3700000",
      "gopSize": "90",
      "gopMode": "fixed",
      "interlaceMode": "progressive"
    },
    "audio": {
      "codec": "aac",
      "bitrate": "64000"
    }
  },
  "outputOptions": {
    "extension": "m3u8",
    "label": "1080p"
  }
}

Our philosophy for presets: “Write once, run anywhere”

Our future plans

In order to fulfill our vision of having a fully open sourced video encoding and distribution pipeline, we thought it best to also tackle the issue of actually encoding the video. We’re officially taking on the development and maintenance of the open source project Snickers to serve this purpose. We’ll not only gain the freedom of deploying our own encoding service anywhere, but we’ll also be able to experiment with and implement new features that may not be available from existing service providers, and respond to specific requests from our newsroom. A few examples on the horizon are the automatic generation of thumbnails and accurate audio transcripts.

We’ve also turned our sights to fragmented MP4 (fMP4), and we’ll be investing some time into fully moving to an HLS-first approach for our on-demand videos. In case you missed it, last June at WWDC 2016, Apple introduced fMP4 support to the HLS protocol, which means almost all devices and browsers now support fMP4 playback natively. As a result, we can eliminate the overhead of transmuxing MPEG-TS segments into fMP4 on the fly when playing videos in our video player (we use hls.js to do this) and instead simply concatenate and play fMP4 fragments from our local buffer.

Lastly, content-driven encoding is a trendy topic within the online video community, especially after the release of VMAF. We are planning to adopt this approach by splitting the content-driven encoding project into two phases:

  1. Classify our content into four different categories, each with its own preset. For example, animation videos, like the ones we have for our Modern Love show, require fewer bits than our high-motion videos, like some of our Times Documentaries, to achieve the same content fidelity.
  2. Create and integrate an additional microservice within the Media Factory pipeline for the purpose of checking the quality of our outputs using VMAF and triggering new re-encoding jobs with optimized presets.

Come along with us!

Our Media Factory team (Maxwell Dayvson da Silva, Said Ketchman, Thompson Marzagão, Flavio Ribeiro, Francisco Souza and Veronica Yurovsky) believes that these projects will help address the encoding challenges faced by many of you in the online video industry. We hope to receive your crucial feedback and generate contributions from the open source community at large.

Check them out:

https://github.com/NYTimes/video-transcoding-api
https://github.com/NYTimes/video-presets
https://github.com/NYTimes/encoding-wrapper
https://github.com/snickers/snickers

And feel free to ask questions via GitHub Issues in each of the projects!



Continuous Deployment to Google Cloud Platform with Drone

By TONY LI and JP ROBINSON

Over the course of the last year, the software development teams at The New York Times have been evaluating Google Cloud Platform for use in some of our future projects. To do this, we’ve surveyed a wide variety of software management techniques and tools, and we’ve explored how we might standardize building, testing and deploying our systems on GCP.

Our newly formed Delivery and Site Reliability Engineering team came up with two methods of deployment, using Google Container Engine and Google App Engine as computing environments and the open source version of Drone as a continuous integration tool. As a result of this work, we are open sourcing two plugins for Drone: drone-gke and drone-gae.

Container Engine is Google’s managed Kubernetes container orchestration platform. Kubernetes is an open source project that provides a declarative approach to managing containerized applications, enabling automatic scaling and healing properties. It encourages a common standard of how our applications are designed, deployed, and maintained across many independent teams. And because Kubernetes pools compute resources, developers can run many isolated applications in the same cluster, maximizing its resource usage density.

App Engine is a mature serverless platform Google has offered since 2008. It is capable of quickly scaling up and down as traffic changes, which is ideal for many scenarios at The New York Times when you consider possible sudden spikes from breaking news alerts or the publication of the new daily crossword every weekday at 10 p.m.

Drone is an open source continuous integration and delivery platform based on container technology, encapsulating the environment and functionalities for each step of a build pipeline inside an ephemeral container. Its flexible yet standardized nature enables our teams to unify on a plugin-extensible, ready-to-use CI/CD pipeline that supports any custom build environment with isolation, all with a declarative configuration similar to commercial CI/CD services. The result is the ability for developers to confidently ship their features and bug fixes into production within minutes, versus daily or weekly scheduled deployments. As a containerized Go application, it is easy to run and manage, and we hope to contribute to the core open source project.

Google provides an excellent set of command line utilities that allow developers to easily interact with Google Container Engine and Google App Engine, but we needed a way to encapsulate those tools inside of Drone to simplify the workflow for developers. Luckily, plugins for Drone are simple to create as they can be written in Go and are easily encapsulated and shared in the form of a Docker container. With that in mind, the task of creating a couple reusable plugins was not that daunting.

drone-gke is our new plugin that wraps the gcloud and kubectl commands and allows users to orchestrate deployments to Google Container Engine and apply changes to existing clusters. The Kubernetes YAML configuration file can be templated before being applied to the Kubernetes master, allowing integration with Drone’s ability to encrypt secrets and to inject build-specific variables.

Here is an example Drone configuration to launch a Go application in a container via a Kubernetes Deployment resource into Google Container Engine:

# test and build our binary
build:
  image: golang:1.7
  environment:
    - GOPATH=/drone
  commands:
    - go get -t
    - go test -v -cover
    - CGO_ENABLED=0 go build -v -a
  when:
    event:
      - push
      - pull_request

# build and push our container to GCR
publish:
  gcr:
    storage_driver: overlay
    repo: my-gke-project/my-app
    tag: "$$COMMIT"
    token: >
      $$GOOGLE_CREDENTIALS
    when:
      event: push
      branch: master

# create and apply the Kubernetes configuration to GKE
deploy:
  gke:
    image: nytimes/drone-gke
    zone: us-central1-a
    cluster: my-k8s-cluster
    namespace: $$BRANCH
    token: >
      $$GOOGLE_CREDENTIALS
    vars:
      image: gcr.io/my-gke-project/my-app:$$COMMIT
      app: my-app
      env: dev
    secrets:
      api_token: $$API_TOKEN
    when:
      event: push
      branch: master

And the corresponding Kubernetes configuration:

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: {{.app}}-{{.env}}
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: {{.app}}
        env: {{.env}}
    spec:
      containers:
        - name: app
          image: {{.image}}
          ports:
            - containerPort: 8000
          env:
            - name: APP_NAME
              value: {{.app}}
            - name: API_TOKEN
              valueFrom:
                secretKeyRef:
                  name: secrets
                  key: api-token
---
kind: Service
apiVersion: v1
metadata:
  name: {{.app}}-{{.env}}
spec:
  type: LoadBalancer
  selector:
    app: {{.app}}
    env: {{.env}}
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP

And the corresponding Kubernetes secrets configuration:

kind: Secret
apiVersion: v1
metadata:
  name: secrets
type: Opaque
data:
  api-token: {{.api_token}}

drone-gae is our new plugin that wraps the gcloud and appcfg commands and allows users to deploy to the Google App Engine standard environment with Go, PHP or Python, or to the flexible environment with any language.

Here’s a very basic example of all the configuration required to launch a new version of a Go service to Google App Engine’s standard environment, with a second step to migrate traffic to that version:

deploy:
  # deploy new version to GAE
  gae:
    image: nytimes/drone-gae
    environment:
      - GOPATH=/drone
    action: update
    project: my-gae-project
    version: "$$COMMIT"
    token: >
      $$GOOGLE_CREDENTIALS
    when:
      event: push
      branch: master

  # set new version to 'default', which migrates 100% traffic
  gae:
    image: nytimes/drone-gae
    action: set_default_version
    project: my-gae-project
    version: "$$COMMIT"
    token: >
      $$GOOGLE_CREDENTIALS
    when:
      event: push
      branch: master

Deploying new versions to the flexible environment requires a little more work, but it’s straightforward when using the plugin. We first use a build step to test and compile the code, then a publish step to build and publish a Docker container to Google Container Registry (via the drone-gcr plugin) and finally, we kick off the deployment via our new plugin.

# test and build our binary
build:
  image: your-dev/golang:1.7
  environment:
    - GOPATH=/drone
  commands:
    - go test -v -race ./...
    - go build -o api .
  when:
    event:
      - push
      - pull_request

# build and push our container to GCR
publish:
  gcr:
    storage_driver: overlay
    repo: my-gae-project/api
    tag: "$$COMMIT"
    token: >
      $$GOOGLE_CREDENTIALS
    when:
      branch: [develop, master]
      event: push

deploy:
  # deploy a new version using the docker image we just published and stop any previous versions when complete.
  gae:
    image: nytimes/drone-gae
    action: deploy
    project: my-gae-project
    flex_image: gcr.io/my-gae-project/api:$$COMMIT
    version: "$${COMMIT:0:10}"
    addl_flags:
      - --stop-previous-version
    token: >
      $$GOOGLE_CREDENTIALS
    when:
      event: push
      branch: develop

We hope open sourcing these tools helps other engineers who want to leverage Drone as a continuous delivery solution. We are also asking the community to take a look and help us harden our systems. Please raise an issue if you find any problems and follow the contributing guidelines if you make a pull request. For further reading and more documentation, you can visit the code repositories on GitHub:
github.com/NYTimes/drone-gae
github.com/NYTimes/drone-gke



Quick and Statistically Useful Validation of Page Performance Tweaks

By JUSTIN HEIDEMAN

Improving page performance has been shown to be an important way to keep readers’ attention and improve advertising revenue. Pages on our desktop site can be complex, and we’re always looking for ways to improve their performance. Since 2014, when our desktop site was last rebuilt, there have been big changes in client-side frameworks, with great improvements to performance. Adopting some of those improvements will take time, so we wondered whether there were smaller changes we could implement in the shorter term to make www.nytimes.com more performant.

A quirky problem we ran into was how to effectively measure modest performance changes when a page has many assets of variable speed and complexity that can impact its performance. We used the magic of statistics to compensate for the variability and allow us to get usable, comparable measurements of a page’s speed.

In order to make our site faster, we have to figure out what is slow first. We do fairly well with much of the attainable low-hanging page performance fruit: compression, caching, time to first byte, combining assets, using a CDN. Our real bottleneck is the amount and complexity of the JavaScript on our pages.

If you look at a timeline of a typical article page in Chrome’s Developer Tools, you’ll see that there is an uncomfortably long gap between the DOMContentLoaded event and the Load event. Screenshots show that the page’s visual completion roughly correlates with the Load event. The flame chart shows a few scripts that take a fair amount of time, but there isn’t any one easily fixable bottleneck that could be removed to make our site faster. Slow pages are death by a thousand protracted cuts. Some of those cuts are our own doing and some of them stem from third-party assets. The realities of the publishing and advertising world demand that we include a number of analytics and third-party libraries, each of which imposes a performance cost on our site.

In order to start weighing the impacts and tradeoffs of the logic and libraries we have on our page, we wanted to get real timing numbers to attach to potential optimizations. For instance, in one experiment we investigated, we wanted to be able to know how much time is consumed rendering the ribbon of stories from the top of the story template and how much faster the story template could render if the ribbon were to be removed.

One way to do this would be to use the User Timing API and measure the time it takes from when the ribbon initializes to when its last method completes. This works for when we have things we control and can easily modify the code for. It’s not as easy when we want to weigh the impacts of a third-party library because we can’t attach timing calls to code we don’t control. There is another problem with this approach: instrumenting one module provides an incomplete picture. It doesn’t show the holistic down-the-timeline impacts that an optimization may have, or account for the time it takes a script to download and parse before it executes.
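For reference, a sketch of that instrumented approach with the User Timing API; the mark names and the initializeRibbon call are illustrative stand-ins for the ribbon module:

// Bracket the code we control with marks, then measure the span between them.
performance.mark('ribbon-start');
initializeRibbon();                                   // illustrative stand-in for the ribbon module
performance.mark('ribbon-end');
performance.measure('ribbon', 'ribbon-start', 'ribbon-end');

const [measure] = performance.getEntriesByName('ribbon');
console.log(`ribbon took ${measure.duration.toFixed(1)}ms`);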

An even more fundamental problem is that any type of performance measurement on a page will give different timing values each time a page is reloaded. This is due to fluctuations in network performance, server load, and tasks a computer is doing in the background, among other factors. Isolating a page’s assets might be one way to solve it, but that is impractical and will give us an inaccurate picture of real-world performance. To correct for these real-world fluctuations and attempt to get usable, comparable numbers, we ran our timing tests multiple times, collected the numbers and plotted them to make sure we had a good distribution of results.

The values that the graphs show aren’t specifically important, but the shape of the curve is; you want to see a clear peak and drop-off, indicating that you have enough sample points and that they are distributed in a logical manner. We found the median of the collected timing values to be the most useful comparison number for our tests. The median is typically most useful when a dataset has a skewed central tendency, like ours, and it is less susceptible to outlying data points.

Gathering enough numbers by hand (e.g., reloading Chrome, writing down numbers) would be tedious, though effective. We use some open-source browser automation tools for functional testing of our sites, but they require careful setup, and retrieving page performance numbers out of them is not straightforward. Instead, we found and used nightmare, an automatable browser based on Electron and Chrome, and nightmare-har, a plugin that gives access to the HTTP Archive (HAR) for a page, which is full of useful performance information.

Here’s what a simple script looks like to get the load event timing for a page:
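A minimal sketch of such a script, reading the Navigation Timing API directly rather than the HAR (the URL is a placeholder):

const Nightmare = require('nightmare');

// Load a page headlessly and report how long it took to reach the load event.
function timePage(url) {
  return Nightmare({ show: false })
    .goto(url)
    .wait(() => window.performance.timing.loadEventEnd > 0)     // wait until the load event has fired
    .evaluate(() => {
      const t = window.performance.timing;
      return t.loadEventEnd - t.navigationStart;                // total page load time in ms
    })
    .end();
}

timePage('https://www.nytimes.com/')                            // placeholder URL
  .then(ms => console.log(ms));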

To be useful, we need multiple timing numbers, so we loop the test. Unfortunately, with the HAR plugin, nightmare acts erratically when looped, so our solution is to wrap the Node script with a simple loop in a shell script, like so:

You might notice in the above script, we’re actually testing two URLs. This is due to another intricacy of the oscillations of page performance. We found that if we ran two tests sequentially, e.g., 40 runs of control, 40 runs of our test, our numbers were perplexing and sometimes did not match what we expected from our optimizations. We found that even an hour separation in time could produce variations in timing that would obfuscate the performance changes we were attempting to see. The solution is to interleave tests of the control page and the test page, so both are exposed to the same fluctuations. By doing this, we were able to see performance deltas that were consistent across network variations.

Put it all together, let it run (go get a cup of coffee), and you’ll get two columns of numbers, easily pastable into the spreadsheet of your choice. You can then use them to make two nice histograms of your results, like so:

These are still not real, in-the-wild numbers, but they are easier to attain than setting up a live test on a site, which we’re planning to do in the near future. Quick testing like this gives us the confidence to know that we’re on the right track before we invest time in making more fine-grained optimizations.


