Improving Our Video Experience

Part One: Our On-Demand Video Platform

This is the first post in a series about the progress and achievements of our video delivery platform. We’ll start by detailing what has changed since the launch of the microservices we built for encoding and publishing our on-demand videos. If you want to know more about this project, please read this post.

Since the release of our new publishing pipeline, we’ve encoded and published a total of 133,452 videos across H264/MP4, VP8/WebM, and HLS (H264/MPEG-TS) formats. We’ve also received and accepted some external pull requests on the open source components of the pipeline, and three more encoding providers were added. The rollout was considered a huge success for both technology and the newsroom, as encoding speed and overall system stability have improved dramatically.

Until the middle of this year, all of our video assets were hosted and served by a content delivery network (CDN). As part of the company’s technology strategy, we recently decided to move all product deployments to Google Cloud Platform, including how we run, host, and serve our video content. To achieve this, we started a project to migrate our library and add Google Cloud Platform support to our publishing pipeline.

On-The-Fly Generation of Adaptive Formats

Before starting the migration process, we discussed whether there might be a better way to migrate and serve our content. Inspired by Ellation, we decided it would be smart to use the open source nginx module created by Kaltura, which can generate adaptive bitrate formats on the fly from H264-encoded files.

Deploying this module would not only give us ownership and control of the origin, but also save storage costs by serving the adaptive formats from our existing H264 files. The module would also give us MPEG-DASH and Microsoft Smooth Streaming support for free. In addition, with this setup we could benefit from the progress the strong open source community is making on the module, like support for serving fragmented MP4s, which is one of the items in our backlog.

Project Execution

Before starting the asset transfer, we wanted to make sure this new approach wouldn’t hurt the user experience; in fact, we hoped to improve QoS/QoE with the migration. To measure our current QoS/QoE baseline, we first integrated our web video player with Mux. Once the player integration was complete, we had a much better picture of our playback experience across devices and browsers.
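For web playback we rely on hls.js, and mux-embed has a documented integration path for it. The sketch below shows the general shape of that wiring; the environment key, player name, and URLs are placeholders rather than our real configuration.

```typescript
// A minimal sketch of wiring Mux Data into an hls.js-based web player.
// Placeholder values throughout; the real integration follows the
// mux-embed documentation for hls.js.
import Hls from "hls.js";
import mux from "mux-embed";

const video = document.querySelector("video") as HTMLVideoElement;

const hls = new Hls();
hls.loadSource("https://video.example.com/hls/some-slug/master.m3u8"); // hypothetical URL
hls.attachMedia(video);

mux.monitor(video, {
  hlsjs: hls, // lets Mux collect hls.js-level QoS events (rebuffers, bitrate switches, errors)
  Hls,        // the hls.js constructor, used by Mux for version detection
  data: {
    env_key: "MUX_ENV_KEY",    // placeholder
    player_name: "web-player", // placeholder
    video_id: "some-slug",     // placeholder
  },
});
```

With those events flowing, Mux aggregates them into the viewer experience scores we used as our baseline.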

The second step was to migrate our static assets. The on-the-fly packager works by dynamically chopping the existing video files into small chunks and generating the manifest files that guide players to download and play them in an adaptive streaming fashion. With that in mind, we transferred all of our MP4 files to a Google Cloud Storage (GCS) bucket to be used by the module. We also transferred a VP8/WebM rendition for fallback purposes on old browsers that can’t play back H264 content.
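On the player side, the selection logic is conceptually simple. Here is a simplified sketch, with hypothetical asset URLs, of how a player can pick between the on-the-fly HLS endpoint and the progressive MP4/WebM files:

```typescript
// Simplified source selection: MSE-capable browsers get HLS via hls.js,
// Safari plays HLS natively, and older browsers fall back to progressive
// MP4 or, when H264 isn't supported, WebM. URLs are hypothetical.
import Hls from "hls.js";

function attachSource(video: HTMLVideoElement, slug: string): void {
  const hlsUrl = `https://video.example.com/hls/${slug}/master.m3u8`; // packaged on the fly
  const mp4Url = `https://video.example.com/files/${slug}.mp4`;       // served straight from GCS
  const webmUrl = `https://video.example.com/files/${slug}.webm`;     // fallback rendition

  if (Hls.isSupported()) {
    const hls = new Hls();
    hls.loadSource(hlsUrl);
    hls.attachMedia(video);
  } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
    video.src = hlsUrl; // native HLS (Safari, iOS)
  } else if (video.canPlayType('video/mp4; codecs="avc1.42E01E"')) {
    video.src = mp4Url;
  } else {
    video.src = webmUrl; // old browsers that can't play back H264
  }
}
```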

With all the assets in place, we set up four nginx servers running the on-the-fly packaging module in a Kubernetes cluster on Google Container Engine (GKE) to serve as origin servers. Next, we set up a Fastly layer to cache the segments and playlists generated on the fly by the origin. We also created another location on the same nginx servers that routes requests directly to the GCS bucket, allowing players to play the MP4 and WebM files too.
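To make that layout concrete, here is a deliberately simplified sketch of what such an origin configuration can look like with Kaltura’s vod module in mapped mode. Hostnames, the bucket name, paths, and values are illustrative, not our production config.

```nginx
# Illustrative origin config: one location packages HLS on the fly,
# another proxies progressive files straight to the GCS bucket.
server {
  listen 80;

  # On-the-fly HLS packaging from the MP4 renditions stored in GCS.
  location /hls/ {
    vod hls;
    vod_mode mapped;                    # ask a mapping service where the MP4s live
    vod_upstream_location /gcs-helper;  # internal location defined below
    vod_segment_duration 5000;          # ms; one of the knobs we later tuned
  }

  # Mapping service that translates a CMS slug into MP4 paths.
  location /gcs-helper {
    internal;
    proxy_pass http://gcs-helper.default.svc.cluster.local; # hypothetical service address
  }

  # Progressive MP4/WebM, proxied directly to the GCS bucket.
  location /files/ {
    proxy_pass https://storage.googleapis.com/our-video-bucket/; # hypothetical bucket
  }
}
```

Fastly sits in front of these origins, so most segment and playlist requests never reach nginx at all.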

For the nginx module to find the files in the GCS bucket, we developed a component called gcs-helper. Given a video slug from our CMS, gcs-helper finds all the available H264/MP4 renditions, which the on-the-fly packager then exposes as the levels of the adaptive formats.
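gcs-helper is its own open source project (see the links at the end of this post); purely to illustrate the idea, here is a minimal sketch of a mapping service in the JSON shape the vod module’s mapped mode expects. The bucket name, path layout, and the use of TypeScript are assumptions for the example, not the real implementation.

```typescript
// Sketch of a gcs-helper-style mapping service: given a slug, list the
// MP4 renditions under that prefix in GCS and return nginx-vod-module
// "mapped" mode JSON, one sequence per rendition (one ABR level each).
import http from "node:http";
import { Storage } from "@google-cloud/storage";

const storage = new Storage();
const BUCKET = "our-video-bucket"; // hypothetical bucket name

async function mappingFor(slug: string): Promise<string> {
  const [files] = await storage.bucket(BUCKET).getFiles({ prefix: slug });
  const mp4s = files.map((f) => f.name).filter((n) => n.endsWith(".mp4"));
  return JSON.stringify({
    sequences: mp4s.map((path) => ({
      clips: [{ type: "source", path: `/${path}` }],
    })),
  });
}

// No error handling here; a real service needs it.
http
  .createServer(async (req, res) => {
    const slug = (req.url ?? "/").slice(1);
    res.setHeader("Content-Type", "application/json");
    res.end(await mappingFor(slug));
  })
  .listen(8080);
```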

[Diagram] Before and after: less storage, more formats delivered.

We deployed the new approach to 10% of users and set up an A/B experiment in Mux. After some tweaks to the segment size, caching timeouts, and hls.js parameters, we achieved a higher playback experience score for the users watching through the on-the-fly packager. This was especially true during traffic peaks, when more edge servers have the segments cached and users can download them faster. We gradually increased the share of users consuming videos from the new endpoints, watching the servers’ load and cache hit ratio, until we finally launched to 100% of users.
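For reference, the hls.js tweaks were of this kind; the values below are illustrative, not the ones we shipped, and the cohorts were compared in Mux rather than in the player itself.

```typescript
// Illustrative hls.js tuning knobs (values are examples only).
import Hls from "hls.js";

const hls = new Hls({
  maxBufferLength: 30,        // seconds of forward buffer to maintain
  maxMaxBufferLength: 120,    // hard cap on the forward buffer
  startLevel: -1,             // let the ABR logic pick the first level
  capLevelToPlayerSize: true, // never fetch levels larger than the video element
});
```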

As a final step, we added support for sending newly encoded videos to GCS buckets in the distribution component of our pipeline. We also removed the generation of HLS levels during our transcoding process, speeding it up and helping our journalists publish news clips and time-sensitive content faster.
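The distribution step itself boils down to an object upload. As a minimal sketch (our pipeline components are separate open source projects; the bucket and path layout here are placeholders):

```typescript
// Minimal sketch of pushing a freshly encoded rendition to GCS.
import { Storage } from "@google-cloud/storage";

const storage = new Storage();

async function distribute(localPath: string, slug: string): Promise<void> {
  const filename = localPath.split("/").pop() ?? localPath;
  await storage.bucket("our-video-bucket").upload(localPath, {
    destination: `${slug}/${filename}`, // hypothetical layout, e.g. "<slug>/<slug>_1080p.mp4"
  });
}
```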

Future

As mentioned above, the nginx module is now able to generate fragmented MP4 files for HLS. Since we use hls.js for all client playback except on Apple devices/browsers, we want to test it and compare the performance of fragmented MP4 versus MPEG-TS segments. Avoiding transmuxing in the browser should help startup time a bit and save some battery on mobile devices. We also want to try a faster start by using smaller segments at the beginning of the playlists. Both tests will be run as A/B experiments.

When it comes to new features, during our last Maker Week we played with some open source projects for thumbnail generation. We will revisit that project this year, with the goal of making it ready for real users. We also plan to work on a metadata service that generates extra information about each video, to help with searchability and to feed our recommendation systems.

If you want to know more about the components we are using for our on-demand publishing and on-the-fly packaging, gcs-helper and the Docker image are available as open source software. The encoding profiles are available here.

Coming Up Next

In the next post in our Improving Our Video Experience series, we will walk through the problems we had with our live streaming infrastructure and explain how we solved them. We will also describe how we made it possible for journalists and publishers to create and manage live streaming events without the need for technical support.

