Last week a new research paper hit the internet from the Engaging News Project on one of my favorite topics: the website redesign.
I pay particular attention to website redesigns because of what they tell us about the latest trends in how users experience news online and how we want users to experience the news. Layout decisions are informed by the biggest forces pressing on the news industry, including shifts in search and social referral traffic, the inexorable rise of mobile devices, and the emergence of digital subscriptions as a key indicator of the financial health of the news business.
The formal objective of the Engaging News Project study was to test whether online testing could be of value to news organizations looking for audience feedback on a redesign. The study design was clever and fairly ambitious. Working with two separate news outlets in the process of actually launching site redesigns, one in the U.S. and one in Canada, the researchers collected before-and-after data using an online survey system as well as real-world data from each news outlet's web analytics software. What they found was that the online experiment's results were similar enough to the live, real-world data to be useful in decision-making. That's a significant finding in and of itself for news outlets planning a site redesign and wanting to get audience feedback.
Redesigning a news website’s homepage could improve page views, time on page https://t.co/ZOkKM12PuI
— EngagingNewsProject (@EngagingNews) August 7, 2017
I want to focus on a few details of the research that suggest something important about where news metrics, and homepage metrics in particular, stand at this point in 2017. For more on the general study results, visit the ENP site or read this write-up at Nieman Lab. I emailed a few of my questions to Emily Van Duyn, research associate for the Engaging News Project.
Did the choice of web metrics—page views, bounce rate, average time per visit, scroll depth—come from the news outlets or ENP? Why those homepage metrics vs others?
Emily Van Duyn: These are standard metrics that the newsrooms were already gathering and fit with the goal of the project, which was to figure out how A/B testing can be used to improve site design for users. We found these useful in that they tell us something about the nature and the extent of user engagement with the site. For instance, from these metrics we can tell the scope of the interaction at the site level (page views), and at the homepage level (scroll depth, bounce rate, average time per visit).
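The metrics Van Duyn lists are exactly what a live A/B test compares across design variants. As a rough illustration of that comparison, here is a minimal sketch of a two-proportion z-test on bounce rate between an old and a new homepage. The function name and the traffic numbers are hypothetical, not taken from the study:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-statistic, e.g. bounces out of homepage
    sessions for two design variants."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical numbers: 420 bounces in 1,000 sessions on the old
# design vs. 360 bounces in 1,000 sessions on the redesign.
z = two_proportion_z(420, 1000, 360, 1000)
print(round(z, 2))  # → 2.75; |z| > 1.96 is significant at p < .05
```

Survey-based experiments like ENP's add measures (such as article recall, below) that this kind of dashboard arithmetic can't capture.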
I’m excited about article recall as a key metric, since that’s not something publishers can get from an analytics dashboard with A/B testing. Can you talk about your decision to include that here?
Van Duyn: We’re excited about this metric, too! It was an easy choice for us to include since we’re also interested in the larger, democratic effects of site design on the public. Of course, it’s challenging to match up qualitative responses with quantitative measures of recall, but certainly worth the effort. If site design makes a difference to what people can remember from the news, as our findings suggest, then that gives newsrooms another way to make their reporting more effective. It was also important to measure recall as a growing portion of news is consumed exclusively online. The topic of recall in online news is still relatively underexplored. Including measures of recall gave us information on how to optimize site design so that it benefits the public in addition to the news organization itself.
A big-picture question from the perspective of engaging news: What’s the goal of a good homepage in 2017?
Van Duyn: From a normative and a commercial standpoint, a good homepage promotes interaction and information. Our past studies on site design suggest that a homepage to that end is sensitive to the user. We’ve found that the amount of scrolling required and a contemporary homepage design (see our previous report here) matters to how much the user interacts with the site. We’ve also found that positioning can influence article recall. By positioning, we mean which articles go in which columns and how much competition for attention there is between articles. For example, for the Canadian redesign, articles in the left-hand column were recalled more than others. These aren’t exhaustive design features that influence interaction and information, but they’re a starting point.
As researchers, what did you learn about methodology for testing redesigns while doing this research? Happy surprises or unexpected limitations?
Van Duyn: As you might imagine, coordinating the launch of two concurrent studies with two major news organizations is challenging. Our partnering organizations were committed to maintaining the integrity of our data, and we were able to make some meaningful comparisons between our experiment and the live A/B tests on the news organizations’ sites.
This research process also presented the unique challenge of comparing those who visited the site of their own volition (on the live webpage) and those who were prompted to visit through our online experiment. It is difficult to address whether these two populations are different enough to influence results. For our online experiment, we included a measure of whether the respondent had previously used the news site, to gauge any familiarity bias. Previous use did seem to influence some site ratings between the old and new sites. Future variations of this project should further explore any differences between people who access a site on their own and respondents who access it through an online experiment.
Our findings are also promising for news organizations. The kind of testing we did in this study, an experiment through a survey platform, could serve as a less expensive but comparable way to test site redesign before embarking on a full launch. This means news organizations could use this method to more easily optimize their sites to the preference and benefit of their users, which is good for the bottom line as well.
Finally, I was curious about not seeing anything about ad recall or intent to purchase/subscribe here. Is that a future research question?
Van Duyn: While we didn’t cover that in this study, we do have a project in the works addressing subscriptions/membership—stay tuned!
Jason Alcorn (@jasonalcorn) is the Metrics Editor for MediaShift. In addition to his work with MediaShift, he works as a consultant with non-profits and newsrooms.