Designing a Chatbot

I have been extremely lucky to get the chance to design a chatbot for one of our clients, and the learning along the way has been massive. Most of the notions I thought were true were dispelled by research, and a whole new world of possibilities opened up. Below I have shared some of my crucial learnings from designing a bot; I hope you like it.

My First 6-Day Design Workshop

So yeah, I am back! This is going to be my last Medium post for this year. Today, I’m gonna share my experience of the winter school I attended a few days back.


Our workshop ran from 4th–9th December 2017 and was essentially about User Experience Design. Interaction design is divided into four categories:

  1. HCI (User Centric Design)
  2. Activity Centred Design
  3. System Design
  4. Genius Design

In this workshop we focussed on the user-centred design approach. User-centred design (UCD) is a design process that focuses on user needs and requirements. The consistent application of human factors, ergonomics, usability engineering, and other techniques is what keeps UCD revolving around the users. The aim is to produce highly usable and accessible systems, achieving user satisfaction while averting negative effects on health, safety, and performance.

6 common cognitive biases UXers should know

When conducting user research and testing, are you aware that cognitive biases can affect both ourselves and the users? These biases threaten the validity of the research, making its insights less applicable.

A cognitive bias is a systematic, illogical thinking pattern that affects judgments and decisions. Biases let us make decisions more quickly and easily, but they can also keep us from making accurate judgments.

In this article, I have listed six common biases you may encounter during user research, along with approaches you can take to avoid them.

How to Stop UX Research being a Blocker

Fitting research into agile teams

At its best, UX research surfaces insights and enables progress; at its worst, it just gets in the way. There are many objections to conducting UX research, but the most common I hear is ‘We don’t have enough time.’ I can certainly sympathise with the inclination to ‘just get it done’. With the move away from overly managed waterfall projects to cross-functional agile teams, it can be hard to find where in the process research best fits.

7 Things I Learned from Running My First GV Design Sprint

Most people have participated in a sprint. Some have heard of a Design Sprint. At Kulina, we ran our first Design Sprint about two weeks ago. It was a new concept to our team members, and pretty much to the whole startup community here in Yogyakarta (known as Jogja), Indonesia.

This exercise has been an excellent tool for changing the way our team members think about approaching problems, designing solutions and developing products. We were lucky to have our CEO, designer, marketing lead, developer and customer support lead attend. I highly encourage everyone to give it a try.

Here are the seven things I learned from running our first design sprint:

1. Be prepared for the sprint preparation

Our co-founders, Andy Fajar Handika and Andy Hidayat (yes, both of them are named Andy), and I decided the Thursday before our sprint that it would be best for me to move to Jogja, since most of our developers and marketers are based there. And my first task? Running a design sprint. So I bought the book Sprint and hopped on the plane, which meant I had to finish the whole book over the weekend.

We also needed to prepare writing materials, block out the room and clear the team’s schedule before the meeting. If you have an amazing HR team like we do, they can be a great help!

In addition, every day after running the whole-day meeting, I came back home, wrote a summary of the day, reviewed the next day’s section in the book, and jotted down notes. The next morning I arrived early and prepared the whiteboard and other materials for the meeting. But trust me, it is all worth it!

Note to facilitator: everyone else’s sprint might be done at 5pm, but yours isn’t. Be prepared for the week!

2. It also requires you to think on your feet

The Sprint book is an amazing resource, no doubt. However, there are a lot of things you will have to improvise and adjust on the spot. Small details matter: some of the supplies suggested in the book couldn’t be found in Jogja, so what should we use instead?

Things we scheduled for an hour took a little longer; things we scheduled for a couple of hours took 30 minutes. We had to adjust on the spot.

I added a few techniques that I used at my previous companies. I like the idea of doing double Crazy 8s, where we posted our first solutions on the wall, explained them, looked at the others’, and then did the exercise one more time. My team found that very useful.

Be ready to adjust, adapt and innovate!

3. It’s about bringing people together

You would assume that smaller teams, like those in startups, communicate better. WRONG! It still amazes me how we sat next to one another but didn’t really know what the others were working on.

Through running this design sprint, the team now sees the value of learning from other departments, and how their work might affect others. Our developers used to deploy new features without informing our customer support team, who only heard about those features through customer complaints. HA!

Our customer service lead realized her team had to be more active and get involved earlier in the development process. Our team now values working cross-functionally, and they are craving more time to work together like this again.

4. Failure = Amazing Learning

Our sprint question was “Will customers understand the value of our new cashback feature?” The answer we got from the user interviews was a harsh “no.” However, what we learned along the way was much more useful than getting it right the first time.

For the first time, we actually spent time understanding the shared goal.

Our Incomplete Customer Journey & Interactions

For the first time, we actually understood the whole customer journey and how customers experience our product. Can you believe it?

5. Working individually together actually worked

Everyone talks about brainstorming sessions, and wants to do more of them to generate innovative ideas. But most of us hate attending one, because it doesn’t go anywhere and usually only one or two people talk.

During the design sprint, everyone writes down their own ideas and shares them afterwards. You will be amazed: when I asked the team what we should do, everyone was silent. But when I told them to write their ideas down in five minutes, the room filled with Post-it notes. I now apply this technique to most of my meetings.

6. The design sprint handles office politics perfectly

How many times have we voted on an idea, only for a co-founder, C-level executive or boss to come in and change the entire thing? During the design sprint, we involved the HiPPO early on, so they felt like part of the team. We let the team vote on the best idea (everyone loves democracy), and our HiPPO made the final decisions. Perfect! Now we can move on!

7. Anything can be tested, and it should be

On the third day of our design sprint, the team was a little ahead of schedule, so we decided to test whether what we had been working on made sense to others. In addition to prototyping how different pages flow, we were curious to see if the copy we wrote together was easy to understand. When we tested our solution with a couple of non-participating colleagues, we realized we were pretty terrible at writing it together. So I decided to let each team member come up with their own copy. We then put the versions right next to one another, named them differently and tested them with other non-participants.

We asked our coworkers to read, explain and rank the solutions based on their preference and clarity. Who do you think came up with the easiest-to-understand explanation?

A marketer?

A designer?

Nope. It’s our developer.

To make it even more interesting, the solution from our beloved CEO consistently got the worst votes. It was painful for him to watch this happen live, but I am sure the team learned the benefit of testing our solutions before making a final decision.

Our Sprint Team Celebrating the Week

In short, my team and I learned a lot throughout this process. It has transformed the way we think and do things. I would like to thank Fira, Rebotak, Andy, Imung and Thomas for working through the holiday to finish our sprint. I would also like to thank the team at Google Ventures (Jake Knapp, John Zeratsky and Braden Kowitz) for sharing this wonderful tool with the world.

If you have any questions or suggestions, feel free to contact me on LinkedIn or Twitter!

I’m currently the Head of Product & Growth at Kulina, a food subscription startup based in Indonesia. Previously, I led the product marketing team at Product Madness in San Francisco. I also advise startups in the US and Asia on product design and growth. In my free time, I write to share my learnings on Medium.

7 Things I Learned from Running My First GV Design Sprint was originally published in Muzli - Design Inspiration on Medium.

Here’s the thinking behind my new MA in Data Journalism


Cogs image by Stuart Madeley

A few weeks ago I announced that I was launching a new MA in Data Journalism, and promised that I would write more about the thinking behind it. Here, then, are some of the key ideas underpinning the new course — from coding and storytelling to security and relationships with industry — and how they have informed its development.

1. Not just data journalism, but data storytelling: video, audio, text, visuals, interactivity — and UX

In designing the course I wanted to ensure that students thought about how to tell their data stories across all media — not just text and datavis.

I created a central Narrative module which gives students the technical and editorial skills to report a story across multiple platforms and media. That includes video and audio, techniques of longform immersive storytelling, social media-native data journalism, and visual journalism techniques (“Overview, zoom and filter, then details on demand“).

The module also looks at how to employ narrative techniques in interactivity too — after all, what is the “user journey” in UX, but another narrative?

2. Coding in a journalistic context, not a computing class

It’s no surprise that I’ve decided to make coding-as-journalism a central part of the MA in Data Journalism.

We do not send journalism students to the Law faculty to learn Media Law, or to the English faculty to learn about subbing and style, so I wanted to ensure students learned coding in a journalistic context too.

Doing so means students get editorial, ethical and legal guidance in class alongside technical support. Hackdays, Hacks/Hackers meetups and other collaborations with computing and other faculties provide opportunities for cross-disciplinary innovation around shared objectives.

Equally importantly, teaching this way makes for a more efficient and pedagogically effective experience for the student: being taught how to make a for loop or generate a range of numbers within the context of writing a scraper or an interactive story makes for a much more rewarding learning experience than learning the same skill in an unrelated context (indeed, it’s why I wrote my books on scraping and spreadsheets).
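The kind of exercise described above, learning a for loop in the course of building a scraper, can be sketched in a few lines. This is purely illustrative (the URL pattern below is hypothetical, not from the course), but it shows how generating a range of numbers maps directly onto a journalistic task: listing the paginated result pages a scraper will visit.

```python
# A common first scraping exercise: use a for loop and a range of numbers
# to build the list of paginated URLs a scraper would visit.
# The base URL below is made up, purely for illustration.
BASE = "https://example.gov/spending/reports?page={}"

def build_page_urls(first_page, last_page):
    """Generate one URL per results page, inclusive of both ends."""
    urls = []
    for page in range(first_page, last_page + 1):
        urls.append(BASE.format(page))
    return urls

urls = build_page_urls(1, 3)
```

The loop itself is the same construct any computing class would teach; the difference is that here the student immediately sees why it matters for reporting.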

A final reason for keeping coding teaching in-house is that journalists are ultimately judged on their reporting over their coding, and I felt data journalism teaching and assessment should reflect this: the point of the modules is not to merely demonstrate technical excellence, but rather to demonstrate how technical skills can be used to facilitate journalistic excellence.

Striking, original stories made possible through creative application of coding and other data skills will impress potential employers much more than something technically impressive but journalistically basic.

3. Three languages — and computational thinking


Teaching coding this way means I can introduce students to at least three different programming languages in at least three different modules across the course: R and JavaScript in the first semester, then Python for advanced scraping and other investigative techniques such as network analysis in Specialist Reporting, Investigations and Coding.

SQL, regex, command line and Git are all covered too.

Teaching 3 languages allows students to learn the underlying techniques of coding which are language-neutral: being able to identify an editorial challenge using computational thinking, and then find the relevant libraries and examples which will help them to solve it (and understand the documentation).

It also means they can adapt to a newsroom which prefers one or more of those particular languages, or communicate with developers who use different languages.

4. (Re)inventing the data journalism workflow


Newsroom image by Rob Enslin

Most newsrooms have gone through some form of reorganisation in the last decade, and are likely to do so again in the next.

The introduction of data journalists or interactive project teams has often been part of that — but we still don’t know the best way to fit those data journalists into the wider organisation, or organise those teams.

We also need to be thinking about how the integration of data journalism and its workflows affect the journalism itself. To list just five questions facing us:

  • To what extent are data journalists choosing to report on certain subjects over others because the data is more readily available?
  • How does a journalist in a broadcast organisation work differently from one in a print or online-only publisher?
  • When developer time is expensive and a bottleneck to innovation, how does that shape what can be done editorially?
  • What automation can we build into our workflows — and what issues does that raise?
  • And of course, how does the CMS limit us editorially — and how do journalists get around those limitations?

I wanted to make sure that students had an opportunity to explore these questions in practice as they organise their own newsrooms alongside students on the MA in Multiplatform and Mobile Journalism.

When in their later career they begin to form their own data units in media organisations, or are invited to contribute to yet another reorganisation, it’s important that they are able to make informed decisions.

5. Media law and ethics — and technical defence

I wanted the law element on the MA Data Journalism to not only address regulatory frameworks, but also specific considerations when dealing with data. That includes information security, the ethics around issues such as personalisation and mass data gathering; legal considerations such as data protection; and the use of laws such as Freedom Of Information.

It is one of the peculiarities of our age that it is no longer enough for a journalist to be able to mount a legal defence to protect their information and their sources; they must now be able to mount a technical defence as well.

6. Specialist and investigative skills alongside technical skills

MA student Carla Pedret's final project was shortlisted for the Data Journalism Awards


Most data journalists operate much as specialist or investigative reporters do: focusing on a particular field and trying to understand how information is collected and stored within that.

I wanted students to have an opportunity to develop that specialist knowledge, and exercise data journalism skills alongside other important techniques such as analysing company accounts, interviewing, and understanding how a system works.

After all, a data journalist can only work with the data they have access to. And having good data often means knowing where to look, who to ask, and how to understand the context in which it has been gathered. This part of the course also provides an opportunity to create striking original reporting which builds the student’s reputation.

7. Working with industry — and communities of practice

Many current data journalists arrived in their roles through internal routes: they worked freelance on particular projects that needed data skills, and those roles were later extended or made permanent.

I regularly field calls from media organisations asking for students with data journalism skills to help with a story, and so for the MA in Data Journalism I worked to formalise those relationships in a range of ways.

These relationships cover local and national newspapers, magazines, broadcasters and online-only publishers both in the UK and internationally.

It means that students have access to a range of opportunities to work on industry projects, and can easily seek out potential clients whose problems they can take on for the module addressing enterprise and entrepreneurship. The idea is not just to create opportunities for students, but also to build capacity in the industry itself.

Just as important as industry are the wider communities engaged in data journalism (more here): a lot of research has been done into the intersections between journalism culture and hacker culture, and I believe that it is important that data journalism students engage with the various online networks surrounding data and coding.

Those are the communities which will support the student long after graduation, as new tools and techniques come along — and new stories.

I’d welcome any thoughts on the course and other elements which should be included.


Welcome to eLife 2.0

We’re moving the online journal forward, and we hope you’ll join us on our journey.

Illustration by Davide Bonazzi

Why do we need an eLife 2.0?

When I joined eLife just short of two years ago, I moved from a long stint in the corporate high-tech space into the scientific publishing realm, and got a serious taste of professional culture shock. Having spent much of my career as a user experience practitioner, I was taken aback at the state of online science journals when compared directly against the well-established standards of usability, accessibility and design that have become commonplace in the online services, retail, technology and social media spheres.

Responsive design, the ability for content to adjust its presentation seamlessly from one device to another, is still a rarity in the journal space. So are applied UX design principles, affordances for multiple reading and browsing behaviours, offline reading and performance optimisations, and a number of other practices most of the rest of the web takes for granted.

The eLife website did many of these things right, but it was still far from ideal. Born of a need to get up and running quickly in the fledgling days of the organisation, and tasked as an effective delivery vehicle for an innovative peer-review process, the site followed the standard online journal practice of displaying as much information about the article as possible, without a great level of care about whether, and how, this information should be prioritised.

The new eLife homepage, left, next to the old

The result, again typical of many online science journals today, was a somewhat cluttered article view brimming with seldom used links and metadata, a sub-optimal visual hierarchy, and an overall presentation that most users found less preferable than the stalwart format for research consumption, the venerable PDF.

The observation that the PDF, a proprietary legacy of the printed page era, still represents the dominant form through which the latest research is accessed, suggested that we as a journal had a real opportunity to look closely into what we could do to help our users transition towards more dynamic, rich, and interactive research artefacts that can more fully and faithfully convey the nuances of scientific discovery.

Thus began the effort that became eLife 2.0, a user-centred, bottom-up overhaul of our online journal aimed at delivering three key goals: deliver the best article reading experience, highlight content that enriches the research, and make it all easy to find. All while setting down a modern foundation for future innovation in the research publishing space.

Goal 1: deliver the best article reading experience

In an era of dynamic web content, open data, and increasingly powerful browsers, the PDF still rules supreme as the primary form through which new research is read. This is a shame, because it means fewer researchers are taking advantage of opportunities to more fully convey the full, reproducible narrative of their research through tools like interactive figures, executable code, and live data visualisations.

Our user research indicated a number of rather predictable reasons for PDF’s continued dominance as a research consumption format, including portability, offline reading use cases, and good old fashioned familiarity. But one of the most prevalent reasons given by our users for preferentially using PDF really surprised us: “because it looks nicer”.

Aesthetics are easy to underestimate in the realm of reason and scientific rigour, but even the most analytical mind is still human after all and, whether consciously or not, appreciates reading experiences that minimise visual noise, reduce distractions, and therefore limit sources of cognitive load beyond what’s needed to interpret the content being read. In the case of research articles, “looking nice” is in fact a proxy for some important aspects key to an article’s readability:

  • A clean layout, with a clear visual hierarchy
  • No intrusive sidebars, calls to action, thumbnails or adverts
  • A legible, high-contrast font
  • Good-sized figures that can be consumed alongside the article text

These aspects formed the basis of eLife 2.0’s article page design, and went on to influence every major design decision throughout the rest of the site.

The new Article page (left) drastically simplifies scanning for relevant information against the previous design

Inspired in part by research into reading behaviours, as well as reading-focused platforms from Medium to the Amazon Kindle, the new article page in eLife 2.0 provides a clean, distraction-free layout with a flexible navigation structure that supports how reading and skimming behaviours vary across desktop and mobile. In addition, eLife 2.0 article pages more seamlessly embed our signature Lens view (for simultaneous text and figure browsing), which is now accessible by selecting the Side by Side option alongside the traditional Article and Figures views.

The new article pages are also optimised for reading across a wide range of devices and screen sizes, and should load significantly faster than our previous site thanks to a streamlined page asset budget and our new IIIF-enabled figures. IIIF, the International Image Interoperability Framework, lets us serve the large images behind our figures more efficiently by delivering only the minimum image data needed to render an image at any given screen size.
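The trick behind this efficiency is that the IIIF Image API encodes the region, size, rotation, quality and format directly in the URL, so a client can request exactly the rendition it needs. A minimal sketch of how such URLs are built (the server name and image identifier here are hypothetical, not eLife's actual endpoints):

```python
def iiif_url(server, identifier, width):
    """Build a IIIF Image API URL for the full image region,
    scaled to the given width in pixels.
    Path segments: {identifier}/{region}/{size}/{rotation}/{quality}.{format}
    """
    return f"{server}/{identifier}/full/{width},/0/default.jpg"

# A phone-sized screen can ask for a 400px-wide rendition...
small = iiif_url("https://iiif.example.org", "fig1", 400)
# ...while a desktop layout asks for more pixels from the same source image.
large = iiif_url("https://iiif.example.org", "fig1", 1600)
```

Because the same source image serves every request, the publisher stores one high-resolution original while each reader downloads only what their screen can actually display.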

IIIF also forms the basis of further enhancements to the site. Shortly after launch, we will be deploying an enhanced asset viewer that will take full advantage of IIIF’s feature set. This new viewer, which we’ll also release as a standalone open source component, will allow figures viewed in full screen, and all their supplements, to be zoomed with very fine granularity thanks to OpenSeaDragon. This will let users interact with very large, highly detailed images with no performance penalty on almost any device. In the future, IIIF will also allow us to explore the ability to let users annotate our figures, as well as a number of other interesting use cases.

And what about offline reading? We’ve now laid the groundwork for implementing Service Worker, a technology that will soon let eLife 2.0 articles be cached locally on your machine, so that once you’ve browsed to an article you can go offline, reload it from your browser history or a bookmark at a later date, and it will still be there waiting for you. Or, you could download the article into ScienceFair and take advantage of its powerful manuscript management functions while enjoying your papers through our eLife Lens view.

Goal 2: highlight content that enriches the research

eLife’s in-house teams generate a large amount of content, such as Insights, Feature Articles and Editorials, that add both context and perspective to the science we publish. These articles also appeal to a broader, less specialised audience, and promote a shared understanding of scientific discoveries across disciplines.

In eLife’s previous iteration, this content could be quite difficult to find (some users didn’t even know we had a regular podcast), so we wanted to make sure it all had a proper home: a portal where readers could more easily access this material.

Our brand new Magazine section highlights the content that builds on the research.

The Magazine section of the site was created as just such a portal, showcasing everything from our latest Insights, Feature Articles and Editorials to subject-specific Collections, our Podcast and a range of content for and about early-career researchers.

It is our hope that the Magazine will establish itself as an alternative entry portal to eLife’s content that appeals both to specialists who are keen to read about developments in other fields, and to non-specialists looking for a perspective on the latest research across the life sciences.

Goal 3: make it all easy to find

Search was never a strong use case for the eLife website, given the majority of our visitors originate from search engines such as PubMed and Google Scholar, and enter our site directly at the individual article level.

But search is not the only way to find relevant content, which is why we have completely overhauled both the site’s navigation and its content recommendations infrastructure. This means that specialists can jump directly into their subject of interest from the home page without having to so much as scroll, or get there directly from an article by clicking the appropriate tag.

In eLife 2.0, research categories are now treated like mini home pages in their own right, presenting relevant, up-to-date Research and Magazine content in a single organised view — allowing users so inclined to bookmark their research category of choice and streamline their browsing experience.

The new research category pages (left) provide custom landing pages for research specialists.

Finally, when you’re done reading an article in eLife 2.0 you will be discreetly introduced to a set of closely related articles, to help you discover more relevant science, or to more easily follow the full narrative of a discovery without leaving the article page.

And yes, we have also overhauled our global search capability to cover all content types across the site and provide more intuitive filtering, so you should always get the most relevant results whether you’re searching for an author name or a model organism (or anything in between).

Performance, patterns, and giving it all away

eLife 2.0 is the result of roughly eighteen months of user-centred design, development and testing, with many key site features having seen dozens of iterations as a result of direct user testing and performance optimisations. And while we’ve made every effort to ensure the site serves as a good example of UX and accessibility best practices, we know others can do even better.

To that end, we will soon be releasing all the UI patterns, style guides and key visual and code assets developed for eLife 2.0 under a CC-BY license as part of eLife’s Continuum publishing platform, in the hope that they will lower the barrier to entry for other publishers to use what we learned in the making of eLife 2.0 to improve their own user experience.

By using these assets, other publishers will also be able to take advantage of the substantial performance gains delivered by our minimalist design patterns, while making their own sites substantially more maintainable by adopting our novel approach to the use of PatternLab for the modular implementation of the site’s front-end (a post on the subject will follow soon).

We have worked hard to make sure that eLife 2.0 brings across-the-board improvements to our users, and now that the site is out we have many new features in the pipeline, to be announced in the coming months. We will be measuring the success of the project not only by its impact on the usual visitor metrics, but also on whether and how the assets and resources we release to developers are used, and whether they are judged as helpful by the community.

If you have any feedback on the new site, we’d love to hear from you. If you like what we’ve done, please tell your friends and colleagues. If you don’t, please tell us!

Thanks for joining us on the next phase of our journey.

What makes Top Tech Companies Successful?

Lessons learned from Google

I recently read a book called “How Google Works” by Google’s Eric Schmidt and Jonathan Rosenberg. It was an amazing read, filled with insights into the basic values Google was built on and the reasons it became so successful, from hiring top talent to centering products around users (with great stories to support the reasoning!)

Using Google as a reference to look at any top tech company’s success, how can we apply their advice to our design work? I compiled insights from the book that really resonated with me and how they connect to thinking about the design process and leveraging that to create a successful product or service.

They put their users first, always

The reason a product or platform is so good is that the emphasis is on its users, not the product itself. This means addressing a need or problem users never realized they had, until a solution comes along that transcends current product offerings.

Remember when Google released its search engine and it became the engine everyone uses today? That’s because its primary goal was to provide the best experience for users looking up publicly accessible information, something people wouldn’t have thought to change, given the search engines of the time. Existing engines would return results when you typed something, but the results were often poor. It was as if you typed the name of a specific car, and the first link was to an art show that merely mentioned the car in its description.

Serving our end users is at the heart of what we do and remains our number one priority (214)

When Google, or any company, says “end users”, they mean users who have been enlightened as a result of their products. The product doesn’t just address their needs; it provides a new perspective, a new way of doing things.

Google believed it could create a better product than what existed at the time, with one purpose: a search engine whose single goal was surfacing accessible information in the context of what a user needs. With the rise of web pages and digital information, how could Google facilitate a seamless connection that would give users what they were looking for, and ultimately address the unrecognized pain point of finding relevant information in one search? They started with their algorithm, PageRank, which ranked the web pages returned for a user’s query. Now, with the plethora of data that exists and the number of people who use the internet, Google Search has become a necessity when searching for any information, thanks to the refinement of the algorithm and the features added to make Search diverse and valuable for users in need of quick, accessible information.
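The core idea behind PageRank can be sketched in a few lines: a page's rank is the probability that a "random surfer" lands on it, with each page passing its rank along its outgoing links. The toy power-iteration version below is for illustration only, not Google's production algorithm, and the three-page link graph is invented:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank by power iteration.
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with equal rank
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}  # random-jump share
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share         # pass rank along each link
            else:
                for p in pages:                  # dangling page: spread evenly
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

# Three pages: A and C both link to B, so B earns the highest rank.
ranks = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
```

The insight the article describes falls out directly: B is "referred to" by more pages, so it ends up with more rank than A or C, without anyone hand-labelling which page is best.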

They value technical insights over market research

When Google Search launched, it differentiated itself by placing emphasis on credible sources such as academic websites. This came from the insight that the quality of a web page, and how well it answered a user’s search, could be judged by the number of pages linking to it (the more referrals to a web page, the better its content is likely to be).
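The link-counting intuition described above can be sketched as a toy power-iteration. This is purely illustrative, not Google’s production algorithm; the hypothetical `web` graph, damping factor, and dangling-page handling are my own simplifying assumptions:

```python
# Toy sketch of the PageRank intuition: a page's score depends on how many
# pages link to it, weighted by those pages' own scores. Illustrative only.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# A hypothetical three-page web, echoing the car/art-show example above.
web = {
    "car-review": ["art-show"],
    "art-show": [],
    "hub": ["car-review", "art-show"],
}
scores = pagerank(web)
```

The page with the most (and best-ranked) inbound links ends up with the highest score, which is the “more referrals = higher quality” insight in miniature.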

Since then, most of Google’s successful products have been made with technical insights in mind, while the least successful ones have not.

Technical insights build great products. People will recognize a good product regardless of how it is marketed. Bad marketing can’t save a mediocre product.

The book lists some of Google’s most successful products alongside the technical insights they are based upon:

AdWords– Ads could be ranked and placed on a page based on their value as information to users, rather than just by who was willing to pay more

Google News– Stories could be algorithmically grouped by topic, not source.

Google Chrome– Browsers needed to be reengineered for speed as websites grew more complex and powerful.

The point is that any successful product uses technical insights to find new ways of developing technology. This can mean “driving down the cost or increase the function and usability of the product by a significant factor” (71).

Giving the customer what he wants is less important than giving him what he doesn’t yet know he wants (73)

Technical insights differ from market research in that market research only tells you what already exists. It gives you no insight into user behavior or how to do something new. In fact, market research narrows your thinking about the future: it focuses you on solving existing problems and tells you nothing about problems users don’t yet realize they have. As Henry Ford said, don’t look for faster horses.

Technical insights are based on a concept called “combinatorial innovation”: combining or recombining what already exists to create something new.

One way of developing technical insights is to use some of these accessible technologies and data and apply them in an industry to solve an existing problem in a new way (75)

Components currently driving the wave of new inventions include (digital) information, connectivity, and computing. We have the opportunity to use copious amounts of the world’s information, computing power, open source software, and APIs that open up platforms with vast amounts of data on weather, economic transactions, and even human genetics. These tools can be used to develop powerful insights and drive change in an industry.

They care about growth first

When building their products, companies like Google and Amazon prioritized scale over what was traditionally considered growth. Traditionally, a company became big by first creating a product, achieving success locally or regionally, and then growing by building sales, distribution, and service channels while ramping up manufacturing to match. Getting there was slow and time consuming, and that doesn’t bode well in the internet world, where competitors move faster and more effectively. You need a “grow big fast” strategy.

Tech giants re-invented the meaning of what growth could become

Successful companies scaled their products, focusing on how fast they could release to consumers and growing very quickly, eventually globally. A platform is easier to scale and reaches a broader audience than a product, which offers little room for connection. These companies understood how to create and rapidly grow platforms that connected a wide range of users and providers to serve multi-sided markets.

By building a platform, you can support a network that connects millions of people and provides value for everyone in a short period of time. An example is YouTube, a platform that lets you create videos and share them with a global audience. By adding videos and joining the community, you add value and contribute to the growth of what the platform can offer in the future. For users, that value is quality content and new information; for Google, it is more investment; for investors, it is ROI.

Platforms, not products

On the verge of being bought out, Google and Facebook each decided to focus on building a platform first rather than profiting from their “product” in the short term; the platform “became more valuable, attracted more investment and helped improve the products and services the platform supports”. When you monetize your product too soon, you sacrifice your brand as well as your user experience (ads over product functionality is a big no-no).

They encourage relationship building with other companies and themselves

When your platform is open, it tends to scale more quickly. The internet is a good example of an open platform: so many people have contributed to it, allowing connection and communication across a wide range of networks. It may seem counterintuitive to share your intellectual property for fear of losing your competitive advantage, but you are trading control for scale and innovation. Open networks drive innovation into the product ecosystem while lowering the cost to build. That means more value for users (who gain ownership of the products they want to keep using) and more growth for the ecosystem.

Google has utilized the talent of thousands of users to help innovate on its products: Google Translate, for instance, draws on help from users around the world to constantly improve translation quality across languages. Another example is Android, its mobile operating system, which grew immensely with the rising demand for smartphones and can now be found on a wide range of phones from different manufacturers.

But not all successful platforms are open. Apple is a closed system for understandable reasons. It didn’t want to sacrifice the quality of its products and wanted full control over providing the best products for its users, establishing itself as a company that created products its own way. Steve Jobs believed the only way to do this was to provide a controlled and predictable environment in which Apple products were to be used.

They don’t follow the pack, they lead

When we focus too much on our competition, we fall into mediocrity. We spend too much time watching what competitors are doing, and when we try something new we rarely take big risks, so we end up with incremental, low-impact changes. In other words, by focusing too much on competition, you will never deliver anything truly innovative. Successful companies let competition keep them sharp, then diverge to create something different, something better.

When Microsoft released Bing in 2009, Google saw the need to differentiate its search engine. With that in mind, it created new features such as Google Instant, which shows results while you are typing, and Image Search, which lets you drag an image into the search box to find it. This created a distinction between the two search engines.

Google Instant
Image Search

How exciting is it come to work if the best you can do is trounce some other company that does roughly the same thing? (91)

The best companies don’t follow the competition; they use it as a spur to be better. Instead of fighting their competitors, they keep improving their products and expanding their platforms to stay on top of the game. They think ahead about what they can do in the future, not just now.


How did the companies we know and love today become so successful to begin with? Simple. They weren’t afraid to make a difference even if it meant starting from nothing because they could find new ideas from what already exists and innovate upon them.

You will never disrupt an industry or transform your business, and you’ll never get the best smart creatives on board, if your strategy is narrowly based on leveraging your competitive advantage to attack related markets

Disruption and innovation go hand in hand: disruption creates the opening for innovation, giving everyone the opportunity to innovate through incremental change. This is what creates products that are “new, surprising and radically useful”.

They also aren’t afraid of failure. Successful companies have shipped products constantly and failed numerous times to get where they are now, and they keep doing so to stay on top of the internet space.

Focus on the user and all else will follow

When creating products, a company’s goal should be to produce as much as possible while investing as little as possible until it can validate its ideas and know for sure that the product will satisfy users. This is an important step to take before scaling the platform and growing the product ecosystem. That holds not just for Google, but for Amazon, Apple, Facebook, and others.

If you have questions or just want to chat, feel free to connect and message me on Linkedin 🙂

If you liked my post, please recommend it!

What makes Top Tech Companies Successful? was originally published in Muzli -Design Inspiration on Medium, where people are continuing the conversation by highlighting and responding to this story.

Refreshing The Atlantic Homepage in 2017

Process, methodology, and the path forward

Our last major redesign, in April 2015, explored the idea of “a real time magazine” and the implications therein. Stories had large images, the page was fully curated, and there was modularity, which in theory provided editorial flexibility. The hope was that we could give the entire page the same care and attention as the top of the page. Over the next year, through A/B testing and analytics, we found that putting more stories higher on the page and reducing the size of the images increased engagement. So while the original vision was off the mark, through iteration we ended up with a performant page that has served us well for the past two years.

But as time went on, and editorial needs changed, we started thinking about where we wanted to go next. We began by talking to our readers, to find out why they use our homepage. We spoke with our editors, to learn about operational pain points and how they are thinking about The Atlantic going forward. And finally, we spoke with our sales team to better understand how the homepage fits into their strategy.


We embarked on a series of user tests in 2016 to help us better understand how readers use the homepage. We spent two days with 10 readers where we tried to contextualize their habits and bring nuance to our quantitative data. Some highlights:

  • Readers’ mental models are much different from our own. Those who use the homepage treat it as an index of the site’s content, not a subset.
  • Related, it was difficult for users to find the latest stories.
  • Readers often have preferred writers that they look for.
  • When browsing on a traditional computer, e.g. something with a keyboard and mouse, they almost all had two modes: discovery and consumption. In discovery mode, most readers opened multiple stories in new tabs before switching to consumption mode. As one reader said: “I do the filtering first and the reading second.”
  • A large portion of homepage readers access the site through the homepage only.
  • Those who watch video tend to do so in the evenings.

Editorial Staff

We interviewed the editorial team to find out how The Atlantic editorial strategy has changed and what we can do to improve their workflow. We discovered that an entirely curated homepage, while nice in theory, is operationally difficult and taxing on the team. However, it was clearly expressed that they wanted to maintain a curated presence, albeit a dramatically reduced one. The other request was more density and clearer hierarchy so that they could be more flexible when there are multiple stories that they want to highlight.


Previously, we shared how the engineering team is working on ways to monitor and improve ad metrics, primarily viewability and engagement, two things that are impacted by page speed and position. As we approached the homepage, we had a mandate to take a proactive approach and think about ad performance holistically. We also wanted to update our native promos to be, well, more native while still being distinct from editorial content.

The Solution

We distilled our competitive, stakeholder, and user research into two points:

  • What’s important?
  • What’s new?

Designing based on these questions, we ended up with two distinct sections: important and curated (top) and new and programmatic (bottom). By establishing this framework, we were able to meet editorial and reader needs in an elegant and coherent manner.

Important and Curated

Anchoring the top is a lead story, a familiar paradigm to readers and editors alike. Then we have two areas which serve to showcase our breadth: a discrete auto-populated News section and a group of three curated stories which can each have related links to provide more depth and context.

One of the major changes we implemented was adding a new area called Featured. By playing with the hierarchy, this area serves to highlight stories which might not be in the news, but that we think are worth our readers’ time.

New and Programmatic

This section is anchored by the Latest module, a reverse chronological river of stories, so it’s easy for frequent visitors to catch up on what they missed. We redesigned our river item to be at once more scannable for readers and flexible for editors. Given our increased focus on video, we also added an inline video module to showcase our latest content.

Second, we moved our Popular module up the page. Users would go hunting for a list of popular articles, so rather than make them look, we surface it as soon as possible.

Third, we implemented a new Writers module which functions similarly to the river: when a writer publishes a new article, they appear at the top, and as more stories are published, they move down the module. The intention is to expose both established and newer writers to our audience in order to build both trust and familiarity.

Finally, we implemented multiple promotional modules that were designed to be flexible and able to promote anything from a newsletter, to a podcast, to a new magazine issue.


Five years ago our President (then Editor), Bob Cohn, reflected on the role of the homepage in 2012. He noted that “the homepage is the single best way for editors to convey the sensibilities and values of their websites […] the homepage is, as the marketing team would put it, the ultimate brand statement.”

I think those words still ring true. A “brand statement” for The Atlantic in 2017 meant that we should focus on the stories, make it easy for people to find something to read, and get out of the way. While obvious in retrospect, aligning the company around that vision takes a lot of upfront work, but in the end is worth it. However, despite our research, prototyping, and deep focus on our users, I suspect we got some things wrong. We already have multiple A/B tests planned for launch and I’m excited to see how the design evolves.

Refreshing The Atlantic Homepage in 2017 was originally published in Building The Atlantic on Medium, where people are continuing the conversation by highlighting and responding to this story.

Death to Assumptions, Long Live User Testing

How The Atlantic incorporates customer feedback into its product development cycle

“You are not your average customer.” I tell this to new product managers, and it’s a refrain that we repeat often on our team. We know our site inside and out, and can easily be blinded by our own organizational preferences and biases. In an effort to see more clearly, testing has become critical to our product development and user experience work. We need to observe how real customers, not Atlantic staff, experience our site.

At The Atlantic, we do two types of testing: on-page multivariate testing and usability testing. We began robust A/B testing in early 2015, and have recently launched a new effort to make usability testing part of our regular product development cycle. We are a relatively small team of two designers, four product managers, and seven developers responsible for all of our digital display and distribution. Our ability to incorporate testing demonstrates that it doesn’t require a large team or lots of money to generate valuable insights.

Some people ask what usability testing adds that we can’t learn from analytics. One of the main benefits is hearing about our customers’ thought processes and perceptions of the site. It’s helpful to hear people describe how they pick stories they want to read. Likewise, it’s important to understand which parts of our site are frustrating to use. For example, analytics can tell us how many newsletter subscribers we have. The number alone tells us if people are subscribing, but without usability testing we don’t know if the process is frustrating or confusing. What’s more, without talking with customers we don’t know how many people might have been interested in newsletters, but didn’t know they existed or where to sign up.

Our Experience with Usability Testing

Our team has experimented with usability testing in the past — particularly before important launches. We wanted to create a process that would be affordable, informative and easy to deploy regularly for our small team.

Before joining The Atlantic, I worked in new product development for a major financial services company. We did extensive user testing and I spent many hours behind one-way glass watching real customers interact with products in development. At The Atlantic we wanted to incorporate the same level of testing rigor without the research facility budget or personnel.

Here’s how we did it and what we learned along the way.

Set the Scope

For the pilot round of our new approach to usability testing we focused on our core homepage audience. We have a very stable, loyal and engaged group of people who visit the homepage regularly. The homepage was originally designed to meet their needs with curated top stories, topic-specific modules and frequent updates. We wanted to understand when, why and how this group visits the homepage; what they love or hate; and whether we have the right mix of content for them. Our goal was to improve the customer experience, find ways to increase the number of articles read per visit and encourage people to return more often.

Recruit Participants

We recruited by posting a request for volunteers on our homepage. We narrowed the list down to folks who lived in the DC area, visited the homepage multiple times per week, and agreed to let us film the conversation for internal use. Then we invited a mix of ages, genders, and magazine subscribers and non-subscribers into the office to meet us in person. As a thank-you, we offered participants Atlantic swag.

Write the Script

To start the process we clearly outlined our goals and what we wanted to learn, and from there devised questions and tasks for the participants. When we wanted to understand how easily customers could accomplish basic objectives like changing their print mailing address or finding an author they’d heard about on TV, we wrote tasks and observed how participants approached them. For more behavioral insights, like when and how people watch video, we simply asked customers about their habits.

Practice Makes Perfect

Then it was time to practice interviewing people so we learned how to listen, probe for more information and avoid guiding the participant to an answer we wanted to hear. It’s awkward sitting in silence next to someone while they try to figure out how to use your site. Practicing on a safe person, coworker or otherwise, is time well spent so the real thing goes smoothly.

The Big Day

Our product and dev team took turns moderating interviews in a conference room, while a rotating cast of observers watched and took notes in another room. From our final script, we created a note-taking template for the observers to quickly track key takeaways and interview details.

We used a speakerphone and a screen share to let the backroom listen and watch the sessions, which were also recorded for future reference. After each session and at the end of both days, the moderators and observers regrouped on what we learned, the trends that were emerging, and whether we needed to adjust the script.

What We Learned

The feedback we heard was full of encouraging reinforcement about what we’re doing right. We learned how many regular homepage readers use the page as an index in a way we hadn’t expected. We also had several a-ha moments where we realized our site had confusing UI or unclear naming conventions.

We summarized all of our findings into an internal cross-functional presentation. The recommendations included immediate changes to the site that we’ve already made, such as including more links to service print accounts; ideas to A/B test, like new modules on our homepage; and challenges for our designers to work on resolving, including upcoming improvements to our site navigation.

Refining the Process

We also evaluated our approach to usability testing to identify what worked well and what we could improve. We found our first round of testing, which had ten interviews over two days, offered diminishing returns after the seventh or eighth interview as we stopped hearing new insights. In the future, we’ll cap the number at seven. We also learned we budgeted too much time between conversations and that 30 minutes is sufficient. Fewer sessions, and a denser schedule, means we can fit all the interviews in a single day.
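The diminishing returns we saw track the widely cited Nielsen–Landauer model of usability testing, in which the expected share of problems uncovered by n participants is 1 − (1 − L)^n for a per-participant discovery rate L. A quick sketch, with the caveat that the classic L ≈ 0.31 figure comes from that literature, not from our own sessions:

```python
# Expected share of usability problems found after n test participants,
# per the Nielsen-Landauer model: found(n) = 1 - (1 - L)^n.
# L = 0.31 is the commonly cited per-participant discovery rate, used here
# purely for illustration -- it is not derived from our own data.

def problems_found(n, discovery_rate=0.31):
    return 1 - (1 - discovery_rate) ** n

for n in (1, 5, 7, 10):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems found")
```

Under that model the curve flattens sharply past seven participants, which matches what we observed: later sessions mostly repeat earlier findings, so capping a round at seven trades little insight for a much tighter schedule.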

The hardest thing to predict was the pace and length of the script. We thought we had plenty of questions planned with extra topics in case a session moved especially quickly but we still ended a few conversations slightly early. Time with readers is precious so it felt like a shame not to use every second. In other conversations, we had to truncate sections of the script in order to make it through all topics. We will tweak our script in the future and include more backup questions to make full use of our allotted time.

We have since done a second round of usability testing, focused on the nav and article page, and the changes we made to our process were definite improvements. We’ll keep optimizing as we go, but we’re pleased with the process we’ve put in place.

Death to Assumptions, Long Live User Testing was originally published in Building The Atlantic on Medium, where people are continuing the conversation by highlighting and responding to this story.

Proudly powered by WordPress | Theme: Baskerville 2 by Anders Noren.
