Each rose to power by a different route, with Facebook and Google dominating digital media, and Amazon close on their heels. Today, Facebook, Google and Amazon offer up their own brand of the marketer’s holy grail: people-based targeting. There is a slight catch, of course: it works only within their walled gardens.
Each platform provides people-based targeting capabilities against unique consumer profiles, and each is well positioned to help marketers at different points along the path to purchase. These three titans also have a common denominator: Identity Resolution.
Identity resolution – the ability to connect people, data, and devices – is the foundation for powering people-based targeting. The big 3 can identify consumers with precision at scale across desktop and mobile – no small feat in today’s fragmented digital media landscape – and apply data on those individuals to each and every marketing touchpoint. And it’s hard to argue with the results they’ve seen.
So how do the big 3 compare, and how can marketers tap into these powerful capabilities outside of the walled gardens?
But first, what is Identity Resolution?
Identity resolution, at its core, is the ability to link people, data, and devices. It allows marketers to recognize individuals across channels and devices, and use first, second, and third-party data to apply context – like interests and past purchases – to each interaction. It is the foundation for all people-based marketing.
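Conceptually, that linking step resembles merging identifiers into a single person-level profile whenever two of them are observed together (say, an email and a cookie seen in the same login event). The sketch below is purely illustrative – a simple union-find structure with hypothetical identifiers, not any vendor’s actual system:

```python
# Illustrative identity-resolution sketch: identifiers (emails, cookies,
# device IDs) observed together are merged into one person-level profile.
# A union-find (disjoint-set) structure keeps the merges cheap.

class IdentityGraph:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        # Register unseen identifiers as their own root.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record that identifiers a and b belong to the same person."""
        ra, rb = self._find(a), self._find(b)
        if ra != rb:
            self.parent[ra] = rb

    def profile(self, x):
        """Return every identifier resolved to the same person as x."""
        root = self._find(x)
        return {k for k in self.parent if self._find(k) == root}

graph = IdentityGraph()
graph.link("jane@example.com", "cookie:abc123")    # desktop login event
graph.link("jane@example.com", "device:iphone-7")  # mobile app login
graph.link("device:iphone-7", "cookie:xyz789")     # in-app browser match

# All four identifiers now resolve to one profile.
print(graph.profile("cookie:xyz789"))
```

In practice the hard part is the quality of the link signals, not the merge itself – a logged-in state, as discussed below, is what makes the links reliable.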
So why do the big three have such strong identity resolution capabilities, and how do they use them to the advertiser’s benefit? The answer is that their users must be logged in to use their services, so the platforms know who is viewing their ads.
The logged-in state is a powerful form of identity because it’s a more accurate data source based on verified personal information, in this case, discrete consumer accounts. And while marketers are not keen on the vice-like grip these walled gardens hold on their own data, each offers tremendous scale and the ability to track a variety of behaviors and interests at the individual level, giving them unparalleled targeting capabilities.
But which key data points should form the basis of effective identity resolution? Google’s version? Facebook’s? Maybe it’s based on Amazon’s take on what makes a customer unique. Ultimately, the context in these environments has a profound impact on where each platform really succeeds.
To Facebook, you are what you like
When you see a person on their smartphone, it’s a safe bet that they’re on Facebook “liking” things: movies, celebrities, products.
That huge investment of time on the platform means consumers leave behind hour-by-hour logs of their days that can easily translate to specific points along the path to purchase. As Kathy Norford, vp media director at Spawn Ideas, described, the platform’s tactics “complement each stage of the consumer journey.”
This is where the power of Facebook’s customer profiles and identity resolution capabilities come in. All that profile data, collected in the logged-in state, is linked to individual users’ emails. The social network also knows which ads individual consumers have seen and can constantly take the pulse of its over 1 billion users’ brand and product awareness.
To Google, you are what you search
Google’s treasure chest has always been users’ search histories. But from a people-based perspective, the real treasure comes from the scope of what it means to be logged into Google’s network.
When consumers log into Gmail, Google Maps, Google Calendar or even just their Android phones, they’re identifying themselves via their email addresses. Google also collects identifiers such as users’ phone numbers and second or third email addresses. Ever been asked to verify your account with another email, or a text message? Google has your number.
Pile a comprehensive search history on top of that valuable identity information, and marketers can target individual consumers based on their searches, hobbies, and possibly even what they’re going to be doing next month. “Maybe they’re going to be traveling and are researching a trip,” said Tim Woo, vp research and analytics director at RPA, in describing how his agency taps into Google’s unique customer insights.
To Amazon, you are what you (plan to) buy
From shower curtains to textbooks, you can buy pretty much anything on Amazon—and consumers do, frequently. Typically logged in and returning regularly for supplies, Amazon users are legion—about 300 million—and they all come attached to detailed contact information.
Amazon’s identifiers include emails, mailing addresses, billing addresses, and likely phone numbers. They may even include a user’s work address or parent’s address if those are more convenient for deliveries.
As to connecting data to consumers, Amazon’s value lies in users’ “actual purchases and purchase intent,” explained Woo. This becomes even more powerful considering Amazon’s recent developments for advertisers.
Using Amazon Media Group, agencies can now buy ads directly through the e-commerce giant. The pitch is that Amazon’s data can tell advertisers not just how users forge their paths to purchase on Amazon, but also how they do it just about everywhere.
To marketers, you are all of the above
While it’s clear that Facebook, Google, and Amazon each dominate their own realms when it comes to engaging real people, marketers haven’t had an easy time marrying these separate social, search and e-commerce campaigns. Luckily, independent identity resolution solutions exist to unify people-based targeting and measurement across these platforms and the rest of digital and offline channels, because your customers are everywhere and your marketing should be too.
Manager of media strategy for Honda and Acura, Phil Hruska, noted a “real challenge” in unifying marketing across Facebook, Google, and Amazon channels. “All of the major media companies are kind of contained environments,” he said. “They all have their own identities.” Which makes robust measurement a notorious challenge.
As savvy marketers increasingly invest in their own identity resolution capabilities, omnichannel nirvana is starting to come more and more into focus.
“AI-driven voice interfaces, such as Amazon’s Alexa, Google’s Home and Assistant, Microsoft’s Cortana, and Apple’s upcoming HomePod [are] potentially bigger than the impact of the iPhone. In fact, I’d describe these smart speakers and the associated AI and machine learning that they’ll interface with as the huge burning platform the news industry doesn’t even know it’s standing on.”
There’s a race heating up in the world of artificial intelligence, and it involves smartphones, consumers, and their voices. Voice-activated smartphone apps use a combination of artificial intelligence, cloud-based natural language processing (NLP) and machine learning to power their services – and with all the buzz around these apps like Siri and Alexa, it’s clear that tech giants are competing for yet another area of consumer attention.
While this is currently a small share of the app landscape, it’s also a rapidly evolving and expanding one, and includes apps such as Microsoft’s Cortana and a constellation of apps from Google, like Allo and Google Now. But what types of consumers are actually using them? And what are they using them for? Do voice-guided user interfaces and cloud-based AI apps represent the fourth big disruption in the world of the Internet — the first being the birth of the world wide web, the second the rise of social media, and the third the emergence of mobile apps?
Who’s using personal assistant apps? The average user is a 52-year-old woman
According to Verto’s data on the user base of AI-powered personal assistant apps on mobile devices, the personal assistant “superuser” – someone who spends more than twice the average user’s monthly time spent on personal assistant apps – is a 52-year-old woman who spends 1.5 hours per month with personal assistant apps.
In fact, personal assistant apps seem to be more popular among women overall: Verto’s data shows that women (54% of the total user base) use personal assistant apps slightly more frequently than men. And interestingly, there is a trend toward personal assistant app usage in older age groups, especially adults in the 45-54 and 55-plus age groups. Based on use cases and ease of functionality, these apps could see wider adoption among older generations as voice-activated apps become integral to assisted living communities. While most apps generate buzz or downloads from groups like millennials or Gen Z, personal assistant app users do not conform to typical “early adopter” consumer profiles, despite being a relatively new app category.
Personal assistant app behavior varies across genders
In a follow-up survey to better understand why consumers use personal assistant apps, and whether these use cases differ across gender or age demographics, results show that personal assistant app usage remains highly utilitarian. A majority (71%) of respondents indicated that asking a question or searching for something specific was one of their primary reasons for using a personal assistant app. This use case was particularly popular among women (who comprise 61% of respondents), and especially among women under the age of 30 (27%) and women over the age of 55 (12%). In contrast, men under the age of 30 comprised just 7% of respondents who cited asking questions or searching for something specific as one of their primary reasons for using personal assistant apps, while men over the age of 55 comprised just 8% of respondents.
Other (but less popular) reasons for using personal assistant apps include initiating a call or text message or checking the weather. So whether it’s a mobile app or a physical device, consumers haven’t fully integrated the personal assistant into their daily lives, and this presents a challenge for companies like Amazon and Google to continue enhancing their platforms and create the desired need in the market. At the same time, competitors like Apple see this gap as an opportunity to enter the race with their own services in an attempt to become the complete, all-in-one personal assistant for consumers.
What does this mean for assistant app publishers?
Think about the early stages of Instagram, Twitter, or even a music streaming service – it took a while for most of these apps to become fully integrated into our lives. Now, these apps have become crucial platforms for publishers to deliver targeted content based on consumer use. For personal assistant app publishers, it appears that there’s still a long way to go before consumers (of any gender or age group) begin adopting assistant apps more comprehensively throughout their daily digital behaviors.
But even in this nascent stage of the market, some clear consumer preferences are starting to emerge, and as data shows, we can map these preferences and behaviors to distinct consumer demographic groups. With women seemingly embracing personal assistant apps more than their male counterparts, and on both sides of the age spectrum, publishers, brands, and advertisers could take advantage of this opportunity to create unique experiences to these specific groups. Using AI to power these apps allows companies to offer personalized content for each individual user and execute different campaigns at varying times of the day, based on peak usage and engagement.
However, in light of all of this data, we still need to remember that personal assistant apps and devices are in their first era – it is mostly early-adopter users who use them actively and with high frequency. Like many successful technologies, they already show significant potential to replace and reshape many widely and frequently needed tasks and actions, so we need to keep an eye on this evolution going forward!
Dr. Hannu Verkasalo is the CEO of Verto Analytics, a pioneer in digital multiscreen media measurement services.
Documentary film deal between streaming video service and UC’s Investigative Reporting Program tests a new model for producing and distributing public interest reporting.
We know a few things.
Investigative journalism can be hugely important. It also can be hugely expensive.
And when it touches on unaccountable power, it needs institutional backup.
So how to support it? How to make it possible for our country to have more of it?
Our answer, at the Investigative Reporting Program (IRP) at UC Berkeley, is a novel one. We’ve set up a nonprofit company to produce, distribute and monetize the stories developed by our staff and students. And that new company, Investigative Reporting Productions, Inc. (INC), has signed what’s known as a “first look” deal with Amazon Prime Video, which is paying for the right to consider our stories before any other outlet sees them and then develop them.
Ever since the collapse of the advertising model that at one time made producers of news profitable, philanthropy has been the most obvious answer to the question of how to support investigative reporting. We see encouraging examples of that — ProPublica, the Center for Investigative Reporting, the Center for Public Integrity and others, including the IRP, a specialized institute within Berkeley’s Graduate School of Journalism sustained almost entirely by donors. Most try to sell their journalism in the marketplace, usually at prices well below the cost of production, or simply give it away.
But it’s hard to believe philanthropy is a sustainable model for the long term, with the possible exception of a small number of organizations. We’ve seen a huge and encouraging surge of demand and financial support for investigative reporting from the public in response to the election of President Donald Trump and his attacks on the press. But how long will that last?
Which brings me to the role of universities, and their journalism schools. I believe there’s a still largely untapped opportunity for journalism programs to play a more significant role in providing quality information to their communities and the nation. If they would pursue investigative journalism and act more often as news organizations, they could fulfill the mission of the university to conduct original research, educate the next generation and spread knowledge. At the same time, they could teach students the highest standards of the craft by giving them opportunities to work on stories alongside professionals.
We have seen that happen here at Berkeley with my colleague Lowell Bergman’s “teaching hospital” approach, where students, postgraduate fellows and staff contribute to large-scale investigative efforts for broadcast, print and web. Lowell is an extraordinary reporter and gifted storyteller. He’s received what seems like every award in print and broadcast journalism, but is probably best known because Al Pacino played him in the Academy Award-nominated feature film “The Insider,” which dramatized his 60 Minutes investigation of the tobacco industry. Award-winning projects the IRP has produced under his direction have appeared on Frontline and Univision, the Canadian Broadcasting Corporation, NPR and Reveal from the Center for Investigative Reporting, in the pages of The New York Times and Washington Post, The Atlantic and The Los Angeles Times.
Working with Lowell over the past year — after a career outside the academy, in leadership roles at The Washington Post, First Look Media and the late Rocky Mountain News, among others — I’ve seen that a university provides a lot of the necessary institutional infrastructure for investigative reporting. Young journalists, aka students, hungry to do meaningful work. Faculty with the experience to do it. Supportive alumni and donors. A rich intellectual environment that can be tapped for diverse expertise.
Our team of professional journalists, postgraduate fellows and students is already working on a wide range of challenging stories, in the United States and internationally. Topics include labor trafficking, climate change, juvenile justice, the military’s failure to protect the lives of U.S. servicemen and women, local corruption, the accountability of public employees and the student loan crisis.
This teaching hospital model can fill the void now that newspapers and broadcasters no longer provide on-the-job training for the next generation of reporters, editors and producers.
But something is still missing from this equation. Major public research universities are not built to operate as production companies, as news organizations. Their policies and procedures aren’t designed to support nimble decision making. They are, to put it kindly, complicated bureaucracies designed for a different purpose. And while the funding from public sources has dramatically decreased, the restrictions on these institutions allowing them to operate in the marketplace have not.
Until now, the way the IRP dealt with that was to use private companies to produce its documentaries, to separate them entirely from the university. But there were a number of problems with that approach. In the end, PBS and other outlets wound up with all the rights, even when a substantial amount of the research and reporting was paid for by the IRP and its donors. That meant the university didn’t get any revenue from rights and wasn’t necessarily able to produce future works based on the reporting of its own staff and students. If an outlet decided to kill a story, the IRP had essentially no recourse. Its work was lost. At the same time, in my view, the university was subsidizing other organizations. Of course, there were benefits, too. Much of the cost of the IRP’s work was covered by partners. And those partners gave the work public exposure.
But new opportunities have emerged with the rise of streaming video services and the resulting demand for quality nonfiction programming. While the financial terms of our agreement are confidential, it’s widely known that documentaries can cost hundreds of thousands of dollars, and it’s safe to say funding is one important reason we’re joining up with Amazon. And while we’re already discussing projects with Amazon, we’re not precluded from publishing with print, web, cable or broadcast news organizations.
Setting up our new production company will allow us to create significant new revenue streams to support the IRP’s work. And we hope it will allow us to do stories that legacy media organizations often shy away from.
The affiliation agreement gives the company the right to license the intellectual property generated by the IRP’s staff. In turn, the work, which will adhere to the highest standards, will carry the imprimatur of the university, helping to assure the public of its reliability. The affiliation agreement is an unprecedented step by the university. Its new Chancellor, Carol Christ, has publicly voiced her support, hailing it as a way of helping to ensure the future viability of the IRP itself and the opportunity for students to have a richer educational experience.
We’re not sure where this will lead. We just announced both the agreement with the university and the deal with Amazon. But we do know that today, the university is about to enter the world of exploring high-stakes stories and delivering them to the public. We also know that our agreement with Amazon gives our work the possibility of reaching a bigger and broader audience while creating a new revenue stream for public interest journalism and higher education.
Publishing to a platform other than your own website is sometimes necessary but can be scary. Discover how we adapted our editorial and analytics tools to better understand our audience and the potential of some of the new publishing platforms.
Those platforms, where users will read content or be notified that new content is available, are called “off-platform,” in contrast to news organisations’ own websites, referred to as “on-platform.”
Amazon Alexa-enabled devices like the Echo will soon be able to deliver news, weather, or health-related alerts via notifications on the smart speaker.
The move would make Alexa the first voice assistant – ahead of rivals from Google and Microsoft – whose third-party voice apps can proactively send notifications.
According to a blog post this morning, The Washington Post, Life 360, Just Eat, and AccuWeather will be the first four skills with the ability to share alerts.
Users will be required to opt in to receive alerts.
“When available, users will be able to opt-in to notifications per skill using the Amazon Alexa App and will be alerted when there’s new information to retrieve by a chime and a pulsing green light on their Amazon Echo, Echo Dot, or Echo Show device,” said head Alexa evangelist David Isbitski in a blog post today. “When users enable notifications on a skill like The Washington Post, the skill will send status updates to the device. Users can simply ask, ‘Alexa, what did I miss?’ or ‘Alexa, what are my notifications?’”
The ability to make phone calls and proactively send push notifications were among predictions about intelligent assistants in 2017 made by VoiceLabs CEO Alex Marchick in the 2017 Voice Report. The ability to make phone calls and send messages with an Alexa-enabled device was made available last week.
In an interview earlier this year, Marchick told VentureBeat he believes the ability to send push notifications and connect with friends will be critical to creating the killer app for Alexa.
Another prediction from Marchick: These notifications should be limited to three or four a day, but unfortunately for users, just like mobile, voice app notifications will get out of hand.
“First there’s going to be the capability of a push notification, and it will probably be abused, and then it will get cracked down on, and then they’ll realize that you’ve got to do this intelligently,” he said.
Amazon has not shared an expected release date for Alexa skill notifications.
This week, we’re launching the Digiday+ Slack channel. We’ll kick it off on Thursday at 1 p.m. ET with a chat I’ll lead with Neil Vogel, CEO of Dotdash (formerly About.com). Neil will discuss the decision to mothball the Internet 1.0 brand, the shift to verticals in digital media and probably some Philadelphia sports. Please join us, and stop by the Slack channel to say hello and share what would be most valuable there. And please contact me with any questions about the chat or problems with the channel.
Inside Axel Springer’s headquarters
Last week, I was in Berlin, where we held our first Digiday Brand Summit Germany, gathering together 50-plus top marketers from around Europe. More on that below. There, I stopped by Axel Springer to meet executives from the German publishing giant.
The most immediately apparent thing about Axel Springer for an American is its size. Springer isn’t well-known in the U.S., although it has gotten more attention for its purchase of Business Insider and investments in digital media businesses like Group Nine, Mic and Ozy. In Europe, Springer is a giant. Its bustling headquarters tower sits on Axel Springer Strasse. Springer boasts ownership of Bild and Die Welt, as well as a healthy marketplaces business.
I agreed to keep some of the meetings on background, but one striking aspect is the confidence its executives have as such a dominant player. German media is used to a comfortable position of power, and the alarm has clearly been sounded when it comes to platforms. “We are taking our strategic role as the biggest publisher seriously,” one executive told me. “The biggest leverage is if you stand together. It’s an influence leverage. Today we have it. I don’t know if we have it in five years.”
Springer is also pouring resources into Upday, a news aggregator app trying to basically redo what News Corp attempted with The Daily. The difference: Springer has linked with Samsung for distribution, basically serving as the Apple News on Samsung handsets in Europe. “We know how important distribution is,” Upday CEO Peter Würtenberger told me, citing the Bild’s top billing on newsstands throughout Germany. Another upside: “We want to be less dependent on Google and Facebook.”
Upday is an interesting experiment in a publisher linking with a tech giant. The startup, tucked into its own space in Springer headquarters, now has eight editor hubs, 50 editorial staff and editions for 16 countries. Of course, Springer is still in the position of being dependent on a tech platform, since its distribution advantage — it claims 8 million active users a month, 3 million a day — will evaporate should Samsung’s priorities change.
For brands, Amazon is the duopoly
Talking to marketers, Amazon looms larger than the duopoly of Facebook and Google. As one marketer explained at our event, Amazon plays two sides with many retailers.
Said one: “They have an inherent conflict of interest. They buy from us, but they want to sell advertising to us. When you talk to them, you don’t know what their interest is.”
Added another: “Amazon is more enemy than friend.”
For all the focus on Facebook and Google — and these marketers have their qualms with both, especially around data — Amazon is often somehow overlooked. The vast amount of consumer data it has rightly makes WPP worried about its media ambitions.
While it remains a specialized technology, voice is booming. EMarketer released its first forecasts for the voice assistant market, and it’s grown faster than many analysts expected after Amazon introduced the Echo to a wide audience in 2015.
Here’s what’s happening in voice, in five charts.
Amazon dominates, and cheaper devices are a major reason
It’s easy for consumers to try something new if it’s cheap. While the Amazon Echo ($150) and the Google Home ($129) cost about the same, the Echo Dot, the pint-sized Alexa speaker Amazon introduced last fall, beats them at a cost of just $50. Amazon dominates the speaker market on the whole, with nearly 70 percent of the market, according to eMarketer, and the Dot has been the driving force. “The Echo Dot definitely helped drive Echo sales since the release of the second generation model in October 2016,” said Josh Lowitz, partner and co-founder of Consumer Intelligence Research Partners. “Amazon priced it aggressively.”
Younger audiences are playing with these things the most
Just as millennials and their younger siblings drove adoption of streaming video and augmented reality, millennials are using voice technology far more than other cohorts. According to eMarketer, nearly twice as many millennials (29.6 million) will interact with voice assistants on a monthly basis as their Generation X counterparts (15.3 million), and the usage gap between millennials and older consumer segments is projected to widen over the next three years.
By 2019, the number of millennial voice assistant users is expected to grow over 30 percent to 39 million, while adoption among Gen Xers will grow just 10 percent. Baby boomer use will remain close to flat.
… and they are asking questions
People are already using voice for a variety of purposes. Voice’s wide array of use cases partly stems from its availability not just in speakers like the Dot, but on all the major mobile platforms (iOS, Android and Windows), as well as a growing range of automobiles. While Business Insider Intelligence research suggests that over half of all consumers using voice technology use it to text, that’s something that you can only do on an Echo if you’re an AT&T customer.
Competition among publishers is already stiff
The number of skills available on a platform like Alexa soared past 7,000 in January, according to the voice analytics platform VoiceLabs.
News-focused skills are the second-most popular type of skill on Amazon’s platform, second only to games and trivia bots, VoiceLabs data suggests. Publishers like Hearst are already building centralized voice teams, which will be expected to develop products that can be used on both Alexa and Google, as the platforms begin to diverge from one another.
Users are unforgiving
You only get one chance to make a first impression, and that is especially true in voice. While many publishers are building regular user bases on voice assistants, it’s hard to get people to come back. The below chart, which shows new versus returning users for a Google Home app in late December, shows that just 3 percent of all skills users are active in the second week.
Those numbers may bend as the platforms introduce mechanisms for promoting skills. For now, research suggests any offered skills must be ready for prime time once they hit the platforms’ stores.
As recently as a decade ago, the world was largely dominated by “pipeline” businesses with linear value chains. We would buy products at retail outlets, or possibly their online versions, stay in hotel chains when traveling and hail taxis on the street, and nobody thought much about it.
Clearly, a lot has changed. Today, platforms like Amazon, Airbnb and Uber are dominating those earlier, linear business models. Two new books by prominent economists, Matchmakers and The Platform Revolution, ably explain the dynamics of how platforms like these function as multi-sided markets.
Yet while understanding how platforms work as economic entities is both interesting and important, unless we’re planning on designing a platform ourselves — and very few of us are — it isn’t very helpful. The real value of platforms for most businesses today is that they allow us to access ecosystems of talent, technology and information.
Ecosystems of Talent
In 2001, when Fabio Rosati left his job as Global Chair of Strategic Consulting for Capgemini to lead Elance, the company was a startup in transition. Originally conceived as a platform to match companies with freelance contractors, it was now entering the nascent market for vendor management software.
Under Rosati, the business grew and began making money, yet he saw darker days ahead as competition stiffened. So he agreed with the investors to sell the software business in 2006, although the company would retain its name, a small staff, and some intellectual property to pursue an even bigger opportunity by returning to the platform model.
However, its experience in vendor management software showed that it could do vastly more than make matches between firms and contractors: it could widen and deepen the connections between them by monitoring work, offering training and certification in crucial skill areas and developing algorithms that would lead to more successful engagements.
Today, after merging with its chief rival oDesk, Elance has been rebranded as Upwork. With over 3 million jobs posted annually, worth a total of $1 billion, it is by far the world’s largest freelancer marketplace, and 90% of its corporate customers rehire there.
Ecosystems of Technology
In truth, platforms are nothing new. In medieval times, village markets and fairs served as platforms to facilitate connections between ecosystems of merchants and ecosystems of customers. More recently, enterprise software companies like SAP and Oracle used the database as a platform to control software ecosystems, much like Microsoft used the operating system to dominate PCs.
Yet, Christian Gheorghe, CEO of Tidemark, sees two problems with that model. First, it inhibits innovation. Outside developers can only do what their proprietary partner allows them to. Second, with more powerful open technology like Linux, Hadoop and Spark, proprietary solutions are often at a disadvantage.
“We built Tidemark on top of open technologies from the start,” he told me, “because we believed that it offered much greater functionality and flexibility.” Not only can the firm build solutions on top of those systems, it can also offer other developers APIs so that they can build more applications on top of Tidemark’s, just as Tidemark can do with theirs.
So today’s open technology platforms allow us to access vastly more technological capability than any one organization could provide by itself and do so at far lower cost. Any firm that would try to go it alone simply wouldn’t be able to compete. That’s why today, even Microsoft loves open software.
Ecosystems Of Information
Clearly, Amazon is the 800-pound gorilla of e-commerce. In 2015, it accounted for a full 60% of US online sales growth. That gives it a leg up on traditional retailers because it can leverage its unique access to data about consumer behavior to outperform any other online retailer.
However, BloomReach offers traditional retailers a platform that allows them to compete on a much more even playing field. Because its technology powers a full 20% of web commerce in the US, it can offer its clients insights on far more than their own sales. Although the data is non-personally identifiable, it allows retailers to benefit from insights that boost sales.
Credit bureaus work in a similar fashion. By offering their data on customer transactions to credit bureaus, companies can learn the creditworthiness of potential customers that they have never dealt with before. Consumers, for their part, benefit from much broader access to credit than they would have had otherwise.
We all contribute to data ecosystems every day, such as when we enter a query into a search box. We then use those platforms to gain access to those ecosystems, which makes us vastly more productive.
A Change In The Basis Of Competition
Business theorists have long thought of strategy as a game of chess. By making the right moves, managers could diminish the threat of new market entrants and substitutes and improve bargaining power with buyers and suppliers. That, it has long been thought, was what led to sustainable competitive advantage.
Yet strategy in a networked world is different. Competitive advantage is no longer the sum of all efficiencies, but the sum of all connections. Strategy, therefore, must be focused on deepening and widening networks of talent, technology, and information, and we do that by accessing ecosystems through platforms.
So rather than making strategic moves to undercut new market entrants, many firms are establishing internal venture capital operations and incubators to get in on the action. And instead of merely trying to improve bargaining power with buyers and suppliers, they are partnering with them to co-develop new products and services.
Today, power is shifting from corporations to platforms and the best way to become a dominant player is to become an indispensable partner. Smart strategic moves today are not necessarily the ones that allow us to control value chains, but those that will move us closer to the center of networks.
I spent the past five months developing an AI-powered VoiceOps interface for the enterprise. Since Amazon Lex wasn’t available at the time, my team open-sourced the project. We successfully launched it into production in February; it is now live with several customers, helping them prioritize problem resolution for enterprise apps. More recently, we started beta testing with Lex. Now that Lex is widely available to developers, I wanted to share with others some advice and best practices, based on our experiences.
1. Limit intent scope
One of the foundations of building a voicebot is to be able to infer the user’s intention from a spoken phrase. One of the things we quickly learned was to avoid combining too many related phrases into a single intent. For example, we originally mapped questions like “Are there any problems at the moment?” or “Have any problems affected travel this week?” to the same problem intent, which just overloaded the bot’s logic. A better approach is to keep the scope of each intent as limited as possible.
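A minimal sketch of what splitting an overloaded intent looks like. The intent names and structure below are illustrative (not our production bot, and not a verbatim Lex payload):

```python
# Too broad: one intent trying to absorb several distinct questions,
# which overloads the fulfillment logic behind it.
overloaded_intent = {
    "name": "ProblemIntent",
    "sampleUtterances": [
        "Are there any problems at the moment",
        "Have any problems affected travel this week",
    ],
}

# Better: one narrowly scoped intent per question type, each with its
# own fulfillment logic and, where needed, its own slots.
current_problems_intent = {
    "name": "CurrentProblemsIntent",
    "sampleUtterances": ["Are there any problems at the moment"],
}

travel_impact_intent = {
    "name": "TravelImpactIntent",
    "sampleUtterances": ["Have any problems affected travel this week"],
    "slots": [{"name": "TimeWindow", "slotType": "AMAZON.DATE"}],
}
```

Each narrow intent now maps to exactly one answer path, so the classifier has fewer chances to route a phrase to the wrong logic.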
2. Use natural-ish language
You’re going to be tempted to add as many phrases as possible so that the user can speak naturally without having to worry about a specific required syntax. However, too many similar but different phrases can confuse the classifier and cause unexpected results. We found that it’s much better to have a smaller subset of specific phrases that are sufficiently distinct. Lex may be more restrictive than true natural language, but it makes the results more consistent.
3. Context matters
It might seem that matching a phrase with a specific intent would be straightforward but, without context, it becomes confusing. For example, if a user says “yes” to a question, you need to store enough context in the voicebot to know exactly which question the user is answering.
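Here is a toy sketch of the idea (the handler and question names are hypothetical, not Lex’s actual API). Lex lets you return session attributes, a key/value map echoed back on the user’s next request, which is a natural place to store the context needed to interpret a bare “yes”:

```python
def handle_utterance(utterance, session_attributes):
    """Return (reply, new_session_attributes) for one conversational turn."""
    text = utterance.lower().strip()
    if text == "restart the service":
        # Store which question we asked so the next turn can resolve "yes".
        return ("Are you sure you want to restart?",
                {"lastQuestion": "ConfirmRestartService"})
    if text == "yes":
        if session_attributes.get("lastQuestion") == "ConfirmRestartService":
            return ("Restarting the service now.", {})
        # No stored context: a bare "yes" is ambiguous, so re-prompt.
        return ("Sorry, yes to what?", session_attributes)
    return ("I didn't catch that.", session_attributes)
```

The second turn only makes sense because the first turn wrote `lastQuestion` into the attributes it handed back.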
4. Accommodate accents and special words
Accents and different pronunciations can be tricky for voicebots. In my experience, Lex works best with a Midwest American accent! But if you know your application is going to be used by people with a range of accents, it all comes down to training the system. You have to accurately train the voicebot to learn each very specific phrase and pronunciation so that the system knows how to match it with the desired intent. The same goes for specific words. For example, our company’s name, Dynatrace, is not in Lex’s dictionary. It hears “diner” or the name “Dinah,” so we had to train it to recognize not just the word, but also the specific actions and intents associated with its use when spoken.
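A hedged sketch of that training idea: Lex custom slot types accept enumeration values with synonyms that resolve to a canonical value. The misheard forms below (“diner,” “Dinah”) come from our experience; the exact structure is illustrative rather than a verbatim Lex payload:

```python
company_slot_type = {
    "name": "CompanyName",
    "enumerationValues": [
        # Map the transcriptions the recognizer actually produces back
        # to the canonical value the intent logic expects.
        {"value": "Dynatrace", "synonyms": ["diner", "dinah", "dyna trace"]},
    ],
}

def resolve_slot_value(heard, slot_type):
    """Map a possibly misheard transcription to its canonical slot value."""
    heard = heard.lower().strip()
    for entry in slot_type["enumerationValues"]:
        if heard == entry["value"].lower() or heard in entry["synonyms"]:
            return entry["value"]
    return None  # unrecognized: let the bot re-prompt
```

The same pattern extends to accent variants: each distinct pronunciation you observe becomes another synonym pointing at the one canonical value.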
5. Be extensible
The key to being successful with voicebot development is to think broadly and laterally about all its potential use cases. Don’t limit yourself.
6. Prepare to fail fast and often
There’s plenty of documentation that comes with Lex, but it can be technical and overwhelming — so much so that you might not want to experiment. But the good news is that, despite the complexity of building voicebots, Lex makes it simple to create test apps and quickly discover what works and what doesn’t so you can learn as you go.
7. Look back to the future
Most parsing services such as Amazon Lex will assume a future intent. For instance, if a user says “What happened on Thursday?” she is obviously asking about something that happened in the past. However, Lex doesn’t inherently understand that. This is more of a challenge with the current state of natural language processing (NLP) and not a Lex-specific problem. As companies like Amazon invest in NLP, I’m hopeful we’ll be able to distinguish timeframes with more clarity. But in the meantime, making tense super-specific is something developers will need to factor in.
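One small workaround we can sketch: since a forward-looking parser resolves a weekday name to the upcoming date, a past-tense question like “What happened on Thursday?” can be post-processed to the most recent past occurrence instead. This is an illustrative helper, not part of Lex:

```python
from datetime import date, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def most_recent_past(day_name, today):
    """Return the latest date strictly before `today` that falls on `day_name`."""
    target = WEEKDAYS.index(day_name.lower())
    delta = (today.weekday() - target) % 7
    if delta == 0:
        delta = 7  # asked on a Thursday, "Thursday" means last week
    return today - timedelta(days=delta)
```

For example, asked on Monday, May 8, 2017, “Thursday” resolves to May 4, 2017, four days earlier, rather than the upcoming May 11.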
Michael Beemer is a DevOps engineer at Dynatrace and led the VoiceOps development team for Davis, Dynatrace’s AI-powered digital performance virtual assistant.
But there were two talks in particular that I thought Nieman Lab readers might be interested in seeing, from America’s two top newspapers, The New York Times and The Washington Post. Both Andrew Phelps, an editor on the Times’ Story[X] newsroom R&D team, and Joey Marburger, the Post’s director of product, spoke about how they were using bots in their news operations.
Today, we’re publishing transcripts (lightly edited for clarity) of their two talks. Below is Joey’s talk; Andrew’s is over here.
Where basically, like, robots aren’t supposed to kill you — until they try to kill you. Hopefully, conversational journalism won’t ever try to kill you.
So I developed basically three quick laws. They’re pretty spot on to the Laws of Robotics: we don’t want the bot to spread false information, and it should follow what a human journalist tells it to do, unless the human tells it to spread false information.
We’ve done a lot of experiments with bots. And we’re very excited about them, because it’s this great, simple experience, and the technology is getting so much better for it: AI’s getting better, big data’s more accessible. So we knew we wanted to try a bunch of things and see what’s out there, because it’s kind of hard to have a ton of successes when you’re on the bleeding edge.
I’m going to go over three bots, which are kind of our favorites, but we actually have almost 100 bots. Like 99 percent of them are internal, though.
So this is our most successful reader-facing bot: It’s called the Feels Bot. About 30 days prior to the U.S. presidential election, if you opted into it on our politics Facebook page, we would message you in the evening and ask you how you felt about the election. And it was just five emoji responses, from super angry to happy — and we would curate all that. Then in the morning, we would show you a graph of how people were feeling.
We knew that we had to find a cadence that alerted people without annoying them, because we had already built a bot like that. It was just a general news bot, which didn’t do very well — which we figured would happen. Even though there are a billion people on Facebook Messenger, I don’t think anyone’s built a bot that has that many users.
So this was really fun to work on — and it was curated by a human. It had a low user count — like less than 10,000 people. But the engagement — meaning people actually answered the question every day for 30 days — was greater than 65 percent, because it’s simple. It asks you a simple question; it was a very charged election. And, you know, if you ask people how they feel, turns out they’ll tell you, which is great.
So we’d generate these social cards from it and highlight a few. Some of the best responses we’d share on Twitter or put up on our site. We generated these little graphics out of it that were really fun, and we did this every day for 30 days, which is a great exercise. Empathy is a powerful driver in conversation.
Another thing, which we call our Virality Oracle, is a Slack bot in a Slack channel — a public channel inside the Post — that is powered by a really amazing algorithm from our data science team. From the second that a story is published, it starts monitoring it, and it knows within the first 30 minutes of publishing if it’s basically going to be viral. (Really, “popular” — “viral” is kind of a loaded word.) And it notifies the channel, so we can maybe go in and add something to the story, or start writing off of it a little bit. We get about three to five stories like this in a day. And then it also models out a 24-hour traffic window, and the bot emails us a digest, so we can see the lifecycle of stories. This is really a bot as a tool — a service bot or utility bot. It’s very handy.
So this is actually the data behind the bot, which I’m not going to go into in super detail. Our prediction model takes in all these data points that the bot is just eating and gobbling up. We ran it for a long time, almost a year, for the machine learning to get really accurate. And we ran it on every story published — about 300 stories a day.
And we found we’d add in a new metric and it would get a little better. And now we’re at about 80 percent accuracy.
This is everyone’s favorite — the MartyBot. So Marty Baron is our editor, and this is tied into our publishing scheduling system called WebSked.
Whenever a reporter starts a story, they actually put in when they plan to publish it. So what is the deadline — which can always be changed. So if you’re behind, it will tell you: Hey, you’re either really close to deadline or you missed your deadline. It personally messages you — it doesn’t, like, shame in a channel or anything. And it’s really funny when it messages Marty — which I think has maybe happened once.
So this is a pretty cool thing too, called Heliograf, which is another way to think of a bot. It’s not a conversational bot, but it takes in data points from a feed and can basically craft very simple short stories based on templates. Anybody ever play Mad Libs? You know, put in a noun, pick an adjective, whatever? This is kind of what that does.
So we used this for the Olympics and for elections. We published a story on every single Olympic event, because of Heliograf. And then for elections, we posted a story on every single race in the U.S. on Election Night, and generated newsletters, generated tweets. We did all sorts of fun stuff from it. So it was a bot that was helping us do better journalism.
Audio bots are super, super huge right now. Amazon doesn’t call Alexa a bot, even though pieces of it inside are a bot. They like to refer to it as like it’s an operating system, as audio AI.
Our politics Flash Briefing was one of our fastest-growing products last year. We caught the wave just right — there’s a reason that the Echo is out of stock on Amazon all the time. They’re actually outselling a lot of their other hardware. Jeff Bezos, our owner, is personally driving this road map, which also gives you an indicator of how successful it is. And it’s super fun.
But here’s what we’re thinking about bots and how they play into your day-to-day life and your habits: bots can do very simple tasks. A bot shouldn’t do everything, because then you’ve got a lot of cognitive overhead; it’s a lot of work. Sometimes you don’t know what to ask a bot, other than, like, “What’s the news?” So we’re thinking about it this way: the future’s here. You can build these things — and actually now there are a lot of tools, so you can build them pretty easily. Amazon has a tool called Lex, which is point-and-click: you can build a pretty robust bot without any code. So the future is here, it’s just not evenly distributed — which is a quote from William Gibson, another science fiction writer. And I think this is super true for bots. Bots aren’t totally new — they’re just getting more accessible. It’s almost becoming a household name.
So we think bots can fill all these spaces between platforms — like, on different platforms, but also they fill in these gaps a little bit between things. A bot could notify you to catch you up on where you left off in a story you were listening to on the train into work. You sit down at your desktop, and it’s like: “Hey buddy, here’s where you were in that story.” It fills that space a little bit. This is what we’re starting to work on a lot right now; we’re calling it a handoff bot.
I remember bringing this up in the newsroom — nobody really understood it. “Why would we do this?” Especially when you do the first one and it gets like five people that use it — you’re like, “We’ve got to keep doing it!” And it turns out that you learn a lot from experimenting. When things are really simple and really hard, it’s very attractive to a designer and a product person. So we’ll be iterating on bots for a long time to come.
Photo illustration based on robot photo by Michael Dain used under a Creative Commons license.
Amazon CEO’s annual letter to his shareholders is a must-read: customer focus, decision-making, the importance of writing down important things… Here are my takeaways from Jeff’s latest.
Whatever we think of its founder and CEO, Amazon remains a remarkable example of great management. Since its 1994 start, the company has enjoyed steady growth, relentlessly conquering new markets and sectors, coupled with the exceptional resilience it showed weathering two market crashes (2000 and 2008). In addition, Bezos has demonstrated a consistent ability to convince his board and shareholders to let expansion take precedence over profits and dividends. (No one can complain: a thousand dollars invested in Amazon’s 1997 IPO is now worth more than half a million, a 500x multiple.)
This didn’t happen without damage. By some measures, Amazon isn’t an enviable place to work and the pressure it applies to its suppliers rivals the iron fist of Walmart’s purchasing department. All things considered, Amazon’s level of corporate toxicity remains reasonable compared to Uber, as an example.
Jeff Bezos is also able to project an ultra-long term vision with his space exploration project for which he personally invests about a billion dollars per year.
Closer to our concerns, he has boosted a respected but doomed news institution — The Washington Post — thanks to a combined investment in journalistic excellence and in technology, two areas left fallow by most publishers.
That is why I thought Bezos’ written addresses to his shareholders (here) are worth some exegesis.
Let’s start with last week’s letter. (Emphasis mine, and while quotes are lifted from the original documents, some paragraphs have been rearranged for clarity and brevity.)
Bezos starts his 2016 missive with a question asked by staffers at all-hands meetings:
“Jeff, what does Day 2 look like? (…) [Bezos reply:] Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.”
Then he enumerates the three obsessions that make Amazon what it is today:
True Customer Obsession
There are many advantages to a customer-centric approach, but here’s the big one: customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and your desire to delight customers will drive you to invent on their behalf. (…)
[Y]ou, the product or service owner, must understand the customer, have a vision, and love the offering. Then, beta testing and research can help you find your blind spots. A remarkable customer experience starts with heart, intuition, curiosity, play, guts, taste. You won’t find any of it in a survey. (…)
Good inventors and designers deeply understand their customer. They spend tremendous energy developing that intuition. They study and understand many anecdotes rather than only the averages you’ll find on surveys. They live with the design.
A few things here. Legacy media are plagued by poor customer consideration. After several decades in this business, I see little or no improvement in that area. This is deeply rooted in the persistent superiority complex of large newsrooms — which sounds weird for a population so hard hit by an economic downturn. The journalistic profession’s failure to properly manage the business side has led to a backlash: owners have turned to MBAs who, unfortunately, focused their energy on the short term. And, as it happens in ailing sectors that cannot pay much, you rarely get the best people.
This shift also left by the wayside the notion of product creation and management. That notion fell through the cracks left open by journalists convinced (until recently) that their “mission” had nothing to do with any form of customer-oriented marketing, and by spreadsheet jockeys who felt that Malthusianism (“doing more with less”) was the only way, cutting everything to the bone and beyond. The product was the main casualty of this process.
Hence the implicit first lesson from Jeff Bezos: put the product and those who will use it at the center of your operations. Hire, train and transform the mentalities toward that goal. Media needs product people. They should be the stars of any company as much as great bylines.
Next, stay ahead of customer demands:
No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it, and I could give you many such examples.
This tune is a variation of Steve Jobs who said once that no customer figured out they wanted the iPod. The inability to anticipate customers’ unexpressed desires is the collateral damage of the management failure stated above. Don’t expect to innovate with inflated egos in the newsroom and with accountants in the C-suite.
About processes in large corporations:
As companies get larger and more complex, there’s a tendency to manage to proxies. (…) A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us?
On this, everyone will find tons of examples in their organization. Again, the reliance on proxies is a direct consequence of a poor distribution of responsibilities. In a previous job, I tried to foster the Directly Responsible Individual principle, borrowed from Steve Jobs’ doctrine of product development. I tried to apply it from the smallest project (a new feature in a mobile app) to larger endeavors involving multiple high-ranking managers. I succeeded with the former and failed with the latter (after a certain age and above a certain level, people tend to flee the risks associated with responsibility).
Embrace External Trends
The outside world can push you into Day 2 if you won’t or can’t embrace powerful trends quickly. If you fight them, you’re probably fighting the future. Embrace them and you have a tailwind.
… While reading this, my thoughts wander to the people in the industry (managers, editors, owners) I saw fighting tooth and nail to preserve old models, losing years if not a decade, as opposed to the small group who decided early on to turn the ship around and take the tailwind… The trends Jeff Bezos points to today are of a different nature:
These big trends are not that hard to spot (they get talked and written about a lot), but they can be strangely hard for large organizations to embrace. We’re in the middle of an obvious one right now: machine learning and artificial intelligence. (…)
But much of what we do with machine learning happens beneath the surface. Machine learning drives our algorithms for demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations, and much more. Though less visible, much of the impact of machine learning will be of this type — quietly but meaningfully improving core operations.
On this one, the media industry — including, for a large part, newcomers — has my sympathy more than my criticism. Embracing machine learning and AI will require huge investments which, I’m afraid, only large tech companies will be able to pony up. I’m not talking about having a couple of geeks in residence tinkering with a convolutional neural network; I’m referring to the journey toward deploying systems critical to the news industry, to name but a few:
Real-time fact-checking based on large and complex datasets
Individualization of content matching user profiles and habits
Adjustment to users’ consumption modes (home, office, commute)
Ability to predict which reader will drop her subscription or, to the contrary, which one is about to convert
Sophisticated recommendation engines (great Amazon and Netflix successes, and media industry abysmal failure)
Profile-based search engines and curation systems.
My guess is the future will be owned by the Facebooks and the Googles for the most part, and corporations like Cambridge Analytica for a lesser part. I’m referring to the East Coast consulting firm that played a crucial role in Donald Trump’s victory with its ability to fine-tune political messages thanks to about 5,000 data points for each US voter. By comparison, Facebook offers about one hundred data points (listed here), while most publishers only have a few dozen of their own. Several people I talk to here at Stanford believe the next leap in political campaigning will be the ability to tailor one bespoke message to a single individual, and to do so in real time.
Can the media industry integrate such decisive trends? Realistically, not many players will be able to catch the wave beyond superficial gestures.
First, only a few insiders actually grasp the issue (sometimes, I feel I’m speaking Urdu when I venture to raise the subject with media execs…) Second, the manpower required to build such systems will remain scarce and therefore expensive for a while. Unless the industry decides to organize itself around full-fledged cooperation. (It could be a great endeavor for trade organizations such as Digital Content Next, or INMA…)
Lastly, Jeff Bezos on decision-making process:
High-Velocity Decision Making
Day 2 companies make high-quality decisions, but they make high-quality decisions slowly. To keep the energy and dynamism of Day 1, you have to somehow make high-quality, high-velocity decisions. Easy for start-ups and very challenging for large organizations. Speed matters in business — plus a high-velocity decision making environment is more fun too. We don’t know all the answers, but here are some thoughts.
First, never use a one-size-fits-all decision-making process. Many decisions are reversible, two-way doors. Those decisions can use a light-weight process. For those, so what if you’re wrong? I wrote about this in more detail in last year’s letter.
Here is the relevant excerpt from the 2015 letter:
Some decisions are consequential and irreversible or nearly irreversible — one-way doors — and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions.

But most decisions aren’t like that — they are changeable, reversible — they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgment individuals or small groups.

As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention.
Media organizations often like to see every decision as Type 1. Again, aversion to risk and a propensity to single out failure are the main culprits. By comparison, most newcomers have built their models on agility and systems that reward risk. On this, Bezos adds in 2015:
Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there. Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a ten percent chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of ten.
This whole decision-making process is core to the “Bezos way”. Here is a summary of two other principles:
First, we usually want too much information to make up our mind on something:
[M]ost decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. (…)
Second element: knowing how to disagree but support a proposal; Bezos calls it “disagree and commit.” To illustrate it, he uses a telling example:
I disagree and commit all the time. We recently greenlit a particular Amazon Studios original. I told the team my view: debatable whether it would be interesting enough, complicated to produce, the business terms aren’t that good, and we have lots of other opportunities. They had a completely different opinion and wanted to go ahead. I wrote back right away with “I disagree and commit and hope it becomes the most watched thing we’ve ever made.” Consider how much slower this decision cycle would have been if the team had actually had to convince me rather than simply get my commitment.

Bezos, who can’t be suspected of succumbing too easily to mellow compromise, justifies his view by pointing to the core competencies of the Amazon Studios team:

And given that this team has already brought home 11 Emmys, 6 Golden Globes, and 3 Oscars, I’m just glad they let me in the room at all!
To recall his vision, and its remarkable consistency, Jeff Bezos likes to attach his original 1997 letter to each letter to his shareholders.
Above all, Amazon’s founder and CEO likes to show the importance of writing down the key elements of a strategy or the components of a decision-making process. He’s known to start meetings with his senior staff by requiring them to read detailed single-spaced memos in silence, rather than enduring yet another slide presentation. Too many CEOs and execs tend to forget the importance of genuine writing.