Amazon Alexa skills to accept payments

Developers and businesses making skills for Amazon’s Alexa will soon be able to accept Amazon Pay, letting customers make purchases directly within voice apps from the Alexa Skills Store. The news was announced today during the Alexa State of the Union at AWS re:Invent in Las Vegas. Other Alexa news shared today includes plans to bring Alexa to Australia and New Zealand in early 2018 and the addition of $100 million to the Alexa Fund for international investment. Continue reading “Amazon Alexa skills to accept payments”

Google gives developers more tools to make better voice apps


Google Assistant received some major upgrades in recent days, and today Google Assistant product manager Brad Abrams announced a series of changes to help developers make voice apps that interact with Google’s AI assistant, including ways to give them more expressive voices and send push notifications, as well as new subcategories for the Assistant’s App Directory.

One of the coolest new features coming to Google Assistant is something called Implicit Discovery. Instead of saying “OK Google, talk to Ray’s Auto Shop app” and then asking to schedule an appointment, Implicit Discovery will let you say “Book an appointment to fix my car” and have the Assistant offer an app recommendation. The same should apply if you say “I need to book a flight” to summon something like the Kayak app, or “I need a ride” to interact with Uber or Lyft.

Implicit Discovery may seem simple, but it goes after one of the biggest challenges for AI assistants: without a visual interface, how does a user figure out how to get things done, or remember the names of favorite or useful apps? A similar capability is already available in Amazon’s Alexa.
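Under the hood, implicit invocation of this sort is driven by the example phrases a developer registers for an action. As a rough illustration only, here is a Python dict mirroring the general shape of an Actions SDK action package; every name, intent, and phrase below is hypothetical rather than Google’s exact schema:

```python
# Illustrative sketch only: a Python dict mirroring the general shape of
# an Actions SDK action package (action.json). All names, intents, and
# phrases are hypothetical; consult the Actions on Google docs for the
# authoritative schema.
action_package = {
    "actions": [
        {
            "name": "book_repair",  # hypothetical action name
            "fulfillment": {"conversationName": "rays_auto_shop"},
            "intent": {
                "name": "app.intents.BOOK_REPAIR",
                "trigger": {
                    # Phrases like these are what would let the Assistant
                    # recommend the app implicitly, without the user
                    # naming it.
                    "queryPatterns": [
                        "book an appointment to fix my car",
                        "schedule a car repair",
                    ]
                },
            },
        }
    ]
}
```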

Another feature added today to improve discovery of third-party apps is subcategories in the App Directory, so instead of just being listed in the Food and Drink category, apps can be slotted into subcategories like “Order Food” and “View a Menu.”

The App Directory was first introduced at the I/O developer conference this spring.

Other changes on the way for the App Directory include badges to indicate if a voice app is family friendly and support for third-party apps in languages beyond English. Until today, Google’s voice apps were only available for English speakers in the United States, Canada, Great Britain, and Australia. Voice apps will soon be available in Portuguese in Brazil, English in India, and Spanish in the U.S., Mexico, and Spain.

Google announced today that developers in the United Kingdom can begin to make apps that can carry out transactions, a feature that until now was exclusive to the U.S. The Google Payment API expanded to include Google Assistant users in the U.S. in May.

A series of new APIs has also been rolled out, including one that gives apps the ability to send push notifications, first on phones and, in the future, via voice or audio alerts through a Google Home smart speaker. Alexa notifications first launched in September.

Also launching today are an API that links an account to an app for personalized results and another that lets developers transfer a conversation from a smart speaker to a smartphone.

Beyond push notifications, voice apps can now deliver daily updates or notifications about certain kinds of content.

The Actions on Google platform for the creation of voice apps by third-party developers first became available roughly a year ago, in December 2016. Since then, hundreds of voice apps have been made available to do a range of things, from playing ambient sounds like crashing waves to offering local deals for a pizza from Domino’s.

It’s been a pretty busy week for Google’s intelligent assistant. On Monday, Google announced that Home speakers can now be used as an intercom system. The Google Broadcast feature, first announced at the Made by Google hardware event last month, allows you to deliver a message through all your Google Home devices. The app also gained the ability to deliver music and movie recommendations from streaming services and control sound by adjusting things like bass and treble, a clear plus for prospective owners of Google Home Max, which is scheduled to hit store shelves next month.

Taken together, the announcements made today will give voice apps the ability to be a much more vocal, vital part of the Google Assistant experience, and continue to evolve the ecosystem surrounding Google’s AI assistant.

This time last year, Google Assistant was only available in the Allo chat app. Today you can speak to Google on Android TVs, three Google Home smart speakers, Android smartphones, the Pixelbook, and Pixel Buds, the first headphones made by Google, which began to roll out last week. Support for Google Assistant on Android tablets is also reportedly on the way.

Alexa and Hearst Team Up on ‘Voice-First’ Brand

What if publishers started programming the new generation of audio assistants as a kind of hybrid of daily news, on-demand radio/podcasting, and information resource? That’s the sort of experiment Hearst recently launched for the Amazon Echo with its “My Beauty Chat” voice-first brand. Once the skill is enabled, asking Alexa to open the app offers you a choice of hearing a morning or afternoon 5-10 minute beauty program or a tip of the day. With launch support from sole sponsor L’Oreal, Hearst is programming this project aggressively, with two daily shows (one available before 4 p.m. and the other after) as well as a daily beauty tip.

Continue reading “Alexa and Hearst Team Up on ‘Voice-First’ Brand”

Voice interfaces will revolutionize patient engagement


The healthcare industry is abuzz over consumer engagement and empowerment, spurred by a strong belief that when patients become more engaged in their own care, better outcomes and reduced costs will result.

Nevertheless, from the perspective of many patients, navigating the healthcare ecosystem is anything but easy.

Consider the familiar use case of booking a doctor’s appointment. The vast majority of appointments are still scheduled by phone. Booking the appointment takes on average ten minutes, and the patient can be on hold for nearly half of that time.

These are the kinds of inefficiencies that compound one another across the healthcare system, resulting in discouraged patients who aren’t optimally engaged with their care. For example, the system’s outdated infrastructure and engagement mechanisms also contribute to last-minute cancellations and appointment no-shows—challenges to operational efficiency that cost U.S. providers alone as much as $150 billion annually.

Similarly, long waits for appointments and the convoluted process of finding a doctor are among the biggest aggravations for U.S. patients seeking care. A recent report by healthcare consulting firm Merritt Hawkins found that appointment wait times in large U.S. cities have increased 30 percent since 2014.

It’s time for this to change. Many healthcare providers are beginning to modernize, but moving from phone systems to online scheduling, though important, is only the tip of the iceberg. Thanks to new platforms and improved approaches to integration of electronic medical records (EMR), the potential for rapid transformation has arguably never been greater.

This transformation will take many shapes—but one particularly excites me: voice. While scheduling and keeping a doctor’s appointment might be challenging today, it’s not far-fetched to envision a near future in which finding a doctor may be as simple as telling your favorite voice-controlled digital assistant, “Find me a dermatologist within 15 miles of my office who has morning availability in the next two weeks and schedule me an appointment.”

How voice has evolved in healthcare: The rise of technology platforms

Voice technologies have been generating excitement in the healthcare space for years. Because doctors can speak more quickly than they can type or write, for example, the industry has been tantalized by the promise of natural language processing services that translate spoken doctors’ notes into electronic text.

No single company or healthcare provider holds all the keys to this revolution. Rather, it hinges on a variety of players leveraging technology platforms to create ecosystems of patient care. These ecosystems are possible because, in contrast to even a few years ago, it is far more feasible to make software interoperate, and thus to combine software into richer services.

For example, developers can leverage application programming interfaces (APIs) that provide access to natural language processing, image analysis, and other services, enabling them to build these capabilities into their apps without creating the underlying machine learning infrastructure.
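As a minimal sketch of what that looks like in practice, the snippet below calls Google’s Cloud Natural Language REST endpoint with Python’s requests library to extract entities from a patient request; the endpoint is public, but the API key is a placeholder, and production code would need proper authentication and error handling:

```python
import requests

# Minimal sketch: entity analysis via the Cloud Natural Language REST API.
# API_KEY is a placeholder; a production app would use a service account
# and robust error handling rather than a bare key.
API_KEY = "YOUR_API_KEY"
URL = f"https://language.googleapis.com/v1/documents:analyzeEntities?key={API_KEY}"

payload = {
    "document": {
        "type": "PLAIN_TEXT",
        "content": "Find me a dermatologist within 15 miles of my office",
    }
}

resp = requests.post(URL, json=payload)
resp.raise_for_status()
for entity in resp.json().get("entities", []):
    print(entity["name"], entity["type"])
```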

These apps can also leverage other APIs to connect disparate systems, data, and applications: anything from a simple microservice that surfaces inventory for medical supplies to FHIR-compliant APIs that allow access to patient data in new, more useful contexts. Connecting these modern interfaces to EMR systems, which generally do not easily support modern interoperability, may be one of the biggest obstacles. Well over a quarter-million health apps exist, but only a fraction of these can connect to provider data. If voice-enabled health apps follow the same course, flooding the market without an approach to EMR interoperability, it could undermine the potential of these voice experiences to improve care.

Fortunately, as more providers move from inflexible, aging software development techniques such as SOA to modern API-first approaches and adopt the FHIR standard, these obstacles should diminish. FHIR APIs allow developers to focus on predictable programming interfaces instead of underlying systems complexity, empowering providers to replace many strained doctor-patient interactions with new paradigms.
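To make that concrete, here is a minimal sketch of the scheduling flow from the introduction expressed as standard FHIR REST calls; the server base URL and resource IDs are hypothetical, but Slot search and Appointment creation are part of the FHIR specification:

```python
import requests

# Hypothetical FHIR server; Slot and Appointment are standard FHIR
# resources, but the base URL, IDs, and parameters are illustrative only.
FHIR_BASE = "https://fhir.example-hospital.org/r4"
HEADERS = {
    "Accept": "application/fhir+json",
    "Content-Type": "application/fhir+json",
}

# 1. Search for free appointment slots, the kind a voice assistant
#    could read back to the patient.
bundle = requests.get(
    f"{FHIR_BASE}/Slot", params={"status": "free"}, headers=HEADERS
).json()
slot = bundle["entry"][0]["resource"]  # assumes at least one free slot

# 2. Book an appointment against the chosen slot for a given patient.
appointment = {
    "resourceType": "Appointment",
    "status": "proposed",
    "slot": [{"reference": f"Slot/{slot['id']}"}],
    "start": slot["start"],
    "end": slot["end"],
    "participant": [
        {"actor": {"reference": "Patient/example"}, "status": "needs-action"}
    ],
}
resp = requests.post(f"{FHIR_BASE}/Appointment", json=appointment, headers=HEADERS)
resp.raise_for_status()
print("Created appointment:", resp.json().get("id"))
```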

As it becomes simpler for developers to work with EMR systems alongside voice interfaces and other modern platforms, the breadth and depth of new healthcare services could dramatically increase. Because developers can work with widely adopted voice assistants such as Google Assistant, Apple’s Siri, and Amazon’s Alexa, these new services won’t need to be confined to standalone apps. Instead, they can seamlessly integrate care and healthier activity into a user’s day-to-day routines.

Many of us already talk to our devices when we want information on things like traffic conditions, movie times, and weather forecasts. Likewise, many of us are already accustomed to taking advice from our digital assistants, such as when they point out conflicts on our calendars or advise us to leave in order to make it to a meeting on time. It’s natural these interfaces will expand to include new approaches to care: encouraging patients to exercise, reminding them to take medications, accelerating diagnoses by making medical records more digestible and complete, facilitating easier scheduling, etc.

Indeed, research firm Gartner’s recent “Top 10 Strategic Technology Trends for 2018” speaks to the potential of voice and other conversational interaction models: “These platforms will continue to evolve to even more complex actions, such as collecting oral testimony from crime witnesses and acting on that information by creating a sketch of the suspect’s head based on the testimony.”

As voice and other interfaces continue to evolve from scripted answers to more sophisticated understandings of user intent and more extemporaneous, context-aware ways of providing service, the nature of daily routines will change. For example, whereas many patients today feel anxiety over finding the time and focus to pursue better care, in the near future, this stress will likely diminish as more healthcare capabilities are built into platforms and interaction models consumers already use.

What comes next?

It’s clear that providers feel the urgency to improve patient engagement and operational efficiency. Consulting firm Accenture, for example, predicts that by the end of 2019, two-thirds of U.S. health systems will offer self-service digital scheduling, producing $3.2 billion in value. That’s a start, but there’s much more to do.

More capabilities will need to be developed and made available via productized APIs, platforms will need to continue to grow and evolve, and providers must adopt operational approaches that allow them to innovate at a breakneck pace while still complying with safety and privacy regulations.

But even though work remains, voice platforms and new approaches to IT architecture are already changing how patients and doctors interact. As more interoperability challenges are overcome, the opportunities for voice to be a meaningful healthcare interface are remarkable.

For the biggest changes, the question likely isn’t if they will happen but how quickly.

Aashima Gupta is the global head of healthcare solutions for Google Cloud Platform where she spearheads healthcare solutions for Google Cloud.

BBC is launching an interactive radio show for Echo


The future of entertainment is here. The BBC, in collaboration with Rosina Sound, is working on an interactive radio play for artificial intelligence-enabled home assistants like Amazon’s Echo and Google Home.

The production will be the first of its kind to use this technology in this way. The BBC plans to release the futuristic, high-tech play by the end of the year.

The play

The story, called the Inspection Chamber, will work similarly to choose-your-own-adventure books and games, in which users can influence the direction of the story by the choices they make.

The creators of the Inspection Chamber, though, are seeking to take that idea a bit further and make listeners really feel like they’re in the story.

The story’s narrator will ask you, the listener, questions throughout the story. Your answers to those questions will change the outcome of the narrative.

The questions are designed so the listener doesn’t have to step out of the story to consider their decision, but instead feels like they’re a character in the story. It’s meant to feel like you’re interacting with the other characters in the play.

The creators of the play said they took inspiration from games like The Stanley Parable and Papa Sangre, and authors such as Franz Kafka and Douglas Adams. The story became, in the creators’ own words, “a comedy science-fiction audio drama.”

The technology

The sci-fi elements fit well with the medium through which the story will be presented. The show’s creators say they’ve built a “story engine” that lets the story work on a variety of different voice devices.

First, the Inspection Chamber will come out on Amazon Echo and Google Home, but the BBC is looking into other devices, like Apple’s HomePod and Microsoft & Harman Kardon’s Invoke speaker, as well.

The project comes out of a wider BBC initiative called Talking With Machines that is exploring spoken interfaces. It’s looking at ways to share content through these technologies and improve interactive audio interfaces. It also aims to create a platform for these interfaces that works across devices, instead of relying on one particular device.

Merging art and technology

In some ways, the plot of the Inspection Chamber had to conform to the limitations of the technology used to share it. For example, Amazon’s Alexa requires users to speak every 90 seconds, and these devices only understand a limited number of phrases. The story’s writers had to come up with a way to incorporate these phrases and time requirements into the story, without making it feel forced.
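To see why, it helps to look at how an Alexa skill declares what it can understand. The sketch below is a Python dict mirroring the general shape of an Alexa interaction model, with hypothetical intent names and utterances; the point is that every phrase a listener might say has to be enumerated before the story ships:

```python
# Illustrative only: a Python dict mirroring the general shape of an
# Alexa skill interaction model. Intent names and utterances are
# hypothetical; what matters is that every phrase a listener might say
# must be enumerated in advance.
interaction_model = {
    "interactionModel": {
        "languageModel": {
            "invocationName": "inspection chamber",
            "intents": [
                {
                    "name": "AnswerYesIntent",
                    "samples": ["yes", "yeah", "I suppose so"],
                },
                {
                    "name": "AnswerNoIntent",
                    "samples": ["no", "nope", "certainly not"],
                },
                # Anything else the writers want the listener to be able
                # to say has to appear here before the story ships.
            ],
        }
    }
}
```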

The use of this technology to tell a story may be experimental now, but as the technology improves, this type of content will likely become easier to create, with fewer limitations on creativity. This raises interesting questions about the future of creative fields and technology. Rather than shy away from tech in favor of the traditional, the BBC is going full force into it.

Physical books and theater productions may never go completely out of style due to their many virtues, but using new technologies creates new possibilities with a plot, user experience, and more.

Kayla Matthews is a technology writer interested in AI, chatbots, and tech news. She writes for VentureBeat, MakeUseOf, The Week, and TechnoBuffalo.

Voice technology will change your relationship with customers


Artificial intelligence is at the root of several entirely new platforms on which customers and companies can interact. Voice, augmented reality, and chatbots are powered by natural language processing, computer vision, and machine learning algorithms. Each technology offers considerable opportunities for companies to deliver a more personal, useful, and relevant service to their customers.

Conversational interfaces are already here

Voice-controlled user interfaces have been around since 1952 when Bell Labs produced Audrey, a machine that could understand spoken numbers. But the current wave of voice technology was started by Amazon just a couple of years ago.

In 2015, Amazon launched the Echo, which introduced its AI-powered voice service, Alexa. At the time, the general response was one of confusion and frustration. As Farhad Manjoo, The New York Times’ tech columnist, wrote at the time, “If Alexa were a human assistant, you’d fire her, if not have her committed.”

But in the past two years, a lot has changed. Today, the Echo is recognized as a product that is leading a major shift in how humans engage with technology — and, by extension, how customers engage with brands.

It’s taken more than six decades, but increasing processing power and advances in AI now have technology giants locked in an arms race to create the dominant voice-based assistant. Key areas of focus include machine learning, self-improving algorithms, and speech recognition and synthesis for developing conversational voice interfaces.

Voice can deliver better customer experiences

As the technology improves, the opportunity for companies to use voice to improve customer relationships grows.

Via an Alexa skill (Amazon’s term for an Alexa app), home cooks can ask for advice from Campbell’s Soup, shoppers can pay their Capital One credit card bills, and BMW drivers can check fuel levels remotely. Alexa, of course, is not alone. Apple Siri, Microsoft Cortana, Google Assistant, and other voice-enabled platforms are vying for attention.

For example, Xfinity’s latest TV remote is voice-enabled; Samsung Bixby controls a phone with voice commands; and Ikea is considering integrating voice-enabled AI services into its furniture.

Customer-focused companies must consider three areas in which voice can have an impact on their relationship with their customers.

  • More personality leads to deeper relationships: By its very nature, voice technology allows brands to move from text-based interactions with customers to something that feels more human. However, there is a high bar to meet. If customers feel they’re engaging with something closer to a “real person,” their expectations will change. If a conversational voice assistant makes a mistake or loses the context, it will be important for human backup to intercede. In addition, injecting an ambient conversational intelligence into people’s lives and homes will require deeper levels of trust that an individual’s privacy won’t be violated.
  • More engagement leads to more data, which gives companies further opportunities to understand their customers: Customers now expect omnichannel service, meaning they take for granted that companies will interact with and respond to them across any and all channels, including voice. From a company’s perspective, those voice interfaces can provide a rich additional set of data on its customer interactions. Companies will be able to use phrasing, tone, accent, and speed of delivery to learn far more about their customers than ever before. More data means companies can get better at understanding customer intent and attitude, such that they can take proactive steps to optimize the customer experience.
  • Voice presents opportunities for new types of engagement: Customers increasingly expect companies to respond to their queries immediately, whether during business hours or not. Voice and AI-powered conversational technology can help companies measure up to those expectations.

Intelligent conversational interfaces allow companies to scale up their capacity to engage with customers. The results: shorter customer service hold times, faster resolution of simple issues, and triage of complex questions before they are directed to the appropriate department. Intelligent, personalized voice-enabled assistants could also help health care companies scale “virtual medicine” and in-home care, and they could give financial services companies the capacity to handle customer service and provide financial advice at scale.

Voice is the most natural interface for humans. As conversational interfaces continuously learn, become smarter, and grow more aware of each individual’s preferences, they will become more valuable in augmenting the customer experience and building deeper relationships with brands.

Clement Tussiot is director of product management at Salesforce Service Cloud, which delivers customer service software in the cloud.

TLDR: The voice interface is the future of news and media

By Nieman Lab (these highlights provided for you by Annotote)

The future of news is humans talking to machines #voice interface #no UI #end-to-end audio

AI-driven voice interfaces, such as Amazon’s Alexa, Google’s Home and Assistant, Microsoft’s Cortana, and Apple’s upcoming HomePod [are] potentially bigger than the impact of the iPhone. In fact, I’d describe these smart speakers and the associated AI and machine learning that they’ll interface with as the huge burning platform the news industry doesn’t even know it’s standing on. Continue reading “TLDR: The voice interface is the future of news and media”

Onet revamps mobile news app with voice control feature

“This new app had to embrace the most common needs of mobile news media users: breaking news, quality content, fast load, error proof, and data that was planned efficiently for video consumption. We asked our users how, when, and why they were using our app. Part of that research revealed that our audience was often unable to consume our content because of their changing environments. Morning commuting that may involve subways, driving a car, or walking resulted in a news consumption experience that was not seamless. We knew that our new app needed to be hands free, and so we decided that our “wow” moment would be voice controls and audio consumption of news and articles.”

Read full story on INMA Ideas Blog

Why the typical voicebot user is a 52-year-old woman


There’s a race heating up in the world of artificial intelligence, and it involves smartphones, consumers, and their voices. Voice-activated smartphone apps use a combination of artificial intelligence, cloud-based natural language processing (NLP), and machine learning to power their services, and with all the buzz around apps like Siri and Alexa, it’s clear that tech giants are competing for yet another area of consumer attention.

While this is currently a small share of the app landscape, it’s also a rapidly evolving and expanding one, and includes apps such as Microsoft’s Cortana and a constellation of apps from Google, like Allo and Google Now. But what types of consumers are actually using them? And what are they using them for? Do voice-guided user interfaces and cloud-based AI apps represent the fourth big disruption in the world of the Internet, after the birth of the world wide web, the rise of social media, and the emergence of mobile apps?

Who’s using personal assistant apps? The average user is a 52-year-old woman

According to Verto’s data on the user base of AI-powered personal assistant apps on mobile devices, the personal assistant “superuser” – someone who spends more than twice the average user’s monthly time spent on personal assistant apps – is a 52-year-old woman who spends 1.5 hours per month with personal assistant apps.
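For clarity, that “superuser” definition is just a threshold of twice the mean monthly usage. A toy Python sketch with invented numbers (not Verto’s data) makes the cutoff explicit:

```python
# Toy sketch of the "superuser" definition: more than twice the average
# user's monthly minutes in personal assistant apps. Numbers are invented
# for illustration, not Verto's actual data.
monthly_minutes = {"ann": 95, "bob": 20, "cara": 31, "dee": 12}

average = sum(monthly_minutes.values()) / len(monthly_minutes)
superusers = {u: m for u, m in monthly_minutes.items() if m > 2 * average}

print(f"average: {average:.1f} min/month")  # 39.5
print("superusers:", superusers)            # {'ann': 95}
```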

In fact, personal assistant apps seem to be more popular among women overall: Verto’s data shows that women (54% of the total user base) use personal assistant apps slightly more frequently than men. And interestingly, there is a trend toward personal assistant app usage in older age groups, especially adults in the 45-54 and 55-plus brackets. Based on use cases and ease of functionality, these apps could see wider adoption among older generations as voice-activated apps become integral to assisted living communities. While most apps generate buzz or downloads among groups like millennials or Gen Z, personal assistant app users do not conform to the typical “early adopter” consumer profile, despite this being a relatively new app category.

Personal assistant app behavior varies across genders

In a follow-up survey to better understand why consumers use personal assistant apps, and whether these use cases differ across gender or age demographics, results show personal assistant app usage remains highly utilitarian. A majority (71%) of respondents indicated that asking a question or searching for something specific was one of their primary reasons for using a personal assistant app. This use case was particularly popular among women (who comprise 61% of respondents), and especially among women under the age of 30 (27%) and women over the age of 55 (12%). In contrast, men under the age of 30 comprised just 7% of respondents who cited asking questions or searching for something specific as one of their primary reasons for using personal assistant apps, while men over the age of 55 comprised just 8% of respondents.

Other, less popular reasons for using personal assistant apps include initiating a call or text message and checking the weather. So whether it’s a mobile app or a physical device, consumers haven’t fully embraced the idea of a personal assistant in their daily lives, and this presents a challenge for companies like Amazon and Google to continue enhancing their platforms and create the desired need in the market. At the same time, competitors like Apple see this gap as an opportunity to enter the race with their own services in an attempt to become the complete, all-in-one personal assistant for consumers.

What does this mean for assistant app publishers?

Think about the early stages of Instagram, Twitter, or even a music streaming service – most of these apps took a while to become fully integrated into our lives. Now, these apps have become crucial platforms for publishers to deliver targeted content based on consumer use. For personal assistant app publishers, it appears that there’s still a long way to go before consumers (of any gender or age group) begin adopting assistant apps more comprehensively throughout their daily digital behaviors.

But even in this nascent stage of the market, some clear consumer preferences are starting to emerge, and as the data shows, we can map these preferences and behaviors to distinct consumer demographic groups. With women seemingly embracing personal assistant apps more than their male counterparts, and on both ends of the age spectrum, publishers, brands, and advertisers could take advantage of this opportunity to create unique experiences for these specific groups. Using AI to power these apps allows companies to offer personalized content for each individual user and execute different campaigns at varying times of the day, based on peak usage and engagement.

However, in light of all this data, we still need to remember that personal assistant apps and devices are living their first era: it is mostly early-adopter users who use them actively and with high frequency. Like many successful technologies, they already show significant potential to replace and reshape many widely and frequently needed tasks and actions, so we need to keep an eye on this evolution going forward!

Dr. Hannu Verkasalo is the CEO of Verto Analytics, a pioneer in digital multiscreen media measurement services.

The rise of the voice interface


By now you are probably talking to machines more often than you are to your neighbors, your old college friends, or even your mom. Voice interfaces are everywhere. We put an Amazon Echo or Google Home in our kitchens and they quickly became part of our morning routine. We talk to Siri to find a good movie to watch. We search, send messages, control our connected devices, and we shop —  all by voice. For brands and marketers, this provides a unique opportunity to converse directly with consumers; however, it’s not as easy as it may appear to be.

There is a reason why voice interfaces, quirky and novel only a few years ago, are seeing such speedy adoption. They adapt to human behavior even better than the interfaces that came before them: first the GUI and the mouse, then the touch UI and our fingers, and now the conversational UI and voice. With each advancement of the human-to-machine interface, we got better at making the interaction more human. And, as you might expect, marketers jumped at this opportunity.

And our kids “get it” first. When the iPad debuted in 2010, kids quickly figured out how to use it by swiping the screen. But swiping a printed magazine didn’t work. They expected the rest of the world to work like the iPad, but all they got was a broken interface. Voice UI is following a similar pattern. In no time at all, our kids are talking to the machines. But when they say “Alexa, open the car window” or “Google, fix the TV signal,” nothing happens. As with the iPad, they are waiting for the world to catch up to them. Just as the mouse and touch were the interfaces of adults, voice is the one kids will know best.

Voice UI is big, and it will only get bigger as technology advances. We’ve already built a ton of Alexa skills, Google Home actions, and Siri extensions. And we are actually getting really good at explicit interactions like “Alexa, how long is my commute this morning?” or “OK Google, tell me the weather.” Life is great, right? Not quite. We love to talk to our bots, but we abandon them quickly. According to VoiceLabs, there is only a 3% chance a person will continue to use a Google Home action after the first week. That’s not a good statistic if you are a brand marketer trying to build one-to-one interactions with customers. And building more complex interactions that go beyond simple coordination is very hard.

So, why is that? For the most part, we are not creating conversations; we are building old-school commands hidden behind voice requests. These work well when we want to add something to a to-do list, play a song, or set an alarm to wake us up the next morning. Clear and deterministic. But they fail when there is more room for ambiguity. More complex interactions require collaboration, not just coordination. For example, I would like to talk to my virtual assistant on Saturday morning and decide what to do that weekend. I have a goal, a fun weekend, but not a clear way to achieve it. My assistant and I should be able to have a quick chat and together decide what to do. Additionally, it should already know me pretty well from previous conversations: my preferences, my aversions, and what I can sometimes be talked into.

Designing conversations is not new. Paul Pangaro, a conversation theorist, is probably the foremost authority. The architecture he proposes defines the simple elements and flow of a conversation: participants share a common context and language, define goals, and evaluate and exchange information repeatedly until they reach an agreement. Concise and simple, and perfect as a building block for creating a better conversational UI.
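Pangaro’s elements map naturally onto code. The sketch below is one loose Python rendering of that loop (my illustration, not Pangaro’s formal model): shared context, a goal, and repeated exchange until agreement is reached:

```python
from dataclasses import dataclass, field

# A loose rendering of Pangaro's conversation elements: shared context,
# a goal, and repeated exchange until agreement. This is an illustrative
# interpretation, not Pangaro's formal model.
@dataclass
class Conversation:
    context: dict = field(default_factory=dict)  # shared history, preferences
    goal: str = ""                               # e.g., "plan a fun weekend"
    agreed: bool = False

    def exchange(self, proposal: str, accepted: bool) -> None:
        """One evaluate-and-exchange turn: record the proposal and verdict."""
        self.context.setdefault("proposals", []).append((proposal, accepted))
        self.agreed = accepted

convo = Conversation(goal="plan a fun weekend")
for proposal in ["hiking trip", "food festival"]:
    # In a real assistant, 'accepted' would come from the user's reply.
    convo.exchange(proposal, accepted=(proposal == "food festival"))
    if convo.agreed:
        break
print("agreement:", convo.agreed, convo.context["proposals"])
```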

Some of the best tools today for creating conversations are PullString and Dexter. They present a friendly interface to writers while remaining flexible and powerful for developers. But to create better interfaces, we need to evolve these tools: for example, extract the business logic from the conversation layer to create a human-first tool, a writer’s tool, a distraction-free interface where a writer can concentrate on the art of conversation, and an AI-augmented interface for developers where logic is partially inferred from what the writer writes, both aided by design patterns like the one Pangaro defines. Voice AI developers need to start acting like brands, developing personalities and voices that bring the brand to life and can interact directly with consumers in a human-like way.

Some companies are already doing this, but it takes a lot of work. For example, PullString created an app for Barbie that lets the doll communicate directly with you. According to Oren Jacob, CEO of PullString, the Hello Barbie companion app has 8,000 lines of dialogue, “… which become tens of thousands of different intents, different context, and ways in which the conversation can branch and change.” Successful as this is, it demonstrates how much time and energy must be invested to make even the simplest of conversations happen between a brand’s voice technology and the consumer.

For marketers, there is a strong benefit to creating real conversations: they drive deeper connections and establish relationships between brand and consumer. Exchanging ideas, sharing goals, and forming agreements help establish a common history and build trust and unity. The more trust we have in the Alexa skill or Google Home action we’re installing, the more likely we are to keep using it. The brands that understand the power of a real conversation will be the ones fostering brand loyalty and creating future applications we can’t live without.

Martin Legowiecki is the Technology Director at advertising agency Deutsch and head of the agency’s new AI practice, Great Machine.
