This week I’m rounding off the first semester of classes on the new MA in Data Journalism with a session on artificial intelligence (AI) and machine learning. Machine learning is a subset of AI — and an area which holds enormous potential for journalism, both as a tool and as a subject for journalistic scrutiny.
With so many technological innovations now transforming our lives, it's worth noting that the ideas behind them have existed for decades in science fiction novels and television. The capacity to talk to a computer (and have it talk back) was a staple of Gene Roddenberry's Star Trek, where the Starfleet computer was voiced by Roddenberry's wife, Majel. The 1970 movie Colossus: The Forbin Project featured a supercomputer that was intended to prevent war and proclaimed itself "the voice of World Control." And before Google's self-driving cars, the 1980s brought us KITT, an advanced artificially intelligent, self-aware, and nearly indestructible car from the TV show Knight Rider.
In the 20 years I’ve covered the great digital transition, empty buzzwords and phony trends have been the rule, not the exception. Bullshit is not a byproduct of a startup economy of building scale before substance; it is actually a doctrine.
And yet, this past year brought some trends that almost certainly will have a deep impact on the survivability of media businesses. Here I explore some of these trends and try to offer a little insight into why they matter, and how they might continue to shake up the landscape as we look ahead to 2018.
In a world with limited doctors, emerging diseases and superbugs, and sharply rising healthcare costs, how can we successfully tackle healthcare problems at scale?
This is just one of the critical challenges India’s explosive startup community hopes to solve by implementing AI in new and innovative ways to serve the needs of 1.324 billion citizens. This is a feat that carries huge implications for the US and other healthcare ecosystems around the globe.
To understand how dire the situation is, it’s worth considering India’s health paradox. The country’s deep demographic dividend — which occurs when the majority of a country’s population consists of working-age individuals — is driving rapid and unprecedented growth, but it is also a ticking time bomb. With an average age of 27, India has one of the youngest and most educated populations in the world. Since 1991, this phenomenon has fueled approximately 7% annual growth, produced new goods and services, and reduced dependents in the economy.
But in order to keep reaping the benefits of this dividend, India’s young population needs to have access to quality nutrition and healthcare. In addition, as the dividend declines (as we are witnessing in China), the country will need new infrastructure in place to care for its aging population. And unfortunately, the infrastructure that is necessary doesn’t exist today.
The doctor-to-patient ratio in India is one of the worst in the world, with just 0.2 doctors for every 1,000 Indians (for comparison, there are 1.1 doctors for every 1,000 people in the US). Modern medical facilities — and as a result, doctors — are heavily concentrated in urban areas. And on top of heart disease and cancer, air pollution is taking a toll: Delhi's air, for instance, recently got so bad that breathing it was deemed equivalent to smoking 44 cigarettes per day.
The fundamental reason behind India’s healthcare issues is resource scarcity. India needs more medical facilities and more medical expertise, and both of these require time and billions of dollars to develop. But such resources are not easily obtainable, so we must consider other ways to dramatically increase access to existing resources in an effective and inexpensive way.
This is where AI has the potential to reshape India’s healthcare problem. Manu Rekhi, Managing Director of Inventus, says, “Indian AI platform companies are building upon two decades of India’s IT industry expertise. They are supercharging how software and human intelligence can partner to create new human-in-the-loop AI systems for global markets as well as the bottom of the pyramid.”
Indeed, a number of Indian startups have applied deep AI expertise to move the needle on specific health conditions and diseases. In some cases, these companies offer technology and distribution opportunities that attract Fortune 500 giants to partner with them, both for the India market and globally.
One such company is Tricog Health, a startup handpicked by GE's healthcare accelerator program for its cloud-based cardiac diagnosis platform. Coronary heart disease is increasingly prevalent in India, having escalated from causing 26% of adult deaths in 2003 to 32% in 2013. Tricog increases access to cardiac care across 340 cities in 23 states, including some of the most remote locations in India. The company's platform collects physiological data and ECGs from medical devices in the field, then uses specialized AI to process the data in real time and provide a diagnosis to the cardiologist. The cardiologist then reviews and recommends next steps to the GP or nurse in the field instantaneously using the Tricog mobile app. By using Tricog's AI engine, a few specialists can diagnose over 20,000 patients.
Another startup, Bengaluru-based Aindra Systems, is using AI to tackle cervical cancer, the second most common cancer among Indian women between the ages of 15 and 60. In fact, India accounts for a whopping one-third of the global incidence of cervical cancer. Aindra's solution can detect cervical cancer in its early stages and measurably increase the odds of survival. It boosts the productivity of the pathologists who screen cervical cancer samples, who would otherwise need to manually examine each sample and flag cases with a high cancer probability to an oncologist for further review.
Adarsh Natarajan, Founder and CEO of Aindra Systems, says “Our vision is to implement mass cervical cancer screening using AI, and help the 330 million Indian women in the at-risk age bracket. With early detection, up to 90% of deaths can be avoided with appropriate treatment. Aindra’s computational pathology platform includes an affordable and portable, ‘point-of-care’ cervical cancer screening device to automate deep learning analysis and bring down the screening time significantly to help detect cancer at an early stage.”
The AI boom in healthcare is just starting, and the up-and-coming list of players is endless. Niramai is working on early detection of breast cancer. Ten3T is providing remote health monitoring services via AI to detect anomalies and alert the doctor. HealthifyMe, a Bangalore startup, is working on lifestyle diseases like obesity, hypertension, and diabetes. With its AI-enabled nutrition coach, Ria, HealthifyMe brings the best of elite nutrition expertise with AI in the loop.
And of course, global corporate leaders like Google bring their capabilities to India as well. Google recently partnered with Aravind Eye Hospitals to use image recognition algorithms to detect early signs of diabetic retinopathy, an eye disease that can cause blindness in diabetics if not treated early. Aravind Eye Hospitals is the largest eye care group in the world, having treated 32 million patients and performed 4 million surgeries. They have provided 128,000 retinal images to Google that have been invaluable for the application of AI to detect diabetic retinopathy in 415 million at-risk diabetic patients worldwide.
With a bevy of solutions on the rise, India is poised to leapfrog some of the key barriers of conventional healthcare, which of course has profound implications for healthcare delivery in other countries, including the US. With rising costs and unfavorable government policies, an increasing number of people are priced out of access. The burden on emergency rooms across the country is increasing as more people are unable to afford preventative care at primary care centers. AI-assisted technologies could reduce the costs in the US using the same mechanism — affordably scaling access to millions of people.
These startup-driven innovations and global platforms are just the tip of the iceberg. AI can ultimately become a force multiplier in bringing preventative healthcare facilities to anyone and everyone, rather than just urban or affluent communities. As you’ll often hear AI experts say, “more data beats better algorithms.” In other words, simpler algorithms only need a larger training dataset to generate accurate, valuable predictions for both payers and providers. With 1.3 billion citizens, India has the potential to provide the vast amounts of data needed to improve the accuracy and precision of our algorithms and empower both startups and large companies to help solve healthcare problems around the world.
Pranav Deshpande works with Startup Bridge at Stanford, an annual conference held in December at Stanford where top technology innovators from India and Silicon Valley build strategic partnerships to innovate for the world.
Researchers estimate that by 2020 we will speak to chatbots more than we speak to our spouses. Clearly, companies that implement chatbots well are onto something. However, businesses still have a hard time determining whether their bots are up to snuff. While there are plenty of effective chatbots on the market, many don't quite meet consumers' needs. So how do you measure the success of your chatbot?
This is the dilemma facing an increasing number of companies that use chatbots as part of their customer experience. Eighty percent of businesses want to implement a chatbot by 2020, but many still face the challenge of gauging the efficacy of the technology.
Google’s Chatbot Analytics platform recently opened up to all, but it is still necessary for businesses to develop and understand their own chatbot success metrics to effectively use the platform.
The process of defining the best KPIs for your company’s bot will depend on your business goals and the functions you want your bot to perform.
Here are seven metrics of success you can use to identify opportunities for improvement in your company’s chatbot.
The first question any prospective investor asks about a company is whether it makes money. Likewise, the best indicator of a chatbot's value is its financial benefit.
There are many ways to evaluate a bot’s impact on revenue – the best one for your bot will depend on its purpose. Another interesting wrinkle is that your chatbot can have a knock-on effect on a number of areas.
For example, you can measure a customer service bot’s profitability growth by the amount of money it saves the company compared to maintaining a customer service team 24/7. But you will want to take the bot’s impact on customer service into account. If self-service rates are higher and clients are more satisfied, that will result in repeat customers and higher online sales, thus impacting top-line revenue growth.
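A back-of-the-envelope version of that comparison might look like this (all the numbers are hypothetical, and a real analysis would also fold in the knock-on revenue effects described above):

```python
def chatbot_cost_savings(tickets_handled_by_bot, cost_per_human_ticket, bot_monthly_cost):
    """Estimate monthly savings: tickets the bot resolved that a human would
    otherwise have handled, minus the bot's own running cost."""
    gross_savings = tickets_handled_by_bot * cost_per_human_ticket
    return gross_savings - bot_monthly_cost

# Hypothetical numbers: 4,000 bot-resolved tickets at $5 per human-handled
# ticket, against a $6,000/month bot.
savings = chatbot_cost_savings(4000, 5.0, 6000)
print(savings)  # 14000.0
```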
Nirvana comes for businesses the moment a user gets exactly what they want from the chatbot without any human input.
If your chatbot’s goal is to change a user’s password, you would measure success by the percentage of user interactions that end with this as a result.
The self-service rate closely correlates with the cost-savings aspect of revenue growth – in other words, how much money did your chatbot save?
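A minimal sketch of the metric itself, computed from session counts:

```python
def self_service_rate(total_sessions, escalated_to_human):
    """Share of sessions the bot resolved end-to-end, with no human handoff."""
    if total_sessions == 0:
        return 0.0
    return (total_sessions - escalated_to_human) / total_sessions

# 1,000 sessions, 220 of which needed a human agent.
print(self_service_rate(1000, 220))  # 0.78
```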
What better way to find out exactly how well your chatbot is doing than to ask the very people who use it?
Your chatbot can help you determine this metric by asking the key question for the Net Promoter Score – “On a scale of 1-10 how likely is it that you would recommend our chatbot to a friend/colleague?” As a lead indicator of growth, the NPS provides a crucial foundation for understanding your chatbot’s customer experience performance.
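Computing the score from collected ratings follows the standard NPS formula: the percentage of promoters (ratings of 9-10) minus the percentage of detractors (0-6).

```python
def net_promoter_score(ratings):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6),
    expressed as a number between -100 and 100."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Ten hypothetical survey responses: 5 promoters, 2 detractors, 3 passives.
ratings = [10, 9, 8, 7, 6, 10, 3, 9, 8, 10]
print(net_promoter_score(ratings))  # 30.0
```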
At this point, it’s worth reflecting on AARRR (the “pirate metrics” framework: Acquisition, Activation, Retention, Referral, Revenue) and its importance in measuring the success of your business.
The activation rate in the context of a chatbot refers to when a user responds to its initial message with a question or an answer which is relevant to your business goals.
For example, a chatbot designed to provide you with weather updates would count an activation when you enter your location, which allows the bot to provide you with the forecast.
How can this KPI help? If for some reason people were not responding when the weather chatbot first reached out to them, the botmaster would be able to tinker with it to enable a more satisfactory outcome.
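Counting activations from session logs is simple once each session records whether the opening message got a relevant reply (the log format here is a hypothetical illustration):

```python
def activation_rate(sessions):
    """Share of sessions where the user replied to the bot's opening message
    with something relevant to the business goal."""
    activated = sum(1 for s in sessions if s["replied_relevantly"])
    return activated / len(sessions)

# Hypothetical session log for the weather bot.
sessions = [
    {"user": "a", "replied_relevantly": True},
    {"user": "b", "replied_relevantly": False},
    {"user": "c", "replied_relevantly": True},
    {"user": "d", "replied_relevantly": True},
]
print(activation_rate(sessions))  # 0.75
```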
Unfortunately, even bots with the most robust natural language processing are unable to understand everything a user says.
These errors are a useful indicator for measuring whether or not you need to improve your chatbot’s matching.
Bear in mind there are three different triggers, each of which necessitates its own type of response.
There is first the simple confusion from the bot if it cannot understand a comment. A basic “Sorry, I didn’t understand that. Can you ask again in a different way?” response would suffice.
Second is if the user sends a number of messages which are outside the remit of your chatbot. After a couple of attempts, it would be worth programming your bot to relay a message that reminds the user of its exact purpose.
The final trigger is if the bot forces a user to speak to a customer service agent after the interaction. Each of these will tell you something different about how your chat agent is performing.
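A sketch of how those three triggers might map to responses — the escalation threshold and the wording are illustrative choices, not a prescription:

```python
def fallback_response(unmatched_streak, needs_human):
    """Pick a response for the three fallback triggers described above:
    a single misunderstood message, a streak of off-topic messages,
    or a forced handoff to a human agent."""
    if needs_human:
        return "Let me connect you with a customer service agent."
    if unmatched_streak >= 3:
        # Remind the user of the bot's exact purpose after repeated misses.
        return ("I may not be the right helper for that. I can assist with "
                "orders, returns, and delivery tracking.")
    return "Sorry, I didn't understand that. Can you ask again in a different way?"

print(fallback_response(1, False))
print(fallback_response(3, False))
print(fallback_response(5, True))
```

Logging which trigger fired, and how often, is what turns these fallbacks into the measurable KPI described above.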
Once again referring to AARRR, the retention rate represents the percentage of users who return to the chatbot over a specified period of time.
This timespan varies between bots depending on their purpose. For example, a fitness chatbot would require daily interaction and would benefit from analyzing its 1-day retention.
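Computing an n-day retention rate from session logs might look like this (a minimal sketch with illustrative dates):

```python
from datetime import date

def n_day_retention(first_seen, sessions, n):
    """Share of users who came back exactly n days after their first visit."""
    returned = sum(
        1 for user, first in first_seen.items()
        if any((d - first).days == n for d in sessions.get(user, []))
    )
    return returned / len(first_seen)

first_seen = {"a": date(2017, 12, 1), "b": date(2017, 12, 1), "c": date(2017, 12, 2)}
sessions = {
    "a": [date(2017, 12, 2)],  # back the next day
    "b": [date(2017, 12, 5)],  # back, but not on day 1
    "c": [date(2017, 12, 3)],  # back the next day
}
print(n_day_retention(first_seen, sessions, 1))  # ≈ 0.667
```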
Artificial intelligence/machine learning rate
How strong is the AI in your chatbot? You can use the percentage of user questions that are correctly understood to measure this.
Which leads us to the million-, if not billion-, dollar question: can my chatbot learn independently?
Chatbots with machine learning can measure progress by comparing the improvement in self-service rate over a period of time without human intervention.
An agent with robust machine learning will be able to continually run its own gap analysis to highlight potential areas of improvement.
The demand for chatbots among Millennials is clear. Consumers are asking for simple and effective customer service, but not every chatbot is capable of delivering on this promise without a few tweaks. In a market that is becoming increasingly crowded, these KPIs can help you keep your chatbot one step ahead of the pack.
Jordi Torras is CEO and founder of Inbenta, an artificial intelligence technology company.
Bots are gaining lots of attention, thanks to the momentum in artificial intelligence and natural language processing. In fact, according to this Business Insider study, 80 percent of businesses are using, or intend to use, chatbots by 2020. While simplified rules-based bot builders like QNAMaker.ai make entering the realm of bots easy, truly conversational and awe-inspiring brand experiences require a thoughtful, strategic approach. This means that before you write (or hire out) a single line of code, you’ll want to do some planning. Let’s start with the most obvious question.
One day, artificially intelligent robots will replace human beings as the Earth’s dominant form of life, or so says Stephen Hawking (and Elon Musk). However, for all the impressive progress AIs and robots have made in recent years, the most urgent and real danger they pose to society lies elsewhere.
More specifically, the threat lies not with the possibility of AIs transcending the specific purposes they now serve (such as managing hedge funds or recruiting new employees) and rebelling against their owners. Rather, it resides more with the opposite scenario, with just how supremely efficient AIs are in acting on behalf of their masters.
The healthcare industry is abuzz over consumer engagement and empowerment, spurred by a strong belief that when patients become more engaged in their own care, better outcomes and reduced costs will result.
Nevertheless, from the perspective of many patients, navigating the healthcare ecosystem is anything but easy.
Consider the familiar use case of booking a doctor’s appointment. The vast majority of appointments are still scheduled by phone. Booking the appointment takes on average ten minutes, and the patient can be on hold for nearly half of that time.
These are the kinds of inefficiencies that compound one another across the healthcare system, resulting in discouraged patients who aren’t optimally engaged with their care. For example, the system’s outdated infrastructure and engagement mechanisms also contribute to last-minute cancellations and appointment no-shows—challenges to operational efficiency that cost U.S. providers alone as much as $150 billion annually.
Similarly, long waits for appointments and the convoluted process of finding a doctor are among the biggest aggravations for U.S. patients seeking care. A recent report by healthcare consulting firm Merritt Hawkins found that appointment wait times in large U.S. cities have increased 30 percent since 2014.
It’s time for this to change. Many healthcare providers are beginning to modernize, but moving from phone systems to online scheduling, though important, is only the tip of the iceberg. Thanks to new platforms and improved approaches to integration of electronic medical records (EMR), the potential for rapid transformation has arguably never been greater.
This transformation will take many shapes—but one particularly excites me: voice. While scheduling and keeping a doctor’s appointment might be challenging today, it’s not far-fetched to envision a near future in which finding a doctor may be as simple as telling your favorite voice-controlled digital assistant, “Find me a dermatologist within 15 miles of my office who has morning availability in the next two weeks and schedule me an appointment.”
How voice has evolved in healthcare: The rise of technology platforms
Voice technologies have been generating excitement in the healthcare space for years. Because doctors can speak more quickly than they can type or write, for example, the industry has been tantalized by the promise of natural language processing services that translate spoken doctors’ notes into electronic text.
No single company or healthcare provider holds all the keys to this revolution. Rather, it hinges on a variety of players leveraging technology platforms to create ecosystems of patient care. These ecosystems are possible because, in contrast to even a few years ago, it is far more feasible to make software interoperate—and thus to combine software into richer services.
Apps built on these platforms can also leverage APIs to connect disparate systems, data, and applications, anything from a simple microservice that surfaces inventory for medical supplies to FHIR-compliant APIs that allow access to patient data in new, more useful contexts. Understanding the possibilities and challenges of connecting these modern interfaces to EMR systems, which generally do not easily support modern interoperability, may be one of the biggest obstacles. Well over a quarter-million health apps exist, but only a fraction of these can connect to provider data. If voice-enabled health apps follow the same course, flooding the market without an approach to EMR interoperability, it could undermine the potential of these voice experiences to improve care.
Fortunately, as more providers both move from inflexible, aging software development techniques such as SOA to modern API-first approaches and adopt the FHIR standard, these obstacles should diminish. FHIR APIs allow providers to focus on predictable programming interfaces instead of underlying systems complexity, empowering them to replace many strained doctor-patient interactions with new paradigms.
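To illustrate what that predictability looks like to a developer, here is a sketch that builds a standard FHIR Patient search URL and extracts names from the Bundle a server would return. The base URL is hypothetical; the `family` and `address-city` search parameters and the Bundle shape follow the FHIR specification.

```python
from urllib.parse import urlencode

# Hypothetical FHIR server base URL.
FHIR_BASE = "https://fhir.example-hospital.org/r4"

def patient_search_url(family_name, city):
    """Build a standard FHIR search request for the Patient resource."""
    params = urlencode({"family": family_name, "address-city": city})
    return f"{FHIR_BASE}/Patient?{params}"

def patient_names(bundle):
    """Pull display names out of a FHIR search Bundle."""
    names = []
    for entry in bundle.get("entry", []):
        for name in entry["resource"].get("name", []):
            names.append(" ".join(name.get("given", []) + [name.get("family", "")]))
    return names

# A minimal Bundle like the one a FHIR server would return.
sample_bundle = {
    "resourceType": "Bundle",
    "entry": [{"resource": {"resourceType": "Patient",
                            "name": [{"family": "Shaw", "given": ["Amy"]}]}}],
}
print(patient_search_url("Shaw", "Bengaluru"))
print(patient_names(sample_bundle))  # ['Amy Shaw']
```

The point is that a developer works against this uniform interface rather than against each EMR vendor's internals.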
As it becomes simpler for developers to work with EMR systems alongside voice interfaces and other modern platforms, the breadth and depth of new healthcare services could dramatically increase. Because developers can work with widely adopted voice assistants such as Google Assistant, Apple’s Siri, and Amazon’s Alexa, these new services won’t need to be confined to standalone apps. Instead, they can seamlessly integrate care and healthier activity into a user’s day-to-day routines.
Many of us already talk to our devices when we want information on things like traffic conditions, movie times, and weather forecasts. Likewise, many of us are already accustomed to taking advice from our digital assistants, such as when they point out conflicts on our calendars or advise us to leave in order to make it to a meeting on time. It’s natural these interfaces will expand to include new approaches to care: encouraging patients to exercise, reminding them to take medications, accelerating diagnoses by making medical records more digestible and complete, facilitating easier scheduling, etc.
Indeed, research firm Gartner’s recent “Top 10 Strategic Technology Trends for 2018” speaks to the potential of voice and other conversational interaction models: “These platforms will continue to evolve to even more complex actions, such as collecting oral testimony from crime witnesses and acting on that information by creating a sketch of the suspect’s head based on the testimony.”
As voice and other interfaces continue to evolve from scripted answers to more sophisticated understandings of user intent and more extemporaneous, context-aware ways of providing service, the nature of daily routines will change. For example, whereas many patients today feel anxiety over finding the time and focus to pursue better care, in the near future, this stress will likely diminish as more healthcare capabilities are built into platforms and interaction models consumers already use.
What comes next?
It’s clear that providers feel the urgency to improve patient engagement and operational efficiency. Research firm Accenture, for example, predicts that by the end of 2019, two-thirds of U.S. health systems will offer self-service digital scheduling, producing $3.2 billion in value. That’s a start, but there’s much more to do.
More capabilities will need to be developed and made available via productized APIs, platforms will need to continue to grow and evolve, and providers must adopt operational approaches that allow them to innovate at a breakneck pace while still complying with safety and privacy regulations.
But even though work remains, voice platforms and new approaches to IT architecture are already changing how patients and doctors interact. As more interoperability challenges are overcome, the opportunities for voice to be a meaningful healthcare interface are remarkable.
For the biggest changes, the question likely isn’t if they will happen but how quickly.
Aashima Gupta is the global head of healthcare solutions for Google Cloud Platform where she spearheads healthcare solutions for Google Cloud.
News media badly need improved recommendation engines. Scoring the inventory of stories could help. This is one of the goals of the News Quality Scoring Project. (Part of a series.)
For news media, recommendation engines are a horror show. The NQS project I’m working on at Stanford forced me to look at the way publishers try to keep readers on their property — and how the vast majority conspire to actually lose them.
I will resist including the terrible screenshots I collected for my research. Instead, let’s look at the practices that prevent a visitor from continuing to circulate inside a website (desktop or mobile):
— Most recommended stories are simply irrelevant. Automated, keyword-based recommendations yield poor results: merely mentioning a person’s name, or various named entities (countries, cities, brands) too often digs up items that have nothing to do with the subject matter. In other words, without a relevancy weight attached to keywords in the context of a story, keyword-based recommendations are useless. Unfortunately, they’re widespread.
Similarly, little or no effort is made to disambiguate potentially confusing words. On one major legacy media site, I just saw an op-ed about sexual harassment that referred to Harvey Weinstein linked to a piece on Donald Trump’s dealings with Hurricane Harvey; the same article was also linked to a story on Amazon’s takeover of the retail industry, purely by coincidence: both articles happened to mention Facebook.
— Clutter. Readers always need a minimum of guidance. Finding the right way to present recommended stories (or videos) can be tricky. Too many modules on a page, whatever they are, will render the smartest recommendation engine useless.
— Most recommendation systems don’t take into account basic elements such as the freshness or length of a related piece. Repeatedly direct your reader toward a shallow three-year-old piece and it’s highly likely she will never again click on your suggestions.
— Reliance on Taboola or Outbrain. These two are the worst visual polluters of digital news. Some outlets use them to recommend their own production but, in most cases, through “Elsewhere on the web” headers, they send the reader to myriad clickbait sites. This comes with several side effects: readers go away, their behavioral data goes with them, and the widgets disfigure even the best designs. For the sake of a short-term gain (these two platforms pay a lot), publishers give up their ability to retain users and leak tons of information in the process, information that Taboola, Outbrain, and their ilk resell to third parties. Smart move indeed.
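The relevancy problem in the first bullet is tractable: even a tiny TF-IDF weighting, which discounts terms that appear in many stories, keeps an incidental mention from driving a recommendation. A minimal sketch, with toy pre-tokenized stories:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Tiny TF-IDF: weigh a term down if it appears in many stories, so an
    incidental mention stops dominating keyword-based recommendations."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return [
        {t: (c / len(doc)) * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in docs
    ]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "weinstein harassment hollywood harassment".split(),
    "trump hurricane harvey response facebook".split(),
    "weinstein harassment allegations studio".split(),
]
vecs = tfidf_vectors(docs)
# Story 0 should look far more similar to story 2 than to story 1.
print(cosine(vecs[0], vecs[2]) > cosine(vecs[0], vecs[1]))  # True
```

Real systems add named-entity disambiguation on top, but even this weighting step would have avoided the Harvey Weinstein / Hurricane Harvey mismatch above.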
I could mention dozens of large media brands afflicted with those ailments. For them, money is not the problem. Incompetence and carelessness are the main culprits. Managers choose not to invest in recommendation engines because they simply don’t understand their value.
. . . . .
Multibillion businesses are based on large investment in competent recommendation engines: Amazon (both for its retail and video businesses); YouTube and, of course, Netflix.
The latter is my favorite. Four years ago, I realized the size and scope of Netflix’s secret weapon, its suggestion system, when reading this seminal Alexis Madrigal piece in The Atlantic.
Madrigal was the first to reveal the number of genres, sub-genres, and micro-genres Netflix’s descriptors assign to its film library: 76,897! Behind that number lies the incredible task of manually tagging every movie and generating a vast set of metadata, ranging from “forbidden-love dramas” to heroes with a prominent mustache.
Today, after a global roll-out of its revamped recommendation engine (which handles cultural differences between countries), the Netflix algorithm is an invaluable asset, benefiting viewership and subscriber retention. In his technical paper “The Netflix Recommender System: Algorithms, Business Value, and Innovation” (pdf here), Carlos Gomez-Uribe, VP of product innovation at Netflix says (emphasis mine):
Our subscriber monthly churn is in the low single-digits, and much of that is due to payment failure, rather than an explicit subscriber choice to cancel service. Over years of development of personalization and recommendations, we have reduced churn by several percentage points. Reduction of monthly churn both increases the lifetime value of an existing subscriber and reduces the number of new subscribers we need to acquire to replace canceled members. We think the combined effect of personalization and recommendations save us more than $1B per year.
Granted, the Netflix example is a bit extreme. No news media company can invest $15M or $20M in a single year and put 70 engineers to work redesigning a recommendation engine.
For Netflix, it was deemed a strategic investment.
Media should consider that too, especially given declining advertising performance and the resulting reliance on subscriptions. Making a user view five pages per session instead of three makes a big difference in Average Revenue Per User (ARPU). It also increases loyalty and reduces churn in the paid-for model.
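To see why the churn reduction in the Netflix quote is worth so much, here is a toy lifetime-value calculation. It uses the textbook approximation that expected subscriber lifetime is 1/churn months; the ARPU and churn figures are hypothetical.

```python
def lifetime_value(monthly_arpu, monthly_churn):
    """Rough subscriber LTV: expected lifetime is 1/churn months,
    so LTV ~= ARPU / churn."""
    return monthly_arpu / monthly_churn

# Hypothetical $10/month subscription.
print(lifetime_value(10.0, 0.05))  # 200.0  (5% churn -> ~20-month lifetime)
print(lifetime_value(10.0, 0.03))  # ≈ 333.33 (cutting churn by 2 points adds ~$133 per subscriber)
```

Multiply that per-subscriber gain by a few hundred thousand subscribers and the scale of Netflix’s “more than $1B per year” claim becomes plausible.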
How can scoring stories change that game? Powered by data science, the News Quality Scoring Project is built on a journalistic approach to the quantitative attributes of great journalism. (This part is provided by a great team of French data scientists at Kynapse, which works with gigantic datasets in the energy and health sectors.)
Let’s consider the ideal attributes of good recommendation engines for news, and see how they can be quantified.
—Relevancy: how the piece relates to the essence of the referential article, as opposed to an incidental mention (which should rule out basic keyword systems that generate so many embarrassing false positives).
—Freshness: The more recent, the better. Sending someone who just read a business story about the digital economy to an old piece makes no sense, as that environment changes fast. Practically, it means that an obsolescence weight should be applied to any news item. Except that we need to take into account the following attribute…
—…“Evergreenness”: The evergreen story is the classic piece that lasts (nearly) forever. A good example is the Madrigal piece mentioned above: its freshness index (it was published in January 2014) should exclude it from any automated recommendation, but its quality, the fact that very little journalistic research rivals the author’s work, and the resources the publisher deployed (quantifiable in the time The Atlantic’s editors gave Madrigal and the person-hours spent discussing, editing, and verifying the piece) all contribute to the piece’s lasting value.
—Uniqueness: a factor that neighbors “evergreenness,” but with a greater sensitivity to the timeliness of the piece; uniqueness must also be assessed in the context of competition. For example: ‘We crushed other media with this great reportage about the fall of Raqqa; we did because we were the only ones to have a writer and a videographer embedded with the Syrian Democratic Forces.’ Well… powerful and resource-intensive as this article was, its value will inexorably drop over time.
—Depth: a recommendation engine has no business digging up thin content. It should only surface archived pieces that carry comprehensive research and reporting. Depth can be quantified by length, information density (a variety of sub-signals measure just that) and, in some cases, the authorship features of a story, i.e. multiple bylines and mentions such as “Additional reporting by…” or “Researcher…” This tagging system is relatively easy to implement in the closed environment of a publication but, trust me, much harder to apply to the open web!
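As a sketch of how these attributes might combine into a single number, here is a toy scoring function. The weights, the 90-day freshness half-life, and the evergreen override are purely illustrative assumptions, not the NQS formula.

```python
import math
from datetime import date

def story_score(depth, uniqueness, evergreen, published, today=date(2017, 12, 1)):
    """Toy composite score: depth and uniqueness on a 0-1 scale, an
    exponential freshness decay (half-life ~90 days), and an evergreen
    flag that protects classics from the decay."""
    age_days = (today - published).days
    freshness = 1.0 if evergreen else math.exp(-age_days * math.log(2) / 90)
    return 0.4 * depth + 0.3 * uniqueness + 0.3 * freshness

# A fresh-but-thin piece versus an old evergreen classic.
fresh_thin = story_score(depth=0.2, uniqueness=0.3, evergreen=False,
                         published=date(2017, 11, 28))
old_classic = story_score(depth=0.9, uniqueness=0.8, evergreen=True,
                          published=date(2014, 1, 2))
print(old_classic > fresh_thin)  # True
```

The interesting design question is exactly the one the attributes above raise: when should the evergreen flag beat the obsolescence weight, and by how much.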
The News Quality Scoring platform I’m working on aims to vastly improve the performance of recommendation engines. By coming up with a score for each story (and eventually each video), I want to elevate the best editorial content a publication has to offer.
=> Next week, we’ll look at the complex process of tagging large editorial datasets in a way that is comparable enough to what Netflix does. This will shed light on the inherent subjectivity of information and on the harsh reality of unstructured data (unlike cat images, news is a horribly messy dataset). We’ll also examine how to pick the right type of recommendation engine.
Conversational interfaces have become a larger part of our day-to-day lives. Many of us wake up in the morning and talk to Alexa or Google Home about the weather, ask Siri to “call mom” during our commute, and engage with several Slack apps and bots throughout the workday. But as the AI and bot industry matures, developers realize that users do not really care about AI or chat. What they really want is a better way to complete tasks. In this article, I will talk about how to optimize your conversational interface to do just that.
In my bot design book, I coined the term “conversational funnel.” Like a web or mobile user funnel, a conversational funnel models and measures the engagement, ease of use, and conversion of a user within a conversational interface.
Let’s imagine the conversational funnel for a bot that helps people buy a new laptop:
In each step of the funnel, users get closer to making the purchase, but fewer and fewer of them progress through the entire funnel. Many drop off — confused, distracted, or just tired of the process.
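The drop-off described above can be measured the same way a web funnel is. Here is a short sketch, with made-up step names and counts for the laptop-buying example:

```python
# Hypothetical step counts for the laptop-buying bot's funnel.
funnel = [
    ("opened conversation", 1000),
    ("stated laptop needs", 640),
    ("viewed recommendations", 410),
    ("added to cart", 180),
    ("completed purchase", 95),
]

def step_conversion(funnel):
    """Conversion rate of each step relative to the previous one."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates.append((name, n / prev_n))
    return rates

for name, rate in step_conversion(funnel):
    print(f"{name}: {rate:.0%}")
print(f"overall: {funnel[-1][1] / funnel[0][1]:.1%}")  # end-to-end conversion
```

Instrumenting each step like this shows exactly where users drop off, which is where the optimizations below (buttons, menus, dialogs) should be applied first.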
Never underestimate the power of making things easy to use. Companies like Lyft and Uber, and even Google and Amazon, were built on the foundation of providing an easier way to consume a pre-existing service or product. Conversational interfaces are no different. If they can become an easier, more pleasant way to identify a service or a product, they will be successful. Otherwise, they will fail.
Let’s explore ways to optimize a conversational funnel.
Buttons for simple choices
Here is a very common funnel failure we see with conversational interfaces today:
At this point, the user is already frustrated. They would rather go online and view the report on a web page. It is definitely not a pleasant or easy experience. The user isn’t satisfied, and the bot appears dense and unhelpful. But what if we change the interaction just a bit?
Now it is very clear what the user should do. Taking action is quick and contextual, whether interacting on web or mobile.
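For Slack bots, a button interaction like this was built with interactive message attachments at the time of writing. The sketch below shows the general shape of such a payload; the `callback_id`, button names, and values are hypothetical:

```python
# Sketch of a Slack interactive message offering buttons instead of free text.
# Follows Slack's attachment/actions message format; the callback_id and
# button values here are invented for the example.
message = {
    "text": "Which report would you like to see?",
    "attachments": [
        {
            "fallback": "Choose a report",
            "callback_id": "report_choice",       # routes the click back to your handler
            "attachment_type": "default",
            "actions": [
                {"name": "report", "text": "Sales", "type": "button", "value": "sales"},
                {"name": "report", "text": "Traffic", "type": "button", "value": "traffic"},
                {"name": "report", "text": "Support", "type": "button", "value": "support"},
            ],
        }
    ],
}
```

When the user clicks a button, Slack posts the chosen `value` back to the bot, so the next step of the funnel starts with clean, unambiguous input instead of free-form text that must be parsed.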
Pull-down lists for more choices
This problem of capturing the right information becomes even more complex when choosing from a large list of items:
Again, the user is frustrated. It is hard to spell out a specific item from a large list. Let’s make a small modification:
Now the experience is much easier. For example, Slack menus are auto-complete enabled, so it’s simple for the user to start typing and select the right item from a list. Again, improvements like these make the interaction more productive than the web or mobile app alternatives.
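In Slack's message-menu format of the time, a long list became a select action; with an external data source, Slack queries the bot's server as the user types, which is what powers the auto-complete. A sketch, with a hypothetical `callback_id`:

```python
# Sketch of a Slack message menu (select) for choosing from a large list.
# With data_source "external", options are filtered server-side as the
# user types, enabling auto-complete over lists too long to inline.
menu_message = {
    "text": "Which laptop model are you interested in?",
    "attachments": [
        {
            "fallback": "Pick a laptop model",
            "callback_id": "laptop_choice",
            "attachment_type": "default",
            "actions": [
                {
                    "name": "model",
                    "text": "Pick a model…",
                    "type": "select",
                    "data_source": "external",   # Slack asks your server for options
                    "min_query_length": 2,       # wait for 2 typed characters first
                }
            ],
        }
    ],
}
```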
Forms for collecting structured data
When it comes to real-life examples, many bots must collect more structured information, which can be particularly cumbersome to do with regular text.
Just reading this correspondence might make you tired. Capturing long structured user input is a nightmare when turned into plain conversation. So what should a bot do? Well, this week my team is introducing dialogs, a new Slack feature that allows developers to build interactive forms.
Once the user has clicked on the button, a well-structured form pops up in the conversation.
After the user fills in the form, the conversational interface can change accordingly, keeping the information captured in the context of the conversation.
As you can see, in this example the bot captured the structured data and brought the results back into the conversation, keeping the core value of bots in messaging apps: retaining a quick and contextual workflow.
Dialogs are new interactive modals that can capture multiple pieces of information and send them directly to your bot. You can use them to build a robust form inside Slack, or simply collect a single line of text.
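A sketch of what such a dialog payload looks like, following Slack's dialog.open API; the `callback_id`, element names, and options are hypothetical:

```python
import json

# Sketch of a dialog definition for the laptop-ordering example.
dialog = {
    "callback_id": "laptop_order",
    "title": "Order a laptop",        # Slack caps dialog titles at 24 characters
    "submit_label": "Submit",
    "elements": [
        {"type": "text", "label": "Full name", "name": "full_name"},
        {"type": "text", "label": "Shipping address", "name": "address"},
        {
            "type": "select",
            "label": "Operating system",
            "name": "os",
            "options": [
                {"label": "macOS", "value": "macos"},
                {"label": "Windows", "value": "windows"},
                {"label": "Linux", "value": "linux"},
            ],
        },
    ],
}

# In a real handler you would POST this to dialog.open together with the
# trigger_id from the button interaction, roughly:
# requests.post("https://slack.com/api/dialog.open",
#               data={"token": BOT_TOKEN,
#                     "trigger_id": payload["trigger_id"],
#                     "dialog": json.dumps(dialog)})
serialized = json.dumps(dialog)
```

On submit, Slack sends all the captured fields back to the bot in one structured payload, which is what lets the conversation pick up with the data already in context.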
Traditional conversation is a great hammer, but not everything’s a nail. Rich interactions like buttons, menus, and dialogs make for better-performing conversational funnels and help create a more pleasant and productive way to get things done. With these emerging experiences, we take a large step toward making our lives simpler, more pleasant, and more productive.
Amir Shevat is head of developer relations at Slack.
Imagine sitting across from your new virtual assistant, Nadia. Although you’re staring into the eyes of a computer-generated image, “she” has extremely detailed and lifelike physical features. Her face has realistic bone structure, and her muscles appear to move exactly like a human’s. She even has realistically detailed skin blemishes.
Nadia is a “digital human” that was released earlier this year by Soul Machines, an Auckland-based company that develops highly detailed avatars with personality and character. Nadia was created for the National Disability Insurance Scheme (NDIS) in Australia, using IBM Watson’s AI technology.
She was designed to help disabled people learn more about the NDIS and the resources it provides. Customers can go on the NDIS website and interact with her directly. One of the most compelling features of Nadia is her level of emotional responsiveness. She can “see” who she is talking to and customize her conversation based on emotional cues. For instance, if Nadia senses a customer is upset or distraught, she will change her behavior instantaneously to respond more empathetically.
The third era of AI
Nadia is a product of the latest era of artificial intelligence, a period marked by the proliferation of intelligent virtual assistants and robots with specific skill sets. Netflix and Pandora brought us the first wave of AI, which defined the curation era. Next, Siri brought us the voice interface. Now, in addition to developments like Nadia, we are seeing virtual assistants such as Alexa, Clara, and x.ai.
This great awakening of AI is fueled by super fast computers, powerful software, greater connectivity, and the Internet of Things. At the same time, AI advancements such as deep learning and neural networks (computational models that function much like biological brains) are expanding the capacity for machines to be more like humans. The capabilities of these neural networks are also creating a new reliance on artificial intelligence that goes beyond the completion of mundane tasks. AI technologies are now tasked with filling important roles in modern communities.
Juniper Research found that chatbots alone could save businesses $8 billion a year by 2022, with health care and banking benefiting most. Health care is particularly active in creating widespread AI solutions for the general public. One interesting example is a project from Stanford University. Researchers at the school are training an algorithm to identify skin cancer, one of the most common types of cancer in humans. The Stanford scientists loaded an algorithm with nearly 130,000 skin-lesion images that represented more than 2,000 diseases to test whether the computer could distinguish harmless moles from malignant melanomas and carcinomas. It did so with surprising accuracy, performing as well as a panel of 21 board-certified dermatologists. The team plans to make the system available on smartphones in the future.
Scrambling to claim a stake in AI
As the value and need for AI increases, so do the investments in the space. Nearly 140 private companies developing AI technology have been acquired since 2011, with 40 acquired in 2016 alone. AI startups raised more than $5 billion worldwide in 2016, which marks a five-year high. Google has adopted an AI-first strategy for its business categories. Apple, Amazon, Facebook, and Salesforce have all jumped into the AI race as well. Other major tech giants competing in the space include General Electric and Samsung.
Major companies are already implementing AI in many consumer-facing products and services. Here are just a few of the areas where companies have created consumer touchpoints with the technology:
Auto manufacturing: In the automotive industry, AI is helping self-driving cars communicate with one another by sharing data and information about the infrastructure and traffic conditions around them. Apple CEO Tim Cook recently called the challenge of autonomous vehicles “the mother of all AI projects.”
Food and beverage: London-based IntelligentX Brewing Co. uses machine-learning algorithms to automatically analyze customer feedback on its bottled beers. This influences how its human brewers create new products targeted to drinkers’ rapidly changing tastes.
Manufacturing and logistics: Last holiday season we saw an interesting innovation by Amazon: The retailer employed 45,000 robots alongside human workers in 20 fulfillment centers. This number was up from 30,000 in 2015.
Travel: Companies like Boxever and John Paul are leveraging machine learning and AI to enhance customer experiences by better anticipating needs and providing more engaging interactions.
Retail: San Francisco-based personalized clothing e-tailer Stitch Fix uses software to design apparel. AI systems process trillions of possible combinations of patterns, cuts, and colors, factoring in customer purchasing behavior and information about the latest fashion trends to design new offerings. And as VentureBeat previously reported, Neiman Marcus’ app implements AI to help customers find the products they’re looking for. The app lets customers submit photos of items they like and get suggestions for similar items from the store’s stock.
According to Accenture, artificial intelligence could double annual economic growth rates by 2035 and boost labor productivity by up to 40 percent. Productivity may seem like a vague descriptor, but look at health care. Advancements in productivity from helpful assistants such as Ellie and Nadia result in better preventative health care, diagnoses, treatment, and health outcomes. That’s how bots are helping to build a better world.
Kal Patel is senior vice president of digital health at Flex, a sketch-to-scale solutions provider that designs and builds intelligent products for a connected world.
Artificial intelligence is at the root of several entirely new platforms on which customers and companies can interact. Voice, augmented reality, and chatbots are powered by natural language processing, computer vision, and machine learning algorithms. Each technology offers considerable opportunities for companies to deliver a more personal, useful, and relevant service to their customers.
Conversational interfaces are already here
Voice-controlled user interfaces have been around since 1952 when Bell Labs produced Audrey, a machine that could understand spoken numbers. But the current wave of voice technology was started by Amazon just a couple of years ago.
In 2015, Amazon launched the Echo, which introduced its AI-powered voice service, Alexa. At the time, the general response was one of confusion and frustration. As Farhad Manjoo, The New York Times’ tech columnist, wrote at the time, “If Alexa were a human assistant, you’d fire her, if not have her committed.”
But in the past two years, a lot has changed. Today, the Echo is recognized as a product that is leading a major shift in how humans engage with technology — and, by extension, how customers engage with brands.
It’s taken more than six decades, but increasing processing power and advances in AI now have technology giants locked in an arms race to create the dominant voice-based assistant. Key areas of focus include machine learning, self-improving algorithms, and speech recognition and synthesis for developing conversational voice interfaces.
Voice can deliver better customer experiences
As the technology improves, the opportunity for companies to use voice to improve customer relationships grows.
Via an Alexa skill (Amazon’s term for an Alexa app), home cooks can ask for advice from Campbell’s Soup, shoppers can pay their Capital One credit card bills, and BMW drivers can check fuel levels remotely. Alexa, of course, is not alone. Apple Siri, Microsoft Cortana, Google Assistant, and other voice-enabled platforms are vying for attention.
For example, Xfinity’s latest TV remote is voice-enabled; Samsung Bixby controls a phone with voice commands; and Ikea is considering integrating voice-enabled AI services into its furniture.
Customer-focused companies must consider three areas in which voice can have an impact on their relationship with their customers.
More personality leads to deeper relationships: By its very nature, voice technology allows brands to move from text-based interactions with customers to something that feels more human. However, there is a high bar to meet. If customers feel they’re engaging with something closer to a “real person,” their expectations will change. If a conversational voice assistant makes a mistake or loses the context, it will be important for human backup to intercede. In addition, injecting an ambient conversational intelligence into people’s lives and homes will require deeper levels of trust that an individual’s privacy won’t be violated.
More engagement leads to more data, which gives companies further opportunities to understand their customers: Customers now expect omnichannel service, meaning they take for granted that companies will interact with and respond to them across any and all channels, including voice. From a company’s perspective, those voice interfaces can provide a rich additional set of data on its customer interactions. Companies will be able to use phrasing, tone, accent, and speed of delivery to learn far more about their customers than ever before. More data means companies can get better at understanding customer intent and attitude, such that they can take proactive steps to optimize the customer experience.
Voice presents opportunities for new types of engagement: Customers increasingly expect companies to respond to their queries immediately, whether during business hours or not. Voice and AI-powered conversational technology can help companies measure up to those expectations.
Intelligent conversational interfaces allow companies to scale up their capacity to engage with customers. The result is reduced customer service hold times, faster resolution of simple issues, and triage of complex questions before they are directed to the appropriate department. Intelligent, personalized voice-enabled assistants could also help health care companies scale “virtual medicine” and in-home care, and they could give financial services companies the capacity to handle customer service and provide financial advice at scale.
Voice is the most natural interface for humans. As conversational interfaces continuously learn, become smarter, and grow more aware of each individual’s preference, they will become more valuable in augmenting the customer experience and building deeper relationships with brands.
Clement Tussiot is director of product management at Salesforce Service Cloud, which delivers customer service software in the cloud.