AI could help solve the world’s healthcare problems at scale

In a world with limited doctors, emerging diseases and superbugs, and sharply rising healthcare costs, how can we successfully tackle healthcare problems at scale?
This is just one of the critical challenges India’s explosive startup community hopes to solve by implementing AI in new and innovative ways to serve the needs of 1.324 billion citizens. This is a feat that carries huge implications for the US and other healthcare ecosystems around the globe.

To understand how dire the situation is, it’s worth considering India’s health paradox. The country’s deep demographic dividend — which occurs when the majority of a country’s population consists of working-age individuals — is driving rapid and unprecedented growth, but it is also a ticking time bomb. With an average age of 27, India has one of the youngest and most educated populations in the world. Since 1991, this phenomenon has fueled approximately 7% annual growth, produced new goods and services, and reduced dependents in the economy.

But in order to keep reaping the benefits of this dividend, India’s young population needs to have access to quality nutrition and healthcare. In addition, as the dividend declines (as we are witnessing in China), the country will need new infrastructure in place to care for its aging population. And unfortunately, the infrastructure that is necessary doesn’t exist today.

The doctor-to-patient ratio in India is one of the worst in the world, with just 0.2 doctors for every 1,000 Indians (for comparison, the US has 1.1 doctors for every 1,000 people). Modern medical facilities — and as a result, doctors — are heavily concentrated in urban areas. And beyond heart disease and cancer, pollution takes its own toll: air quality in Delhi, for instance, recently got so bad that it was deemed equivalent to smoking 44 cigarettes per day.

The fundamental reason behind India’s healthcare issues is resource scarcity. India needs more medical facilities and more medical expertise, and both of these require time and billions of dollars to develop. But such resources are not easily obtainable, so we must consider other ways to dramatically increase access to existing resources in an effective and inexpensive way.

This is where AI has the potential to reshape India’s healthcare problem. Manu Rekhi, Managing Director of Inventus, says, “Indian AI platform companies are building upon two decades of India’s IT industry expertise. They are supercharging how software and human intelligence can partner to create new human-in-the-loop AI systems for global markets as well as the bottom of the pyramid.”

Indeed, a number of Indian startups have applied deep AI expertise to move the needle on specific health conditions and diseases. In some cases, these companies offer technology and distribution opportunities that attract Fortune 500 giants to partner with them, both for the Indian market and globally.

One such company is Tricog Health, a startup handpicked by GE’s healthcare accelerator program for its cloud-based cardiac diagnosis platform. Coronary heart disease is increasingly prevalent in India, having escalated from causing 26% of adult deaths in 2003 to 32% in 2013. Tricog increases access to cardiac care across 340 cities in 23 states, including some of the most remote locations in India. The company’s platform collects physiological data and ECGs from medical devices in the field, then uses specialized AI to process the data in real time and provide a diagnosis to the cardiologist. The cardiologist then reviews and recommends next steps to the GP or nurse in the field instantaneously via the Tricog mobile app. Using Tricog’s AI engine, a few specialists can diagnose over 20,000 patients.

Another startup, Bengaluru-based Aindra Systems, is using AI to tackle cervical cancer, the second most common cancer among Indian women between the ages of 15 and 60. In fact, India accounts for a whopping one-third of global cervical cancer incidence. Aindra’s solution can detect cervical cancer in its early stages and measurably increase the odds of survival. It boosts the productivity of the pathologists who screen cervical cancer samples, who would otherwise need to manually examine each sample and flag high-probability cases to an oncologist for further review.

Adarsh Natarajan, Founder and CEO of Aindra Systems, says “Our vision is to implement mass cervical cancer screening using AI, and help the 330 million Indian women in the at-risk age bracket. With early detection, up to 90% of deaths can be avoided with appropriate treatment. Aindra’s computational pathology platform includes an affordable and portable, ‘point-of-care’ cervical cancer screening device to automate deep learning analysis and bring down the screening time significantly to help detect cancer at an early stage.”

The AI boom in healthcare is just starting, and the up-and-coming list of players is endless. Niramai is working on early detection of breast cancer. Ten3T is providing remote health monitoring services via AI to detect anomalies and alert the doctor. HealthifyMe, a Bangalore startup, is working on lifestyle diseases like obesity, hypertension, and diabetes. With its AI-enabled nutrition coach, Ria, HealthifyMe brings the best of elite nutrition expertise with AI in the loop.

And of course, global corporate leaders like Google bring their capabilities to India as well. Google recently partnered with Aravind Eye Hospitals to use image recognition algorithms to detect early signs of diabetic retinopathy, an eye disease that can cause blindness in diabetics if not treated early. Aravind Eye Hospitals is the largest eye care group in the world, having treated 32 million patients and performed 4 million surgeries. They have provided 128,000 retinal images to Google that have been invaluable for the application of AI to detect diabetic retinopathy in 415 million at-risk diabetic patients worldwide.

With a bevy of solutions on the rise, India is poised to leapfrog some of the key barriers of conventional healthcare, which of course has profound implications for healthcare delivery in other countries, including the US. With rising costs and unfavorable government policies, an increasing number of people are priced out of access. The burden on emergency rooms across the country is increasing as more people are unable to afford preventative care at primary care centers. AI-assisted technologies could reduce the costs in the US using the same mechanism — affordably scaling access to millions of people.

These startup-driven innovations and global platforms are just the tip of the iceberg. AI can ultimately become a force multiplier in bringing preventative healthcare to anyone and everyone, rather than just urban or affluent communities. As you’ll often hear AI experts say, “more data beats better algorithms.” In other words, given a large enough training dataset, even simpler algorithms can generate accurate, valuable predictions for both payers and providers. With 1.3 billion citizens, India has the potential to provide the vast amounts of data needed to improve the accuracy and precision of these algorithms and empower both startups and large companies to help solve healthcare problems around the world.

Pranav Deshpande works with Startup Bridge at Stanford, an annual conference held in December at Stanford where top technology innovators from India and Silicon Valley build strategic partnerships to innovate for the world.  

Nvidia and Nuance team up on AI for radiology

Nvidia and Nuance announced a partnership today that’s aimed at helping healthcare institutions tap into artificial intelligence. The Nuance AI Marketplace for Diagnostic Imaging is, as the name suggests, designed to provide a hub for medical professionals to pick up new tools for analyzing the results of x-ray imaging and other radiology tools.

AI developers will be able to release the models that they’ve trained through Nuance’s PowerShare network, which will then allow participating medical institutions and radiology groups to subscribe. After subscribing, Nuance’s PowerScribe software will automatically apply the AI algorithm in relevant situations.

Nvidia’s Digits developer tool will be updated to provide developers with a way to publish their algorithms directly to Nuance PowerShare, so it’s easier for people to get their applications into the marketplace.

The deal is designed to make it easier for medical institutions to benefit from the rise of machine learning by offering access to trained models. What’s more, the institutions developing these models can benefit from sharing them with other radiologists to drive the overall state of the field forward.

Medical imaging is a tough field to tackle with machine learning, since it encompasses many different sections of the body, along with different machines that output different results. (A static x-ray film is quite different from a video of an ultrasound, for example.) On top of that, radiologists look for different objects in the resulting images or videos depending on the condition they are investigating.

With that in mind, Kimberly Powell, the vice president for healthcare at Nvidia, said that she expects multiple algorithms working in concert will be necessary to provide even a single diagnosis through a single test. The marketplace is supposed to support that vision by making it easier for medical professionals to orchestrate the use of multiple systems.

The news comes alongside another partnership, between Nvidia and GE Healthcare, to use the chipmaker’s hardware to power improved CT scanners and ultrasound machines, as well as an analytics platform.

AI Weekly: There are more pressing problems than god-like AI

A religion based around artificial intelligence is in the news again, this time helmed by Anthony Levandowski, a former member of Google’s self-driving car team. His argument is that humans will eventually create AI that is more intelligent than we are, making it functionally god-like, so we might as well start planning for that eventuality.

His thinking about the rise of super intelligent machines runs parallel to that of Elon Musk, who has been trumpeting the risks of artificial superintelligence on Twitter and in public appearances. (At one point, the Tesla CEO said that threats from AI posed a greater risk than North Korea.)

But while talking about an AI god grabs headlines, we have more pressing problems to consider. The AI experts I get to speak with aren’t concerned about an artificial superintelligence suddenly cropping up in the next few months and taking over the world.

Meanwhile, there’s plenty to be concerned about when it comes to immediate and unintended consequences of the machine learning techniques already available. There’s been no shortage of ink spilled over how the algorithms behind Facebook, Google, and the like are influencing our daily lives, and even our elections. And algorithmic bias continues to plague many other systems we use on a regular basis.

Take the case of speech recognition for virtual assistants like Alexa and Siri. As a white dude who grew up in California, I have little trouble conversing with those systems, but friends and acquaintances with non-standard accents are far less lucky. That may seem like a moderate source of frustration at worst, but imagine those systems becoming portals to key services, discounts, or other functionality that’s otherwise unavailable.

In earlier eras, structural biases that didn’t involve revolutionary technology had far-reaching effects. Consider the impact of racial bias in the design of expressways and parkways in the New York metropolitan area. And photographers are still contending with the legacy of decisions that made film better suited to capturing people with lighter skin.

It stands to reason that decisions we make about AI systems today, even if their intelligence is far from godlike, could have similarly outsized impacts down the road.

As always, for AI coverage, send news tips to Blair Hanley Frank and Khari Johnson and guest post submissions to Cosette Jarrett — and be sure to bookmark our AI Channel.

Thanks for reading,

Blair Hanley Frank

AI Staff Writer

P.S. Please enjoy this video: Where AI is today and where it’s going

From the AI Channel

Alexa and Google Assistant should tell you when the next bus is coming

Rarely a week goes by without news of a new feature for AI assistants like Alexa, Bixby, or Siri. It’s a fast-moving competition between tech giants like Amazon, Samsung, and Apple, but despite billions in AI investment and everyone from SoftBank to Will.I.Am entering this space, critical or easily accomplished tasks for the uberbots sometimes aren’t immediately addressed.

Read the full story here.

AISense wants to deliver total recall by transcribing all your conversations

There’s a new machine learning company on the block, with big ambitions to help people remember every conversation they’ve ever had. Called AISense, the company operates a voice transcription system that’s designed to work through long conversations using machine learning and provide users with a full text record of what was said.

Read the full story here.

Google gives developers more tools to make better voice apps

Google Assistant received some major upgrades in recent days, and today Google Assistant product manager Brad Abrams announced a series of changes to help developers make voice apps that interact with Google’s AI assistant: more expressive voices, push notifications, and new subcategories for the Assistant’s App Directory.

Read the full story here.

PullString debuts Converse, a simple Alexa skills maker for marketers

PullString today announced plans to launch a simplified version of its platform, this one aimed at professionals who want to quickly design and launch voice apps. A marked departure from the company’s more complicated Author platform, PullString’s Converse will be available November 27 to coincide with AWS re:Invent.

Read the full story here.

Microsoft’s Visual Studio gets new tools to help developers embrace AI

Microsoft announced today that its Visual Studio integrated development environment is getting a new set of tools aimed at easing the process of building AI systems.

Visual Studio Tools for AI is a package that’s designed to provide developers with built-in support for creating applications with a wide variety of machine learning frameworks, like Caffe2, TensorFlow, CNTK, and MXNet.

Read the full story here.

Google launches TensorFlow Lite developer preview for mobile machine learning

Google today launched TensorFlow Lite to give app developers the ability to deploy AI on mobile devices. The mobile version of Google’s popular open source AI program was first announced at the I/O developer conference.

Read the full story here.

Beyond VB

Inside the first church of artificial intelligence

Anthony Levandowski makes an unlikely prophet. Dressed Silicon Valley-casual in jeans and flanked by a PR rep rather than cloaked acolytes, the engineer known for self-driving cars—and triggering a notorious lawsuit—could be unveiling his latest startup instead of laying the foundations for a new religion. But he is doing just that. (via Wired)

Read the full story.

Where self-driving cars go to learn

Three weeks into his new job as Arizona’s governor, Doug Ducey made a move that won over Silicon Valley and paved the way for his state to become a driverless car utopia. (via The New York Times)

Read the full story.

AI could help reporters dig into grassroots issues once more

Last year’s divisive American presidential race highlighted the extent to which mainstream media outlets were out of touch with the political pulse of the country. (via MIT Technology Review)

Read the full story.

AI’s latest application: wasting scammers’ time

Schadenfreude is one of life’s simplest pleasures — especially when the victim in question is an email scammer. That’s the service Netsafe’s Re:scam provides. Simply forward your Nigerian prince emails to the service and it’ll use machine learning to generate conversations to waste the nefarious Nancy’s time. (via Engadget)

Read the full story.

A New Algorithm Can Spot Pneumonia Better Than a Radiologist

“A new arXiv paper by researchers from Stanford explains how CheXNet, the convolutional neural network they developed, achieved the feat. CheXNet was trained on a publicly available data set of more than 100,000 chest x-rays that were annotated with information on 14 different diseases that turn up in the images. The researchers had four radiologists go through a test set of x-rays and make diagnoses, which were compared with diagnoses performed by CheXNet. Not only did CheXNet beat radiologists at spotting pneumonia, but once the algorithm was expanded, it proved better at identifying the other 13 diseases as well.”

SciGraph publishes 1 billion facts as Linked Open Data

Last Thursday we reached a major milestone for the SciGraph project: nearly 1 billion facts (= RDF statements) have been released as Linked Open Data, most of it under a CC-BY license! This data release follows and improves on the previous data release (February 2017) which included metadata for all journal articles published in the last 5 years.

Continue reading “SciGraph publishes 1 billion facts as Linked Open Data”

The robots are coming — the promise and peril of AI, some questions

I’m at the Charleston conference, my first time, and we had a panel discussion this morning talking about AI.

On the panel were:

Heather Staines, Director of Partnerships

Peter Brantley, Director of Online Strategy, UC Davis

Elizabeth Caley, Chief of Staff, Meta, Chan Zuckerberg Initiative

Ruth Pickering, Co-founder and Chief Strategy Officer, Yewno

and myself. It was a pleasure to be on a panel with these amazing people. Continue reading “The robots are coming — the promise and peril of AI, some questions”

Scoring stories to make better recommendation engines for news

by Frederic Filloux

An early version of the Netflix recommendation engine. Photo: Reed Hastings (and NASA)

News media badly need improved recommendation engines. Scoring the inventory of stories could help. This is one of the goals of the News Quality Scoring Project. (Part of a series.)

For news media, recommendation engines are a horror show. The NQS project I’m working on at Stanford forced me to look at the way publishers try to keep readers on their property — and how the vast majority conspire to actually lose them.

I will resist including the terrible screenshots I collected for my research… Instead, we’ll look at practices that prevent a visitor from continuing to circulate inside a website (desktop or mobile):

— Most recommended stories are simply irrelevant. Automated, keyword-based recommendations yield poor results: merely mentioning a person’s name, or various named entities (countries, cities, brands) too often digs up items that have nothing to do with the subject matter. In other words, without a relevancy weight attached to keywords in the context of a story, keyword-based recommendations are useless. Unfortunately, they’re widespread.

Similarly, little or no effort is made to disambiguate potentially confusing words: on a major legacy media site, I just saw an op-ed about sexual harassment that referred to Harvey Weinstein linked to… a piece on Donald Trump’s dealings with Hurricane Harvey; the article was also linked to a piece on Amazon’s takeover of the retail industry, for no reason other than a random coincidence: the articles happened to mention Facebook.

— Clutter. Readers need a minimum of guidance, and finding the right way to recommend stories (or videos) can be tricky. Too many modules on a page, whatever they are, will render the smartest recommendation engine useless.

— Most recommendation systems don’t take into account basic elements such as the freshness or length of a related piece. Repeatedly direct your reader toward a shallow three-year-old piece and she may never click on your suggestions again.

— Reliance on Taboola or Outbrain. These two are the worst visual polluters of digital news. Some outlets use them to recommend their own production, but in most cases, through “Elsewhere on the web” headers, they send the reader to myriad clickbait sites. This comes with several side effects: readers go away, their behavioral data goes with them, and the best designs are disfigured. For the sake of a short-term gain (these two platforms pay a lot), publishers give up their ability to retain users and leak tons of information in the process — information that Taboola, Outbrain, and their ill ilk resell to third parties. Smart move indeed.
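The relevancy point above (a name mentioned in passing versus a name central to a story) can be illustrated with plain TF-IDF weighting. This is a hypothetical sketch, not any publisher’s actual system:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Turn tokenized documents into TF-IDF weight dicts.

    A term mentioned once in passing gets a low weight; a term central
    to a story (frequent there, rare elsewhere) gets a high one."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "harvey weinstein harassment allegations hollywood".split(),
    "hurricane harvey flooding texas relief".split(),
    "weinstein company harassment fallout hollywood reaction".split(),
]
vecs = tfidf_vectors(docs)
```

Raw keyword matching would link the Weinstein op-ed and the Hurricane Harvey piece on the word “harvey” alone; weighting terms by how central they are to each story ranks the genuinely related follow-up higher.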

I could mention dozens of large media brands afflicted with those ailments. For them, money is not the problem. Incompetence and carelessness are the main culprits. Managers choose not to invest in recommendation engines because they simply don’t understand their value.
. . . . .

Multibillion-dollar businesses are built on large investments in competent recommendation engines: Amazon (both for its retail and video businesses), YouTube and, of course, Netflix.

The last is my favorite. Four years ago, I realized the size and scope of Netflix’s secret weapon, its suggestion system, when reading this seminal Alexis Madrigal piece in The Atlantic.

Madrigal was the first to reveal the number of genres, sub-genres, and micro-genres Netflix uses to describe its film library: 76,897! This entails the incredible task of manually tagging every movie and generating a vast set of metadata, ranging from “forbidden-love dramas” to heroes with a prominent mustache.

Today, after a global roll-out of its revamped recommendation engine (which handles cultural differences between countries), the Netflix algorithm is an invaluable asset, benefiting viewership and subscriber retention. In his technical paper “The Netflix Recommender System: Algorithms, Business Value, and Innovation” (pdf here), Carlos Gomez-Uribe, VP of product innovation at Netflix, says (emphasis mine):

Our subscriber monthly churn is in the low single-digits, and much of that is due to payment failure, rather than an explicit subscriber choice to cancel service. Over years of development of personalization and recommendations, we have reduced churn by several percentage points. Reduction of monthly churn both increases the lifetime value of an existing subscriber and reduces the number of new subscribers we need to acquire to replace canceled members. We think the combined effect of personalization and recommendations save us more than $1B per year.

Granted, the Netflix example is a bit extreme. No news media company can invest $15 million or $20 million in a single year and put 70 engineers to work redesigning a recommendation engine.

For Netflix, it was deemed a strategic investment.

Media should consider that too, especially given declining advertising performance and the growing reliance on subscriptions. Making a user view five pages per session instead of three will make a big difference in Average Revenue Per User (ARPU). It will also increase loyalty and reduce churn in the paid-for model.
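The churn arithmetic behind this argument is easy to sketch. Here is a back-of-the-envelope illustration in Python; the subscription price and churn rates are invented for the example, not Netflix’s or any publisher’s real figures:

```python
def lifetime_value(arpu_monthly: float, churn_monthly: float) -> float:
    """Expected revenue from one subscriber: with a constant monthly churn
    rate, the average subscriber lifetime is 1 / churn months."""
    return arpu_monthly / churn_monthly

# Hypothetical publisher charging $10/month.
base = lifetime_value(10.0, 0.05)    # 5% monthly churn -> $200 lifetime value
better = lifetime_value(10.0, 0.04)  # recommendations cut churn to 4% -> $250
uplift = better / base - 1           # a one-point churn reduction is worth +25%
```

In this toy model, a single percentage-point drop in monthly churn raises the value of every subscriber by a quarter, which is why Netflix can credit personalization and recommendations with more than $1B per year.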

How can scoring stories change that game? Powered by data science, the News Quality Scoring Project is built on a journalistic approach to the quantitative attributes of great journalism. (This part is provided by a great team of French data scientists working for Kynapse, which handles gigantic datasets in the energy and health sectors.)

Let’s consider the ideal attributes of good recommendation engines for news, and see how they can be quantified.

—Relevancy: how a recommended piece relates to the essence of the referential article, as opposed to an incidental mention (which rules out basic keyword systems and the many embarrassing false positives they generate).

—Freshness: the more recent, the better. Sending someone who just read a business story about the digital economy to an old piece makes no sense, as that environment changes fast. Practically, it means an obsolescence weight should be applied to every news item. Except that we need to take into account the following attribute…

—…“Evergreenness”: the evergreen story is the classic piece that will last (nearly) forever. A good example is the Alexis Madrigal piece mentioned above: its freshness index (it was published in January 2014) should exclude it from any automated recommendation, but its quality, the fact that few pieces of journalistic research rival the author’s work, and the resources deployed by the publisher (quantified by the time The Atlantic’s editors gave Madrigal, the number of person-hours devoted to discussing, editing, and verifying the piece) all contribute to a durably high value for the piece.

—Uniqueness: a factor that neighbors “evergreenness,” but with a greater sensitivity to the timeliness of the piece; uniqueness must also be assessed in the context of competition. For example: “We crushed other media with this great reportage about the fall of Raqqa; we did because we were the only ones to have a writer and a videographer embedded with the Syrian Democratic Forces.” Powerful and resource-intensive as that article was, its value will inexorably drop over time.

—Depth: a recommendation engine has no business digging up thin content. It should only lift from the archives pieces that carry comprehensive research and reporting. Depth can be quantified by length, by information density (a variety of sub-signals measure just that) and, in some cases, by the authorship features of a story, i.e. multiple bylines and mentions such as “Additional reporting by…” or “Researcher…”. This tagging system is relatively easy to implement in the closed environment of a publication but, trust me, much harder to apply to the open web!
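As a thought experiment, the four attributes above could be folded into a single score along these lines. The weights, the 30-day half-life, and the way evergreenness floors the freshness decay are all invented for illustration; the actual NQS model is not public:

```python
import math

def story_score(relevancy, age_days, evergreen, uniqueness, depth,
                half_life_days=30.0):
    """Combine the four attributes (each normalized to [0, 1]) into one score.

    Freshness decays exponentially with age, but an evergreen story keeps a
    floor proportional to its evergreen rating, so a classic stays eligible."""
    freshness = math.exp(-math.log(2) * age_days / half_life_days)
    timeliness = max(freshness, evergreen)
    return 0.4 * relevancy + 0.3 * timeliness + 0.15 * uniqueness + 0.15 * depth

# A fresh, highly relevant piece scores well...
fresh = story_score(relevancy=0.9, age_days=1, evergreen=0.2, uniqueness=0.5, depth=0.6)
# ...a stale, thin one does not...
stale = story_score(relevancy=0.9, age_days=1000, evergreen=0.1, uniqueness=0.3, depth=0.2)
# ...while a deep evergreen classic resists the freshness decay entirely.
classic = story_score(relevancy=0.8, age_days=1400, evergreen=0.9, uniqueness=0.9, depth=0.9)
```

Even this toy version captures the key asymmetry: age alone disqualifies the shallow piece, while the evergreen classic remains one of the strongest candidates despite being nearly four years old.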

The News Quality Scoring platform I’m working on will vastly improve the performance of recommendation engines. By being able to come up with a score for each story (and eventually each video), I want to elevate the best editorial a publication has to offer.

=> Next week, we’ll look at the complex process of tagging large editorial datasets in a way that is comparable enough to what Netflix does. This will shed light on the inherent subjectivity of information and on the harsh reality of unstructured data (unlike cat images, news is a horribly messy dataset). We’ll also examine how to pick the right type of recommendation engine.
Stay tuned.

To get regular updates about the News Quality Scoring Project and participate in various tests we are going to make, subscribe now:

Scoring stories to make better recommendation engines for news was originally published in Monday Note on Medium, where people are continuing the conversation by highlighting and responding to this story.

A Robot Wrote This?

Digital Journalism, Vol. 0, Iss. 0
“Two experimental studies were conducted that examined the effect of purported machine authorship on perceptions of news credibility. Study One (N = 129) revealed that news attributed to a machine is perceived as less credible than news attributed to a human journalist. Study Two (N = 182) also observed negative effects of machine authorship through the indirect pathway of source anthropomorphism and negative expectancy violations, with evidence of moderation by prior recall of robotics also observed.”

The Washington Post’s robot reporter has published 850 articles in the past year

It’s been a year since The Washington Post started using its homegrown artificial intelligence technology, Heliograf, to spit out around 300 short reports and alerts on the Rio Olympics. Since then, it’s used Heliograf to cover congressional and gubernatorial races on Election Day and D.C.-area high school football games, producing stories like this one and tweets like this:

Continue reading “The Washington Post’s robot reporter has published 850 articles in the past year”

The New York Times, with a little help from automation, is aiming to open up most articles to comments

The New York Times’ strategy for taming reader comments has for many years been laborious hand curation. Its community desk of moderators examines around 11,000 individual comments each day, across the 10 percent of total published articles that are open to commenting.

For the past few months, the Times has been testing a new tool from Jigsaw — Google parent Alphabet’s tech incubator — that can automate a chunk of the arduous moderation process. On Tuesday, the Times will begin to expand the number of articles open for commenting, opening about a quarter of stories on Tuesday and shooting for 80 percent by the end of this year. (Another partner, Instrument, built the CMS for moderation.)

“The bottom line on this is that the strategy on our end of moderating just about every comment by hand, and then using that process to show readers what kinds of content we’re looking for, has run its course,” Bassey Etim, Times community editor, told me. “From our end, we’ve seen that it’s working to scale comments — to the point where you can have a good large comments section that you’re also moderating very quickly, things that are widely regarded as impossible. But we’ve got a lot left to go.”

These efforts to improve its commenting functions were highlighted in the Times announcement earlier this month about the creation of a reader center, led by Times editor Hanna Ingber, to deal specifically with reader concerns and insights. (In the same announcement, it let go Liz Spayd and eliminated its public editor position.)

Nudging readers towards comments that the Times “is looking for” is no easy task. Its own guidelines, laid out in an internal document and outlining various rules around comments and how to take action on them, have evolved over time. (I took the Times’ moderation quiz — getting only one “correct” — and at my pace, it would’ve taken more than 24 hours to finish tagging 11,000 comments.)

Jigsaw’s tool, called Perspective, has been fed a corpus of Times comments already tagged by human editors. Human editors then trained the algorithm during the testing phase, flagging the moderation mistakes it made. In the new system, a moderator can evaluate comments based on their likelihood of rejection and check that the algorithm has properly labeled comments that fall into a grayer zone (comments with a 17 to 20 percent likelihood of rejection, for instance). The community desk team can then set a rule to automatically let through all comments that fall between, say, 0 and 20 percent.
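A hedged sketch of that threshold logic in code. The 20 percent cutoff comes from the example above; the function name, the hold threshold, and the three-way routing are invented for illustration and are not Perspective’s actual API:

```python
def route_comment(rejection_likelihood: float,
                  auto_approve_below: float = 0.20,
                  hold_above: float = 0.95) -> str:
    """Route a comment by the model's estimated likelihood of rejection.

    Low-risk comments publish automatically; near-certain violations are
    held; everything in between goes to a human moderator."""
    if rejection_likelihood < auto_approve_below:
        return "publish"
    if rejection_likelihood > hold_above:
        return "hold"
    return "human_review"

route_comment(0.05)  # well under the 20% rule: publishes automatically
route_comment(0.50)  # gray zone: queued for a human moderator
```

Per section, the desk could tighten auto_approve_below to 0 to keep a section fully human-moderated, or relax it for sections with historically low rejection rates.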

“We’re looking at an extract of all the mistakes it’s made, evaluate what the impact of each of those moderating mistakes might be on the community and on the perceptions of our product. Then based on that, we can choose different forms of moderation for each individual section at the Times,” Etim said. Some sections could remain entirely human-moderated; some sections that tend to have a low rate of rejection for comments could be automated.

Etim’s team will be working closely with Ingber’s Reader Center, “helping out in terms of staffing projects, with advice, and all kinds of things,” though the relationship and roles are not currently codified.

“It used to be when something bubbled up in the comments, maybe we’d hear repeated comments or concerns about coverage. You’d send that off to a desk editor, and they would say, ‘That’s a good point; let’s deal with this.’ But the reporter is out reporting something else, then time expires, and it passes,” Etim said. “Now it’s at the point where when things bubble up, [Ingber] can help us take care of it in the highest levels in the newsroom.”

I asked Etim why the Times hadn’t adopted any of the Coral Project’s new tools around comment moderation, given that Coral was announced years ago as a large collaborative effort between The Washington Post, the Times, and Mozilla. It’s mostly a matter of immediate priorities, according to Etim, and he can see the Times coming back to the Coral Project’s tools down the line.

“The Coral Project is just working on a different problem set at the moment — and the Coral Project was never meant to be creating the New York Times commenting system,” he said. “They are focusing on helping most publishers on the web. Our business priority was, how do we do moderation at scale? And for moderation at our kind of scale, we needed the automation.

“The Coral stuff became a bit secondary, but we’re going to circle back and look at what it has in the open source world, and looking to them as a model for how to deal with things like user reputation,” he added.

Photo of chattering teeth toy by Wendy used under a Creative Commons license.

You can call it hype — but Watson is getting marketers ROI

IBM CMO Rashmy Chatterjee is a featured speaker at MB 2017, July 11-12 in San Francisco. She’s among dozens of others from some of the most iconic brands who will be sharing how marketers are using AI within the broad marketing ecosystem to stay ahead. See the full roster of speakers here.

“AI, or cognitive computing, absolutely should be table stakes today,” says Rashmy Chatterjee, IBM’s North America CMO. “In the future, cognitive computing (AI) won’t even be an option anymore. You’ll use it as a matter of course.”

In her upcoming talk at MB 2017, “Applied AI for Real ROI,” she’ll break down the real-world examples that prove her statement out — like the five-fold increase in conversions that BMO (Bank of Montreal) achieved using strategies fueled by Watson, IBM’s powerful AI platform.

“BMO uses our client experience tools, and their conversion rate on mobile — from customer interest to actual business — went up from 10 percent to 50 percent,” Chatterjee says. “With American Eagle, we’ve seen an almost 20 percent increase in mobile traffic because AI implementation dramatically increased their understanding of customer issues.”

Chatterjee has an unshakable focus on making IBM’s clients successful. She explains that this goal is achieved by helping brands better understand their customers, discern the context of actions and queries, provide multiple options to engage, respond quickly to feedback in a personalized way — and, ultimately, deliver a superior experience.

Enter Watson, IBM’s poster child for AI, which now has APIs that can discern tone, understand personality quirks, and learn where and how the client is seeking to be engaged, with real-time input from customers that is immediately actionable.

“Watson has a set of capabilities, and with each of them the goal is: Can we make this experience better for the client, and can we make them more successful in what they want to do?” Chatterjee says.

For instance, the Tone Analyzer capability uses linguistic analysis to detect communication tones in text to understand conversations and communications — allowing brands to respond to customer needs, worries and wants. Or to better analyze and understand what’s really behind the thousands of comments customers leave scattered across social media.
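
The shape of the problem is easy to illustrate. The toy scorer below is not Watson’s Tone Analyzer — the real service uses full linguistic analysis — it only shows the basic idea of mapping free text to tone scores a brand can act on; the lexicon and tone names are invented for the example.

```python
# Tiny illustrative tone lexicon (hypothetical cue words, not Watson's).
TONE_LEXICON = {
    "frustrated": {"broken", "waiting", "terrible", "refund", "worst"},
    "satisfied": {"love", "great", "thanks", "perfect", "happy"},
}

def tone_scores(text):
    """Count lexicon cue words per tone in a piece of customer text."""
    words = {w.strip(".,!?") for w in text.lower().split()}
    return {tone: len(words & cues) for tone, cues in TONE_LEXICON.items()}

comment = "Still waiting on a refund for the broken charger. Terrible."
print(tone_scores(comment))   # frustration cues dominate
```

A support team could route comments whose “frustrated” score crosses a threshold to a human agent, which is the kind of responsiveness Chatterjee describes.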

“We also use Tone Analyzer for customer experience assessments and customer support, and with this information, we keep getting better,” she adds. “We’re constantly asking, what does it mean, and how can we respond to it better?”

Then there’s Watson’s Personality Insights service which extracts personality characteristics based on a variety of written communications using customers’ social media entries, enterprise data, and other digital communications.

Try a demo here and Watson may tell you that “you are helpful and analytical” and “your choices are driven by a desire for well-being” (along with a much fuller description). Or you may learn that “you are excitable and adventurous, eager to try new things” and that “you tend to speak up and take charge of situations.”

By learning who their customers are as individuals, companies can improve acquisition, retention, and engagement through highly personalized interactions.

Chatterjee points out that mobile is where these AI capabilities have the most potential to shine. Of course, the bar is high. Customers already expect to be able to do most transactions on mobile, and take things like location capabilities for granted.

“But what is the next frontier?” asks Chatterjee. “What are the next things we can do — through emotions, through tone, through personalities, through so many other technological capabilities that will create an even more differentiated client experience?”

Chatterjee will be speaking about how her teams are already making progress on this next frontier — and where it’s going next. You can register for MB 2017 right here.

Artificial intelligence can’t solve every problem in the media, but it can take care of these

Machine learning and artificial intelligence are slowly but surely gaining ground in newsrooms around the world, but how are they shaping daily work at one of the biggest news agencies in the world?

Francesco Marconi, Manager of Strategy and Corporate Development at the Associated Press in New York, focuses on media strategy in virtual reality, artificial intelligence and data. At the GEN Summit 2017 in Vienna, 21–23 June, he will be part of the panel ‘It’s raining bots: Four best practices to make the most of automation’, discussing the role of machine learning in personalised news alongside co-panelists David Alandete, Managing Editor of El País, and Robert Unsworth of News Republic, with moderator Noriko Tagikuchi.

©Associated Press

How do you see artificial intelligence and immersive technologies shaping the future of news?

Streamlining workflows, taking out grunt work, crunching more data, digging out insights and generating additional outputs are just a few of the mega-wins that have resulted from putting smart machines to work in the service of journalism.

Artificial intelligence can enable journalists to analyse data; identify patterns, trends and actionable insights from multiple sources; see things that the naked eye can’t see; turn data and spoken words into text; text into audio and video; understand sentiment; analyse scenes for objects, faces, text or colours — and more.

Broadly speaking, AI promises to reap many big rewards for journalism in the years to come. Greater speed, accuracy, scale and diversity of coverage are just some of the results media organisations are already seeing.

A robotic camera used by The Associated Press to capture unique images from angles not normally seen by the public.

How will immersive technologies and AI affect a publication’s business model or strategy?

These technologies are opening up new territories and changing journalism in ways no one might have predicted even a few years ago. And they arrive at a time when journalists and media companies are searching for new solutions to the challenges that the digital revolution has imposed on the news business. Not only is it imperative to save time and money in an era of shifting economics, but at the same time, you need to find ways to keep pace with the growing scale and scope of the news itself.

However, artificial intelligence can’t solve every problem. As the technology evolves, it will certainly allow for more precise analyses, but there will always be challenges the technology can’t overcome.

Francesco Marconi

What type of automation can be used optimally in a newsroom? Is there any resistance to adopting it?

AI enables the automation of repetitive tasks such as writing news articles that follow a very “templated” structure. The Associated Press is currently automating earnings reports as well as sports articles. We have increased our output tenfold and reduced the error rate.
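
The “templated” automation Marconi mentions amounts to filling a fixed narrative frame from structured data. The sketch below is a deliberately minimal illustration — the field names and wording are hypothetical, and AP’s production system (built with Automated Insights) selects among many variant templates.

```python
def earnings_story(company, quarter, eps, eps_expected, revenue_m):
    """Fill a fixed narrative template from structured earnings data.

    A minimal sketch of template-driven story generation; real systems
    vary phrasing, handle edge cases, and cite data sources.
    """
    beat = "beat" if eps > eps_expected else "fell short of"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {beat} analyst expectations of ${eps_expected:.2f}. "
        f"Revenue came in at ${revenue_m:,} million."
    )

print(earnings_story("Acme Corp", "Q2", 1.42, 1.35, 512))
```

Because the input is structured, thousands of such stories can be generated per quarter, which is how the tenfold output increase becomes possible.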

AI can also enable journalists to sift through large corpuses of data, text, images and videos. We recently teamed up with MIT to analyse Twitter data pertaining to the American public’s response to US President Donald Trump.

From the Associated Press

In addition to increasing news coverage (automation) and extracting hidden insights from data (augmentation), AI can improve processes such as automatically tagging photos, generating captions for videos and even deploying AI-powered cameras to capture angles not easily available to journalists (which AP did during the Olympics).

This new wave of technological innovation is no different than any other that has come before it. Success still relies on how human journalists implement these new tools. Artificial intelligence is man-made, meaning that all the ethical, editorial and economic influences considered when producing traditional news content still apply in this new age of augmented journalism.

To best leverage and responsibly use artificial intelligence in news, the first step is to understand the technology itself.

How best to use machine learning and automation for news?

Tip 1: Be aware that when technology changes, journalism doesn’t. Artificial intelligence can help augment journalism, but it will never replace journalism. AI might aid in the reporting process, but journalists will always need to put the pieces together and construct a digestible, creative narrative.

Tip 2: Journalists can best leverage AI once they understand the technology. Artificial intelligence is complicated, and there are many ways it can be implemented in a newsroom, but just like any other technology, the more you know about a tool, the more effectively you can use it.

Tip 3: There are ethical considerations inherent in journalism’s use of AI. Again, just because the tools of journalism change, that doesn’t mean the rules of journalism change. As AI works its way into newsrooms, it is important to adhere to our existing standards and ethics.

What trends do you see arise in new technologies for journalism?

These are some of the key trends (in no particular order):

  • Trend 1: Conversational interfaces and distribution of news across voice-enabled devices such as Amazon Alexa and Google Home.
  • Trend 2: Use of blockchain technology to protect and monitor digital content and intellectual property.
  • Trend 3: Messaging bots and automated push alerts to engage readers on platforms such as WhatsApp and Facebook Messenger.
  • Trend 4: Over-the-top video and distribution to new platforms such as Netflix, HBO, Hulu and other major video portals.
  • Trend 5: Emergence of micro-payments and new paid subscription models as a key monetisation strategy.
A VR experience by the Associated Press

What are the most exciting or outstanding VR/MR/AR or AI news projects or initiatives you have seen recently?

Cortico is a nonprofit in collaboration with the MIT Media Lab that applies artificial intelligence and media analytics to map and analyse the public sphere.

In AI, the Associated Press collaborated with Cortico, a media analytics nonprofit recently launched from the Laboratory for Social Machines at the MIT Media Lab, to analyse a large dataset of tweets related to the first 100 days of the new administration using machine learning techniques. The result of this collaboration between AP journalism and MIT data scientists proved both fascinating and insightful, ultimately allowing for a better understanding of President Trump’s activities on Twitter and the subsequent public response to those activities.

In VR, we recently produced an immersive experience exploring invasive species, including specific types of insects, venomous fish and reptiles. This virtual reality experience is hosted on the web and enables participants to explore how non-native species cost the world hundreds of billions of dollars a year. The story also explores how creative, high-tech techniques, with tools like underwater tasers, electrified nets and robots that zap and vacuum up venomous lionfish, may finally be turning the tide.

About Francesco Marconi

Francesco Marconi is AP’s strategy manager and co-lead on automation and AI. He is also a fellow at Columbia’s Tow Center and an affiliate researcher at the MIT Media Lab. His new book, Live Like Fiction, a guide to finding purpose and inspiration through storytelling, will be published this July by Frontier Press.

Artificial intelligence can’t solve every problem in the media, but it can take care of these was originally published in Global Editors Network on Medium, where people are continuing the conversation by highlighting and responding to this story.

New Books on Applications of the Wolfram Language

We’re always excited to see new books that illustrate applications of Wolfram technology in a wide range of fields. Below is another set of recently published books using the Wolfram Language to explore computational thinking. From André Dauphiné’s outstanding geographical studies of our planet to Romano and Caveliere’s work on the geometric optics that help us study the stars, we find a variety of fields served by Wolfram technology.

Application Books Set 1


From Curve Fitting to Machine Learning: An Illustrative Guide to Scientific Data Analysis and Computational Intelligence, second edition

We’re fascinated by artificial intelligence and machine learning, and Achim Zielesny’s second edition of From Curve Fitting to Machine Learning: An Illustrative Guide to Scientific Data Analysis and Computational Intelligence provides a great introduction to the increasingly necessary field of computational intelligence. This is an interactive and illustrative guide with all concepts and ideas outlined in a clear-cut manner, with graphically depicted plausibility arguments and a little elementary mathematics. Exploring topics such as two-dimensional curve fitting, multidimensional clustering and machine learning with neural networks or support vector machines, the subject-specific demonstrations are complemented with specific sections that address more fundamental questions like the relation between machine learning and human intelligence. Zielesny makes extensive use of Computational Intelligence Packages (CIP), a high-level function library developed with Mathematica’s programming language on top of Mathematica’s algorithms. Readers with programming skills may easily port or customize the provided code, so this book is particularly valuable to computer science students and scientific practitioners in industry and academia.

The Art of Programming in the Mathematica Software, third edition

Another gem for programmers and scientists who need to fine-tune and otherwise customize their Wolfram Language applications is the third edition of The Art of Programming in the Mathematica Software, by Victor Aladjev, Valery Boiko and Michael Shishakov. This text concentrates on procedural and functional programming. Experienced Wolfram Language programmers know the value of creating user tools. They can extend the most frequently used standard tools of the system and/or eliminate its shortcomings, complement new features, and much more. Scientists and data analysts can then conduct even the most sophisticated work efficiently using the Wolfram Language. Likewise, professional programmers can use these techniques to develop more valuable products for their clients/employers. Included is the MathToolBox package with more than 930 tools; their freeware license is attached to the book.

Introduction to Mathematica with Applications

For a more basic introduction to Mathematica, readers may turn to Marian Mureşan’s Introduction to Mathematica with Applications. First exploring the numerous features within Mathematica, the book continues with more complex material. Chapters include topics such as sorting algorithms, functions—both planar and solid—with many interesting examples and ordinary differential equations. Mureşan explores the advantages of using the Wolfram Language when dealing with the number pi and describes the power of Mathematica when working with optimal control problems. The target audience for this text includes researchers, professors and students—really anyone who needs a state-of-the-art computational tool.

Application Books Set 2

Geographical Models with Mathematica

The Wolfram Language’s powerful combination of extensive map data and computational agility is on display in André Dauphiné’s Geographical Models with Mathematica. This book gives a comprehensive overview of the types of models necessary for the development of new geographical knowledge, including stochastic models, models for data analysis, geostatistics, networks, dynamic systems, cellular automata and multi-agent systems, all discussed in their theoretical context. Dauphiné then provides over 65 programs that formalize these models, written in the Wolfram Language. He also includes case studies to help the reader apply these programs in their own work.

Geometric Optics: Theory and Design of Astronomical Optical Systems Using Mathematica, second edition

Our tour of new Wolfram Language books moves from terra firma to the stars in Geometric Optics: Theory and Design of Astronomical Optical Systems Using Mathematica. This book by Antonio Romano and Roberto Caveliere provides readers with the mathematical background needed to design many of the optical combinations that are used in astronomical telescopes and cameras. The results presented in the work were obtained through a different approach to third-order aberration theory as well as the extensive use of Mathematica. Replete with worked examples and exercises, Geometric Optics is an excellent reference for advanced graduate students, researchers and practitioners in applied mathematics, engineering, astronomy and astronomical optics. The work may be used as a supplementary textbook for graduate-level courses in astronomical optics, optical design, optical engineering, programming with Mathematica or geometric optics.

Don’t forget to check out Stephen Wolfram’s An Elementary Introduction to the Wolfram Language, now in its second edition. It is available in print, as an ebook and free on the web—as well as in Wolfram Programming Lab in the Wolfram Open Cloud. There’s also now a free online hands-on course based on the book. Read Stephen Wolfram’s recent blog post about machine learning for middle schoolers to learn more about the new edition.

Visualize data instantly with machine learning in Google Sheets

Sorting through rows and rows of data in a spreadsheet can be overwhelming. That’s why today, we’re rolling out new features in Sheets that make it even easier for you to visualize and share your data, and find insights your teams can act on.

Ask and you shall receive → Sheets can build charts for you

Explore in Sheets, powered by machine learning, helps teams gain insights from data, instantly. Simply ask questions—in words, not formulas—to quickly analyze your data. For example, you can ask “what is the distribution of products sold?” or “what are average sales on Sundays?” and Explore will help you find the answers.

Now, we’re using the same powerful technology in Explore to make visualizing data even more effortless. If you don’t see the chart you need, just ask. Instead of manually building charts, ask Explore to do it by typing in “histogram of 2017 customer ratings” or “bar chart for ice cream sales.” Less time spent building charts means more time acting on new insights.
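
Under the hood, a request like “bar chart for ice cream sales” has to be mapped to a chart type plus a data reference. The toy parser below is purely illustrative — Google’s actual natural-language understanding in Explore is far richer — but it shows the basic decomposition.

```python
# Recognized chart phrases for this toy example (not Google's list).
CHART_TYPES = {"histogram", "bar chart", "line chart", "pie chart"}

def parse_chart_request(query):
    """Split a natural-language chart request into (chart_type, data_ref)."""
    q = query.lower()
    # Try longer phrases first so "bar chart" wins over any shorter prefix.
    for chart in sorted(CHART_TYPES, key=len, reverse=True):
        if q.startswith(chart):
            rest = q[len(chart):].lstrip()
            # Strip a leading connective like "of" or "for".
            for conn in ("of ", "for "):
                if rest.startswith(conn):
                    rest = rest[len(conn):]
                    break
            return chart, rest
    return None, q

print(parse_chart_request("histogram of 2017 customer ratings"))
print(parse_chart_request("Bar chart for ice cream sales"))
```

The remaining work — matching “ice cream sales” to an actual column and choosing sensible axes — is where the machine learning does the heavy lifting.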

Instantly sync your data from Sheets → Docs or Slides

Whether you’re preparing a client presentation or sharing sales forecasts, keeping up-to-date data is critical to success, but it can also be time-consuming if you need to update charts or tables in multiple sources. This is why we made it easier to programmatically update charts in Docs and Slides last year.

Now, we’re making it simple to keep tables updated, too. Just copy and paste data from Sheets to Docs or Slides and tap the “update” button to sync your data.

Even more Sheets updates

We’re constantly looking for ways to improve our customers’ experience in Sheets. Based on your feedback, we’re rolling out more updates today to help teams get work done faster:

  • Keyboard shortcuts: Change default shortcuts in your browser to the same spreadsheet shortcuts you’re already used to. For example, delete a row quickly by using “Ctrl+-.”
  • Upgraded printing experience: Preview Sheet data in today’s new print interface. Adjust margins, select scale and alignment options or repeat frozen rows and columns before you print your work.
  • Powerful new chart editing experience: Create and edit charts in a new, improved sidebar. Choose from custom colors in charts or add additional trendlines to model data. You can also create more chart types, like 3D charts. This is now also available for iPhones and iPads.
  • More spreadsheet functions: We added new functions to help you find insights, bringing the total function count in Sheets to more than 400. Try “SORTN,” a function unique to Sheets, which can show you the top three orders or best-performing months in a sales record spreadsheet. Sheets also supports statistical functions like “GAMMADIST,” “F.TEST” and “CHISQ.INV.RT.”
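
SORTN’s “top n” behaviour is easy to picture outside of Sheets. The sketch below emulates the core idea in Python — sort by a column and keep the first n rows — as an illustration only; the real Sheets function also takes a tie-display mode and can sort on multiple columns.

```python
def sortn(rows, n, sort_col, ascending=False):
    """Return the first n rows ordered by the value in column sort_col.

    A simplified stand-in for Sheets' SORTN; ties and multi-column
    sorting are not handled here.
    """
    return sorted(rows, key=lambda r: r[sort_col], reverse=not ascending)[:n]

# Monthly sales records: (month, units sold)
orders = [("Jan", 120), ("Feb", 340), ("Mar", 90), ("Apr", 510), ("May", 260)]
print(sortn(orders, 3, sort_col=1))   # the three best-performing months
```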

These new features in Sheets are rolling out starting today. Learn how Sheets can help you find valuable insights.
