Nvidia and Nuance team up on AI for radiology


Nvidia and Nuance announced a partnership today that’s aimed at helping healthcare institutions tap into artificial intelligence. The Nuance AI Marketplace for Diagnostic Imaging is, as the name suggests, designed to provide a hub where medical professionals can pick up new tools for analyzing the output of X-ray machines and other radiology equipment.

AI developers will be able to release the models they’ve trained through Nuance’s PowerShare network, and participating medical institutions and radiology groups will be able to subscribe to them. Once an institution subscribes, Nuance’s PowerScribe software will automatically apply the AI algorithm in relevant situations.

Nvidia’s DIGITS developer tool will be updated to let developers publish their algorithms directly to Nuance PowerShare, making it easier to get their applications into the marketplace.

The deal is designed to make it easier for medical institutions to benefit from the rise of machine learning by offering access to trained models. What’s more, the institutions developing these models can benefit from sharing them with other radiologists to drive the overall state of the field forward.

Medical imaging is a tough field to tackle with machine learning, since it encompasses many different parts of the body, along with different machines that produce different kinds of output. (A static X-ray film is quite different from an ultrasound video, for example.) On top of that, radiologists may be searching the resulting images or videos for different features, depending on the condition they’re investigating.

With that in mind, Kimberly Powell, the vice president for healthcare at Nvidia, said she expects that multiple algorithms working in concert will be necessary to produce even a single diagnosis from a single test. The marketplace is supposed to support that vision by making it easier for medical professionals to orchestrate multiple systems.

The news comes alongside another partnership, between Nvidia and GE Healthcare, to use the chipmaker’s processors to power improved CT scanners and ultrasound machines, as well as an analytics platform.

AI Weekly: There are more pressing problems than god-like AI


A religion based around artificial intelligence is in the news again, this time helmed by Anthony Levandowski, a former member of Google’s self-driving car team. His argument is that humans will eventually create AI that is more intelligent than we are, making it functionally god-like, so we might as well start planning for that eventuality.

His thinking about the rise of superintelligent machines runs parallel to that of Elon Musk, who has been trumpeting the risks of artificial superintelligence on Twitter and in public appearances. (At one point, the Tesla CEO said that AI posed a greater risk than North Korea.)

But while talking about an AI god grabs headlines, we have more pressing problems to consider. The AI experts I get to speak with aren’t concerned about an artificial superintelligence suddenly cropping up in the next few months and taking over the world.

Meanwhile, there’s plenty to be concerned about when it comes to immediate and unintended consequences of the machine learning techniques already available. There’s been no shortage of ink spilled over how the algorithms behind Facebook, Google, and the like are influencing our daily lives, and even our elections. And algorithmic bias continues to plague many other systems we use on a regular basis.

Take the case of speech recognition for virtual assistants like Alexa and Siri. As a white dude who grew up in California, I have little trouble conversing with those systems, but friends and acquaintances with non-standard accents are far less lucky. That may seem like a moderate source of frustration at worst, but imagine those systems becoming portals to key services, discounts, or other functionality that’s otherwise unavailable.

In earlier eras, structural biases that didn’t involve revolutionary technology had far-reaching effects. Consider the impact of racial bias on the design of expressways and parkways in the New York metropolitan area. And photographers are still contending with the legacy of decisions that made film better suited to capturing people with lighter skin.

It stands to reason that decisions we make about AI systems today, even if their intelligence is far from godlike, could have similarly outsized impacts down the road.

As always, for AI coverage, send news tips to Blair Hanley Frank and Khari Johnson and guest post submissions to Cosette Jarrett — and be sure to bookmark our AI Channel.

Thanks for reading,

Blair Hanley Frank

AI Staff Writer

P.S. Please enjoy this video: Where AI is today and where it’s going

From the AI Channel

Alexa and Google Assistant should tell you when the next bus is coming

Rarely does a week go by without news of a new feature for AI assistants like Alexa, Bixby, or Siri. It’s a fast-moving competition between tech giants like Amazon, Samsung, and Apple. But despite billions of dollars invested in AI, and everyone from SoftBank to Will.i.am entering the space, critical or easily achievable tasks for these assistants sometimes go unaddressed.

Read the full story here.

AISense wants to deliver total recall by transcribing all your conversations

There’s a new machine learning company on the block, with big ambitions to help people remember every conversation they’ve ever had. Called AISense, the company operates a voice transcription system that’s designed to work through long conversations using machine learning and provide users with a full text record of what was said.

Read the full story here.

Google gives developers more tools to make better voice apps

Google Assistant has received some major upgrades in recent days. Today, Google Assistant product manager Brad Abrams announced a series of changes to help developers make voice apps that interact with Google’s AI assistant, including more expressive voices, push notifications, and new subcategories for the Assistant’s App Directory.

Read the full story here.

PullString debuts Converse, a simple Alexa skills maker for marketers

PullString today announced plans to launch a simplified version of its platform, this one aimed at professionals who want to quickly design and launch voice apps. A marked departure from the company’s more complicated Author platform, PullString’s Converse will be available November 27 to coincide with AWS re:Invent.

Read the full story here.

Microsoft’s Visual Studio gets new tools to help developers embrace AI

Microsoft announced today that its Visual Studio integrated development environment is getting a new set of tools aimed at easing the process of building AI systems.

Visual Studio Tools for AI is a package that’s designed to provide developers with built-in support for creating applications with a wide variety of machine learning frameworks, like Caffe2, TensorFlow, CNTK, and MXNet.

Read the full story here.

Google launches TensorFlow Lite developer preview for mobile machine learning

Google today launched TensorFlow Lite to give app developers the ability to deploy AI on mobile devices. The mobile version of Google’s popular open source AI program was first announced at the I/O developer conference.

Read the full story here.

Beyond VB

Inside the first church of artificial intelligence

Anthony Levandowski makes an unlikely prophet. Dressed Silicon Valley-casual in jeans and flanked by a PR rep rather than cloaked acolytes, the engineer known for self-driving cars—and triggering a notorious lawsuit—could be unveiling his latest startup instead of laying the foundations for a new religion. But he is doing just that. (via Wired)

Read the full story.

Where self-driving cars go to learn

Three weeks into his new job as Arizona’s governor, Doug Ducey made a move that won over Silicon Valley and paved the way for his state to become a driverless car utopia. (via The New York Times)

Read the full story.

AI could help reporters dig into grassroots issues once more

Last year’s divisive American presidential race highlighted the extent to which mainstream media outlets were out of touch with the political pulse of the country. (via MIT Technology Review)

Read the full story.

AI’s latest application: wasting scammers’ time

Schadenfreude is one of life’s simplest pleasures — especially when the victim in question is an email scammer. That’s the service Netsafe’s Re:scam provides. Simply forward your Nigerian prince emails to the service and it’ll use machine learning to generate conversations to waste the nefarious Nancy’s time. (via Engadget)

Read the full story.

Google and Salesforce sign massive strategic partnership

Google and Salesforce announced a massive strategic partnership today that’s aimed at delivering value for their mutual customers. As part of the deal, Salesforce plans to use Google Cloud Platform infrastructure as a preferred partner to power its international expansion. Google, for its part, will use Salesforce as its preferred CRM provider for selling its cloud services.

Read the full story here.

Google’s AI-powered video analyzer hits public beta


Google released its Cloud Video Intelligence API in public beta today, as part of the company’s ongoing push to make AI accessible.

The Video Intelligence API is designed to let users upload a video and get back information about the objects in it, using a system called label detection. With this release, the company also added support for detecting pornographic content, making it possible to use the service to flag videos that would be inappropriate for audiences that aren’t looking for that sort of content.
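For a sense of what calling the service involves, here’s a minimal sketch using Google’s Python client. It assumes the google-cloud-videointelligence package (current versions of which postdate the beta described here), configured application credentials, and a video already uploaded to Cloud Storage; the bucket URI is a placeholder.

```python
# Minimal sketch: label detection with the Cloud Video Intelligence API.
# Assumes `pip install google-cloud-videointelligence`, application default
# credentials, and a video in Cloud Storage (the URI below is a placeholder).
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

operation = client.annotate_video(
    request={
        "input_uri": "gs://your-bucket/your-video.mp4",  # placeholder
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)

# Annotation runs as a long-running operation; block until it completes.
result = operation.result(timeout=300)

# Print each label the service detected across the whole video.
for annotation in result.annotation_results[0].segment_label_annotations:
    print(annotation.entity.description)
```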

Google also announced a number of improvements that make various Cloud Vision API features more accurate. The label detection model, which names objects inside an image, now supports more than 10,000 different entities, so it can spot the difference between “breakfast cereal” and just “breakfast.” The model’s recall has also doubled, meaning it’s more likely to pick the most relevant label for an image.
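The Vision API’s label detection is similarly compact to call. Below is a minimal sketch, assuming the google-cloud-vision Python package, configured credentials, and a local image file (the filename is a placeholder).

```python
# Minimal sketch: label detection with the Cloud Vision API.
# Assumes `pip install google-cloud-vision` and application default
# credentials; "breakfast.jpg" is a placeholder filename.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("breakfast.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Each label carries a description and a confidence score, which is how
# a caller can distinguish "breakfast cereal" from the broader "breakfast".
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```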

The service’s safe search model, which detects adult content, saw a 30 percent reduction in errors. The Vision API’s text detection model saw a 25 percent increase in average detection speed and a 5 percent increase in accuracy on languages that use the Latin alphabet. Google’s system is also better at reading human emotions: the face detection system is more than twice as good at recognizing sadness, surprise, and anger as it was at launch.

Google’s services are designed to make it easier for people to add AI capabilities to their applications without building the machine learning systems needed to power them. Today’s news shows one of the key benefits of that approach: applications that use these services can gain major improvements without their developers doing anything, simply because the company behind the service improves it in the background.

The Cloud Video Intelligence API launched in private beta earlier this year, as part of the announcements made at the Google Cloud Next conference.

Google is competing with a wide variety of companies in the intelligent API space, including titans like Microsoft, Amazon, and IBM.

As part of the Video Intelligence API’s public beta launch, Google announced pricing for the service. Label and adult content detection is free for the first 1,000 minutes of video uploaded, and costs 10 cents per minute for the next 9,000 minutes. Shot detection, which finds scene changes within a video, is also free for the first 1,000 minutes, and then costs 5 cents per minute for the next 9,000 minutes.
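To make the tiering concrete, here’s a small sketch of the arithmetic. The helper function and its names are ours; it simply applies the rates quoted above and flags usage beyond the published tiers, where pricing is unannounced.

```python
# Hypothetical helper applying the quoted public-beta pricing tiers:
# first 1,000 minutes free, the next 9,000 minutes at a flat per-minute
# rate ($0.10/min for label and adult content detection, $0.05/min for
# shot detection). Pricing beyond 10,000 minutes isn't public, so we flag it.

RATES = {
    "label_detection": 0.10,  # dollars per minute after the free tier
    "shot_detection": 0.05,
}

FREE_MINUTES = 1_000
METERED_MINUTES = 9_000  # minutes billable at the flat rate


def estimate_cost(feature: str, minutes: int) -> float:
    """Estimate the cost in dollars for one feature at the quoted rates."""
    if minutes > FREE_MINUTES + METERED_MINUTES:
        raise ValueError("Over 10,000 minutes: contact Google for pricing.")
    billable = max(0, minutes - FREE_MINUTES)
    return billable * RATES[feature]


# Example: 4,000 minutes of label detection -> 3,000 billable minutes.
print(estimate_cost("label_detection", 4_000))  # 300.0
```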

Companies that need additional time should contact Google for additional pricing information.
