Google added a feature to its Hire service today designed to help recruiters find past job candidates who weren’t the right people for a previous position but might fit a new gig at the company. When recruiters open a new job, Hire will show them a list of candidates that it thinks are already qualified for the role, based on how their profiles…
Nvidia and Nuance announced a partnership today that’s aimed at helping healthcare institutions tap into artificial intelligence. The Nuance AI Marketplace for Diagnostic Imaging is, as the name suggests, designed to provide a hub for medical professionals to pick up new tools for analyzing the results of x-ray imaging and other radiology tools.
AI developers will be able to release the models that they’ve trained through Nuance’s PowerShare network, which will then allow participating medical institutions and radiology groups to subscribe. After subscribing, Nuance’s PowerScribe software will automatically apply the AI algorithm in relevant situations.
Nvidia’s Digits developer tool will be updated to provide developers with a way to publish their algorithms directly to Nuance PowerShare, so it’s easier for people to get their applications into the marketplace.
The deal is designed to make it easier for medical institutions to benefit from the rise of machine learning by offering access to trained models. What’s more, the institutions developing these models can benefit from sharing them with other radiologists to drive the overall state of the field forward.
Medical imaging is a tough field to tackle with machine learning, since it spans many different parts of the body, along with different machines that produce very different output. (A static X-ray film is quite different from a video of an ultrasound, for example.) On top of that, radiologists search the resulting images or videos for different findings, depending on the diagnostic question at hand.
With that in mind, Kimberly Powell, the vice president for healthcare at Nvidia, said that she expects multiple algorithms working in concert will be necessary to provide even a single diagnosis through a single test. The marketplace is supposed to support that vision by making it easier for medical professionals to orchestrate the use of multiple systems.
A religion based around artificial intelligence is in the news again, this time helmed by Anthony Levandowski, a former member of Google’s self-driving car team. His argument is that humans will eventually create AI that is more intelligent than we are, making it functionally god-like, so we might as well start planning for that eventuality.
But while talking about an AI god grabs headlines, we have more pressing problems to consider. The AI experts I get to speak with aren’t concerned about an artificial superintelligence suddenly cropping up in the next few months and taking over the world.
Meanwhile, there’s plenty to be concerned about when it comes to immediate and unintended consequences of the machine learning techniques already available. There’s been no shortage of ink spilled over how the algorithms behind Facebook, Google, and the like are influencing our daily lives, and even our elections. And algorithmic bias continues to plague many other systems we use on a regular basis.
Take the case of speech recognition for virtual assistants like Alexa and Siri. As a white dude who grew up in California, I have little trouble conversing with those systems, but friends and acquaintances with non-standard accents are far less lucky. That may seem like a moderate source of frustration at worst, but imagine those systems becoming portals to key services, discounts, or other functionality that’s otherwise unavailable.
In earlier eras, structural biases that didn’t involve revolutionary technology had far-reaching effects. Consider the impact of racial bias in the design of expressways and parkways in the New York metropolitan area. And photographers are still contending with the legacy of decisions that made film better suited to capturing people with lighter skin.
It stands to reason that decisions we make about AI systems today, even if their intelligence is far from godlike, could have similarly outsized impacts down the road.
Rarely a week goes by without news of a new feature for AI assistants like Alexa, Bixby, or Siri. It’s a fast-moving competition between tech giants like Amazon, Samsung, and Apple, but despite billions of dollars of investment in AI, and everyone from SoftBank to will.i.am entering the space, sometimes critical or easily achievable tasks for the uberbots go unaddressed.
There’s a new machine learning company on the block, with big ambitions to help people remember every conversation they’ve ever had. Called AISense, the company operates a voice transcription system that’s designed to work through long conversations using machine learning and provide users with a full text record of what was said.
Google Assistant received some major upgrades in recent days, and today Google Assistant product manager Brad Abrams announced a series of changes to help developers build voice apps that interact with Google’s AI assistant, including more expressive voices, the ability to send push notifications, and new subcategories for the Assistant’s App Directory.
PullString today announced plans to launch a simplified version of its platform, this one aimed at professionals who want to quickly design and launch voice apps. A marked departure from the company’s more complicated Author platform, PullString’s Converse will be available November 27 to coincide with AWS re:Invent.
Microsoft announced today that its Visual Studio integrated development environment is getting a new set of tools aimed at easing the process of building AI systems.
Visual Studio Tools for AI is a package that’s designed to provide developers with built-in support for creating applications with a wide variety of machine learning frameworks, like Caffe2, TensorFlow, CNTK, and MXNet.
Google today launched TensorFlow Lite to give app developers the ability to deploy AI on mobile devices. The mobile version of Google’s popular open source AI program was first announced at the I/O developer conference.
Anthony Levandowski makes an unlikely prophet. Dressed Silicon Valley-casual in jeans and flanked by a PR rep rather than cloaked acolytes, the engineer known for self-driving cars—and triggering a notorious lawsuit—could be unveiling his latest startup instead of laying the foundations for a new religion. But he is doing just that. (via Wired)
Schadenfreude is one of life’s simplest pleasures — especially when the victim in question is an email scammer. That’s the service Netsafe’s Re:scam provides. Simply forward your Nigerian prince emails to the service and it’ll use machine learning to generate conversations to waste the nefarious Nancy’s time. (via Engadget)
Google and Salesforce announced a massive strategic partnership today aimed at driving value for their mutual customers. As part of the deal, Salesforce plans to use Google Cloud Platform as a preferred infrastructure partner to power its international expansion. Google, for its part, will use Salesforce as its preferred CRM provider for selling its cloud services.
Google released its Cloud Video Intelligence API to the world today by making it available in public beta, as part of the company’s ongoing push to make AI accessible.
The Video Intelligence API is designed to let users upload a video and get information back about what objects are in it, using a system called label detection. With this release, the company also added support for detecting pornographic content, making it possible to use the service to spot videos that would be inappropriate to share with an audience that isn’t looking for that sort of content.
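To make “upload a video and get information back” concrete, here is a rough sketch of walking a label-detection result. The response shape below is a simplified stand-in assumed for illustration (the real API returns richer, typed objects with segments and timestamps), and the helper function name is mine:

```python
# Hypothetical, simplified label-detection response for one video.
response = {
    "annotation_results": [{
        "segment_label_annotations": [
            {"entity": {"description": "dog"}, "confidence": 0.97},
            {"entity": {"description": "park"}, "confidence": 0.81},
            {"entity": {"description": "frisbee"}, "confidence": 0.42},
        ]
    }]
}

def labels_above(response, threshold):
    """Return label descriptions whose confidence meets the threshold."""
    annotations = response["annotation_results"][0]["segment_label_annotations"]
    return [a["entity"]["description"] for a in annotations
            if a["confidence"] >= threshold]

print(labels_above(response, 0.5))  # → ['dog', 'park']
```

In practice a caller would set a confidence threshold like this to decide which labels are trustworthy enough to index or flag.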
In addition, Google also announced a number of improvements to its Cloud Vision API to make various features more accurate. The label detection model, which names objects inside an image, now supports more than 10,000 different entities, so it can spot the difference between “breakfast cereal” and just “breakfast.” That model is also twice as good at recall, which means that it’s more likely to pick the most relevant label for an image.
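For context on the recall claim: recall measures the fraction of truly relevant labels a model actually returns. A minimal illustration, with invented numbers chosen only to show what “twice as good” means:

```python
def recall(true_positives, false_negatives):
    """Fraction of relevant items the model actually retrieved."""
    return true_positives / (true_positives + false_negatives)

# If an older model found 40 of 100 relevant labels and a newer one
# finds 80 of 100, recall doubles from 0.4 to 0.8.
old = recall(40, 60)
new = recall(80, 20)
print(old, new)  # → 0.4 0.8
```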
The service’s safe search model, which detects adult content, saw a 30 percent reduction in errors. The Vision API’s text detection model saw a 25 percent increase in average detection speed and a 5 percent increase in accuracy on Latin-script languages. Google’s system is also better at reading human emotions: the face detection system is more than twice as good at recognizing sadness, surprise, and anger as it was at launch.
Google’s services are designed to make it easier for people to add AI capabilities to their applications without building the underlying machine learning systems themselves. Today’s news shows one of the key benefits of that approach: applications can see major improvements without their developers doing anything, simply because the company behind the service improves its models in the background.
The Cloud Video Intelligence API launched in private beta earlier this year, as part of the announcements made at the Google Cloud Next conference.
Google is competing with a wide variety of companies in the intelligent API space, including titans like Microsoft, Amazon and IBM.
As part of the Video Intelligence API’s public beta launch, Google announced pricing for the service. Label and adult content detection is free for the first 1,000 minutes of video uploaded, and costs 10 cents per minute for the next 9,000 minutes. Shot detection, which finds scene changes within a video, is also free for the first 1,000 minutes, and then costs 5 cents per minute for the next 9,000 minutes.
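Those tiers translate into a simple piecewise cost. A quick sketch of the arithmetic using the announced rates (the function name and the cents convention are mine, used to keep the math exact):

```python
def feature_cost_cents(minutes, rate_cents_per_minute):
    """Cost in cents for one detection feature: the first 1,000 minutes
    are free, then each of the next 9,000 minutes is billed per minute."""
    free_tier, paid_cap = 1000, 9000
    paid_minutes = min(max(minutes - free_tier, 0), paid_cap)
    return paid_minutes * rate_cents_per_minute

# Label/adult-content detection bills at 10 cents/min, shot detection at 5.
print(feature_cost_cents(500, 10))    # → 0 (inside the free tier)
print(feature_cost_cents(5000, 10))   # → 40000 cents, i.e. $400
print(feature_cost_cents(10000, 5))   # → 45000 cents, i.e. $450
```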
Companies that need more than the first 10,000 minutes should contact Google for pricing information.