Google and Salesforce sign massive strategic partnership

Google and Salesforce announced a massive strategic partnership today aimed at driving value for their mutual customers. As part of the deal, Salesforce will use Google Cloud Platform infrastructure as a preferred provider to power its international expansion. Google, for its part, will use Salesforce as its preferred CRM provider for selling its cloud services.


Google’s AI-powered video analyzer hits public beta


Google released its Cloud Video Intelligence API to the world today by making it available in public beta, as part of the company’s ongoing push to make AI accessible.

The Video Intelligence API is designed to let users upload a video and get back information about the objects it contains, using a system called label detection. With this release, the company also added support for detecting pornographic content, making it possible to use the service to flag videos that would be inappropriate for audiences not expecting that sort of material.
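
To make this concrete, here is a minimal sketch of a label and explicit-content request using the Python client library (google-cloud-videointelligence). The bucket path and timeout are placeholders, and the client surface shown is the current one, which may differ from the beta-era API.

```python
# Minimal sketch: annotate a video stored in Cloud Storage with label and
# explicit-content detection. The gs:// URI is a placeholder.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/my-video.mp4",  # placeholder path
        "features": [
            videointelligence.Feature.LABEL_DETECTION,
            videointelligence.Feature.EXPLICIT_CONTENT_DETECTION,
        ],
    }
)
result = operation.result(timeout=600)  # annotation is a long-running operation

annotation = result.annotation_results[0]

# Labels describing objects and scenes detected across video segments.
for label in annotation.segment_label_annotations:
    print(label.entity.description)

# Per-frame likelihood that the frame contains pornographic content.
for frame in annotation.explicit_annotation.frames:
    print(frame.time_offset, frame.pornography_likelihood.name)
```

Because annotation is a long-running operation, production code would typically poll or handle the result asynchronously rather than block on result() as this sketch does.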

Google also announced a number of improvements that make various Cloud Vision API features more accurate. The label detection model, which names objects inside an image, now supports more than 10,000 different entities, so it can spot the difference between “breakfast cereal” and just “breakfast.” That model also has twice the recall it had before, which means it’s more likely to pick the most relevant label for an image.

The service’s safe search model, which detects adult content, saw a 30 percent reduction in errors. The Vision API’s text detection model saw a 25 percent increase in average detection speed and a 5 percent increase in accuracy on languages written in the Latin alphabet. Google’s system is also better at reading human emotions: the face detection model is more than twice as good at recognizing sadness, surprise and anger as it was at launch.
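
The Cloud Vision API features mentioned above (label detection, safe search, and face detection with emotion likelihoods) are exposed through the google-cloud-vision Python client roughly as sketched below; the image file is a placeholder.

```python
# Minimal sketch of the Cloud Vision API features discussed above.
# The local file path is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:  # placeholder image
    image = vision.Image(content=f.read())

# Label detection: entity descriptions such as "breakfast cereal".
for label in client.label_detection(image=image).label_annotations:
    print(label.description, label.score)

# Safe search: likelihood that the image contains adult content.
safe = client.safe_search_detection(image=image).safe_search_annotation
print("adult:", safe.adult.name)

# Face detection: per-face emotion likelihoods (sorrow, surprise, anger).
for face in client.face_detection(image=image).face_annotations:
    print(face.sorrow_likelihood.name,
          face.surprise_likelihood.name,
          face.anger_likelihood.name)
```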

Google’s services are designed to make it easier for developers to add AI capabilities to their applications without building the underlying machine learning systems themselves. Today’s news shows one of the key benefits of that approach: applications that use these services can improve significantly without any changes on the developer’s end, simply because the company behind them keeps improving the models in the background.

The Cloud Video Intelligence API launched in private beta earlier this year, as part of the announcements made at the Google Cloud Next conference.

Google is competing with a wide variety of companies in the intelligent API space, including titans like Microsoft, Amazon and IBM.

As part of the Video Intelligence API’s public beta launch, Google announced pricing for the service. Label and adult content detection are free for the first 1,000 minutes of video uploaded and cost 10 cents per minute for the next 9,000 minutes. Shot detection, which finds scene changes within a video, is also free for the first 1,000 minutes and then costs 5 cents per minute for the next 9,000 minutes.

Companies that need more than 10,000 minutes should contact Google for pricing information.
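
For a rough sense of what that pricing means in practice, here is a small back-of-the-envelope sketch. It takes the two quoted rates at face value, bills them independently per feature, and caps usage at 10,000 minutes, beyond which pricing isn’t published.

```python
# Back-of-the-envelope cost for the prices quoted above (in USD).
# Assumes the quoted rates are billed independently per feature tier.
FREE_MINUTES = 1_000
LABEL_ADULT_RATE = 0.10   # $/min after the free tier
SHOT_RATE = 0.05          # $/min after the free tier

def tier_cost(minutes: int, rate: float) -> float:
    """Cost of one feature tier for a given number of uploaded minutes."""
    billable = max(0, min(minutes, 10_000) - FREE_MINUTES)
    return billable * rate

minutes = 4_000  # example workload
total = tier_cost(minutes, LABEL_ADULT_RATE) + tier_cost(minutes, SHOT_RATE)
print(f"{minutes} min with labels/adult + shot detection: ${total:.2f}")
# -> 3,000 billable minutes * ($0.10 + $0.05) = $450.00
```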

Visualize data instantly with machine learning in Google Sheets

Sorting through rows and rows of data in a spreadsheet can be overwhelming. That’s why today, we’re rolling out new features in Sheets that make it even easier for you to visualize and share your data, and find insights your teams can act on.

Ask and you shall receive → Sheets can build charts for you

Explore in Sheets, powered by machine learning, helps teams gain insights from data, instantly. Simply ask questions—in words, not formulas—to quickly analyze your data. For example, you can ask “what is the distribution of products sold?” or “what are average sales on Sundays?” and Explore will help you find the answers.

Now, we’re using the same powerful technology in Explore to make visualizing data even more effortless. If you don’t see the chart you need, just ask. Instead of manually building charts, ask Explore to do it by typing in “histogram of 2017 customer ratings” or “bar chart for ice cream sales.” Less time spent building charts means more time acting on new insights.


Instantly sync your data from Sheets → Docs or Slides

Whether you’re preparing a client presentation or sharing sales forecasts, keeping your data up to date is critical to success, but it can also be time-consuming when you need to update charts or tables in multiple places. That’s why we made it easier to programmatically update charts in Docs and Slides last year.
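
The programmatic chart updates referred to here go through the Slides API’s batchUpdate method, which accepts a refreshSheetsChart request for a chart linked to a spreadsheet. Below is a minimal sketch rather than the exact mechanism from last year’s announcement; the presentation ID, chart object ID, and credentials file are placeholders.

```python
# Minimal sketch: refresh a Sheets-linked chart embedded in a Slides
# presentation so it picks up the latest spreadsheet data.
# The presentation ID, chart object ID, and key file are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder credentials file
    scopes=["https://www.googleapis.com/auth/presentations"],
)
slides = build("slides", "v1", credentials=creds)

slides.presentations().batchUpdate(
    presentationId="your-presentation-id",  # placeholder
    body={"requests": [
        {"refreshSheetsChart": {"objectId": "your-linked-chart-id"}}  # placeholder
    ]},
).execute()
```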

Now, we’re making it simple to keep tables updated, too. Just copy and paste data from Sheets to Docs or Slides and tap the “update” button to sync your data.


Even more Sheets updates

We’re constantly looking for ways to improve our customers’ experience in Sheets. Based on your feedback, we’re rolling out more updates today to help teams get work done faster:

  • Keyboard shortcuts: Override your browser’s default shortcuts with the spreadsheet shortcuts you’re already used to. For example, delete a row quickly by using “Ctrl+-.”
  • Upgraded printing experience: Preview Sheet data in today’s new print interface. Adjust margins, select scale and alignment options or repeat frozen rows and columns before you print your work.
  • Powerful new chart editing experience: Create and edit charts in a new, improved sidebar. Choose from custom colors in charts or add additional trendlines to model data. You can also create more chart types, like 3D charts. This is now also available for iPhones and iPads.
  • More spreadsheet functions: We added new functions to help you find insights, bringing the total function count in Sheets to more than 400. Try “SORTN,” a function unique to Sheets, which can show you the top three orders or best-performing months in a sales record spreadsheet. Sheets also supports statistical functions like “GAMMADIST,” “F.TEST” and “CHISQ.INV.RT.”

These new features in Sheets are rolling out starting today. Learn how Sheets can help you find valuable insights.

AI in the newsroom: What’s happening and what’s next?

Bringing people together to discuss the forces shaping journalism is central to our mission at the Google News Lab. Earlier this month, we invited Nick Rockwell, Chief Technology Officer of the New York Times, and Luca D’Aniello, Chief Technology Officer of the Associated Press, to Google’s New York office to talk about the future of artificial intelligence in journalism and the challenges and opportunities it presents for newsrooms.

The event opened with an overview of the AP’s recent report, “The Future of Augmented Journalism: a guide for newsrooms in the age of smart machines,” which was based on interviews with dozens of journalists, technologists, and academics (and compiled with the help of a robot, of course). As early adopters of this technology, the AP highlighted a number of their earlier experiments:

This image of a boxing match was captured by one of AP’s AI-powered cameras.

  • Deploying more than a dozen AI-powered robotic cameras at the 2016 Summer Olympics to capture angles not easily available to journalists
  • Using Google’s Cloud Vision API to classify and tag photos automatically throughout the report
  • Increasing news coverage of quarterly earnings reports from 400 to 4,000 companies using automation

The report also addressed key concerns, including risks associated with unchecked algorithms, potential for workflow disruption, and the growing gap in skill sets.

Here are three themes that emerged from the conversation with Rockwell and D’Aniello:

1. AI will increase a news organization’s ability to focus on content creation

D’Aniello noted that journalists, often “pressed for resources,” are forced to “spend most of their time creating multiple versions of the same content for different outlets.” AI can reduce monotonous tasks like these and allow journalists to spend more of their time on their core expertise: reporting.

For Rockwell, AI could also be leveraged to power new reporting, helping journalists analyze massive data sets to surface untold stories. Rockwell noted that “the big stories will be found in data, and whether we can find them or not will depend on our sophistication using large datasets.”

2. AI can help improve the quality of dialogue online and help organizations better understand their readers’ needs

Given the increasing abuse and harassment found in online conversations, many publishers are backing away from allowing comments on articles. For the Times, the Perspective API tool developed by Jigsaw (part of Google’s parent company Alphabet) is creating an opportunity to encourage constructive discussions online by using machine learning to make comment moderation more efficient. Previously, the Times could only moderate comments on 10 percent of its articles; it aspires to use Perspective to enable commenting on all of them.
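
The article doesn’t describe the Times’ integration in detail, but the publicly documented Perspective API works roughly as sketched below: a comment is posted to the commentanalyzer analyze endpoint and comes back with a toxicity score that a moderation queue can sort or filter on. The API key and threshold are placeholders.

```python
# Minimal sketch: score a reader comment with the Perspective API's
# TOXICITY attribute. The API key is a placeholder; how the Times actually
# wires this into its moderation workflow is not described in the article.
import requests

API_KEY = "your-perspective-api-key"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(comment_text: str) -> float:
    """Return a 0..1 toxicity score for a single comment."""
    body = {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example: route high-scoring comments to human moderators first.
if toxicity("You are an idiot.") > 0.8:  # placeholder threshold
    print("flag for human review")
```

In practice, scores like these are most useful for prioritizing a human review queue rather than auto-publishing or auto-rejecting comments, which keeps an editor in the loop.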

The Times is also thinking about using AI to increase the relevance of what they deliver to readers. As Rockwell notes, “Our readers have always looked to us to filter the world, but to do that only through editorial curation is a one-size-fits-all approach. There is a lot we can do to better serve them.”

3. Applying journalistic standards is essential to AI’s successful implementation in newsrooms

Both panelists agreed that the editorial standards that go into creating quality journalism should be applied to AI-fueled journalism. As Francesco Marconi, the author of the AP report, remarked, “Humans make mistakes. Algorithms make mistakes. All the editorial standards should be applied to the technology.”

Here are a few approaches we’ve seen for how those standards can be applied to the technology:

  • Pairing up journalists with the tech. At the AP, business journalists trained software to understand how to write an earnings report.
  • Serving as editorial gatekeepers. News editors should play a role in synthesizing and framing the information AI produces.
  • Ensuring more inclusive reporting. In 2016, Google.org, USC and the Geena Davis Foundation used machine learning to create a tool that collects data on gender portrayals in media.

What’s ahead

What will it take for AI to be a positive force in journalism? The conversation showed that while the path wasn’t certain, getting to the right answers would require close collaboration between the technology industry, news organizations, and journalists.

“There is a lot of work to do, but it’s about the mindset,” D’Aniello said. “Technology was seen as a disruptor of the newsroom, and it was difficult to introduce things. I don’t think this is the case anymore. The urgency and the need is perceived at the editorial level.”

We look forward to continuing to host more conversations on important topics like this one. Learn more about the Google News Lab on our website.

Header image of robotic camera courtesy of Associated Press.

