The Google News Initiative: Building a stronger future for news

People come to Google looking for information they can trust, and that information often comes from the reporting of journalists and news organizations around the world. And while the demand for quality journalism is as high as it’s ever been, the business of journalism is under pressure, as publications around the world face challenges from an industry-wide transition to digital. Continue reading “The Google News Initiative: Building a stronger future for news”

Google Search will start ranking faster mobile pages higher in July

Google today announced a new project to improve its mobile search results: factoring page speed into its search ranking. As the company notes, page speed “has been used in ranking for some time” but that was largely for desktop searches. Starting in July 2018, page speed will be a ranking factor for mobile searches on Google as well. Continue reading “Google Search will start ranking faster mobile pages higher in July”

In a Confusing World, Context is Key — A Times Intern Sets Out to Improve Search Results

Illustration by Kevin Zweerink for The New York Times

The past few years have seen the rise of “context-aware” systems: technologies that can predict your intentions based on information about your environment. If you ask Google’s intelligent personal assistant, “How tall is that building?” it will use your phone’s GPS to see what buildings are near you and guess which building you are asking about. Or, if you add “pick up milk” to the Reminders app on your iPhone, you can choose to have the app remind you the next time you are within a block of a grocery store. Continue reading “In a Confusing World, Context is Key — A Times Intern Sets Out to Improve Search Results”

Yet Another Turning Point….

As some readers here already know, the boring fact is that I started work in the publishing and information industry in October 1967, and have thus spent over fifty years as an observer of change in these parts. And, in what some regard as a fifty-year dotage, I am prone to remark that change is the new normal, etc., and to pour scorn on the wealthy publisher whom I approached for work in 1993 and who replied, “tell me when your digital revolution thing is over and then help me to cope with the next five hundred years of the post-printing world”. And I quite see the point. Revolutions are not for everyone. And there were comfortable years in my twenties when it seemed possible to believe that Longman and OUP, Nelson and Macmillan, could go on ruling the post-colonial world of school textbook publishing with nothing more exciting than a revised Latin syllabus to stir the waters of their creativity. Yet in truth the world of print, from the rise of Gutenberg to the fall of the house of Murdoch, has been full of change. It just happens faster and more completely now. Continue reading “Yet Another Turning Point….”

Your Search questions, answered selfie-style on Google

Whether you watch them on TV, listen to them on a podcast, or read about them in a magazine, you spend a lot of time wondering about the people who inspire you. Personally, I’ve always wanted to know if my favorite actor Will Ferrell can really play the drums. Now in the U.S., you can find answers to questions about notable people on mobile Search, and they’re coming directly from the source.

When you search for your favorite personalities, whether they’re rising stars or well-known celebs, their answers will appear in the form of selfie-style videos with a uniquely personal, authentic and delightful touch.

Continue reading “Your Search questions, answered selfie-style on Google”

Improving Search and discovery on Google

Search is not just about answering your questions—it’s also about discovery. We search to explore new topics of interest, to find new angles on ideas or things we think we already know, or to uncover information we didn’t even think to ask about.

Over the years, we’ve developed many features to help you discover more on your journeys through the web, from related searches almost 10 years ago to more recent additions such as related questions (labeled “People also ask” in search results). In the last few weeks, we’ve made three new additions to help you explore further: expanded Featured Snippets, improved Knowledge Panels, and suggested content as you search for a particular topic.

Continue reading “Improving Search and discovery on Google”

Learn more about publishers on Google

As tens of thousands of publishers of all sizes push out content every day, chances are you’ve come across a publication you’re not familiar with or one you wanted to learn more about. To help in this situation, publisher Knowledge Panels on Google will now show the topics the publisher commonly covers, major awards the publisher has won, and claims the publisher has made that have been reviewed by third parties. These additions provide key pieces of information to help you understand the tone, expertise and history of the publisher.

Continue reading “Learn more about publishers on Google”

Microsoft, Google, & Baidu Join Forces on Open Academic Search

The Allen Institute for Artificial Intelligence (AI2) in Seattle was created by Microsoft co-founder Paul Allen with the mission of providing the latest findings in artificial intelligence to humankind. AI2 collaborated with Microsoft, Google, and Baidu to create the Open Academic Search (OAS) working group to “advance scientific research and discovery, promote technology that assists … Read more

What to know about visual search

Everything in digital media is going visual, including search. Platforms and brands have experimented with the technology for years to improve in-store experiences, increase engagement and retarget audiences. Here’s what you need to know:

The numbers:

  • Around three-quarters of U.S. internet users regularly or always search for visual content prior to making a purchase, and only 3 percent never do, according to a 2017 eMarketer study.
  • Over 3 billion photos are shared across the internet every day.
  • Consumers process images 60,000 times faster than text.
  • 74 percent of consumers say text-based keyword searches are inefficient in helping them find the right product online, according to visual search company Slyce’s 2015 report.
  • The image recognition market will grow from $9.65 billion in 2014 to $25.65 billion by 2019, according to global market research firm Markets and Markets.
  • The number of Pinterest Lens users has tripled from April to May.
  • On average, Pinterest users search with Lens more than three times each day.
  • The number of objects Pinterest Lens has been trained to recognize has more than doubled in the last month.

Pinterest Lens
Pinterest in February introduced its visual search technology — Pinterest Lens — and is pitching itself to marketers as the place where consumers, especially millennials, come to discover items they didn’t even know they wanted. The beta feature allows consumers to search using images. It is not available to marketers yet, but Pinterest was pushing Lens hard in Cannes last week.

It’s easy to see the appeal. Point the Pinterest mobile app at everyday objects — a dress, a desk or a piece of fruit (Pinterest co-founder Evan Sharp first demonstrated the tool with a pomegranate) — and it will return related images, even from outside Pinterest. Last week, the platform updated Lens with the ability to zoom in and out on an object. Lens sets Pinterest apart from Facebook and Instagram, which do not offer any way to search through images, and pits it against the search and discovery mammoths Google and Amazon.

Google Lens
A few months after Pinterest premiered its Lens, Google came out with its own mobile version, even adopting the same name. Google CEO Sundar Pichai demonstrated the new technology at the company’s I/O developer conference in May, describing it as “a set of vision-based computing capabilities that can understand what you’re looking at and help you take action based on what you are looking at.”

Unlike Pinterest Lens, where a user takes a photo of an object, Google Lens uses artificial intelligence so that information about a place or item automatically appears on the screen when a user points their phone at it. The visual search app can also be integrated into the Google Assistant, which means consumers can use a combination of voice and visuals to discover what they are looking for. Pichai said the technology will be available soon, perhaps in time for the Pixel 2 smartphone that Google is rumored to be working on.

Bing Visual Search
At the beginning of June, Bing upgraded its image search capabilities so users could search for images within images, something Pinterest, but not Google, does. Users can search for any photo and tap the magnifying glass button at the upper-left corner to zero in on anything within the image — whether it’s an outfit, face or product. A selection of related images then appears, sometimes with links for the user to then buy the products.

Brands’ own visual search platforms
Brands and retailers, such as Target, Neiman Marcus and Macy’s, began implementing visual recognition technology for their own apps and websites mostly around 2014. Home furnishings retailer Wayfair is the most recent brand to do so. In May, it created its own visual search engine for consumers to search for its products across desktop, iOS or Android.

The buyer view
“Searchers are increasingly interested in either graphic results, picture-based results or interacting with pictures for discovery because many people cannot explain what they are searching for in text all the time,” said Scott Linzer, vp of media at iCrossing. “We just don’t have enough information on how far an advertiser can take it. One of the things we are working with Pinterest on is to find out how do I then take that level of intelligence from the picture and be able to monetize against it for clients.”

“We’re excited about all the new search and discovery tools and technologies hitting the market today, including Pinterest Lens and extending to voice search and other tools,” said Orli LeWinter, svp of strategy and social marketing at 360i. “While the advertising opportunities are few and far between at this moment, we believe they are indicative of a future where our technologies are much more intuitive to our lives.”

Microsoft’s Bing search results now include bots

The Bing search engine can now surface bots in its search results when you search for a business.

The news was announced today at Build, Microsoft’s annual developer conference, taking place May 10-12 in Seattle. The Microsoft Bot Framework, a toolkit for creating bots for more than half a dozen chat apps and Bing.com, was first launched at Build in San Francisco last year.

Hints of bots in Bing search results first appeared last month, and again last week.

To test the bots-on-Bing feature, search “Monsoon Restaurant Seattle” and try out the experience.

The bots pop up on Bing.com in the bottom right-hand corner of the screen, similar to a Facebook Messenger or Google Hangouts message notification. Any conversation with the bot automatically carries over and can be continued on Skype.

Bots on Bing will begin with businesses, but experimentation may happen with other kinds of bots and popular search destinations.

VentureBeat and other news outlets reported last week on the possibility of bots appearing in Bing search results.

Microsoft also announced Wednesday that its Cortana Skills Kit is now publicly available, so developers can begin to create voice apps for the intelligent assistant. Changes are also expected for the Microsoft Bot Framework and for Microsoft Cognitive Services, which can supply the AI smarts for bots.

Lili Cheng is general manager of Fuse Labs at Microsoft Research, the group that created the Microsoft Bot Framework and bots like the famous Xiaoice and the infamous Tay bot.

The Bing team is beginning to promote bots to business owners by reminding them of the traffic they receive, the kinds of questions patrons are asking about a business, and how a bot may be able to address these questions.

Standards will be important for businesses using bots on Bing, so that people can understand the experience they’re having. Bot experiences today range from natural language processing bots that text-chat to NLP-free guided experiences with buttons and cards. Some combine the two interfaces, while chat extensions on Facebook Messenger operate only in webview.

Cheng says even after a year of bots on Microsoft devices, a lot of people still don’t know what bots do. Using a standard format and approach will help consumers get familiar with what to expect when speaking to a business’ bot.

Another advantage for Bing is that it can act as a kind of directory for bots. In addition to finding bots on the search results page for a business, Bing users will be able to search for bots overall and by category.

A search engine for bots is an idea Cheng first shared with VentureBeat last fall, and she said Microsoft continues to talk with partners like Slack and Facebook.

“They show you how people interact with a bot that they’d never interact with in search, because in search you’re just trying to type in a word and click and you’re on the restaurant page,” she said. “When they click on the bot, they engage a lot more, and I think what we’re doing is also we’re able to give the businesses more data about how people interact with them.”

Keeping track of published literature

Modern scientists are busier than ever. Their typical days are filled not only with experimental work, but also with teaching, supervising, mentoring, grant applications, budget planning… The list goes on and on. No wonder there is barely any time left to stay on top of the field. Keeping track of published literature is made easier by following these simple tips:

Newest first

Don’t get lost in the long list of publications. To find the most recent articles on Europe PMC, sort results by date. If you want to limit your search to a specific date range – last week or last month – set this in the advanced search.

Focus on what’s important

Citations are the currency of the academic world. Familiarise yourself with the most cited papers in your area by using “Times cited” as a sorting order. For your convenience, citation counts are displayed for each publication in the search results on Europe PMC.

Follow your colleagues

Are you already familiar with the experts in your field? Check for publications from a specific author by typing their last name into the search bar. For scientists with common last names, such as Smith or Wu, paste their unique ORCID ID into the search bar to match the author exactly.

Automate repetitive tasks

Don’t waste your time on a job that your computer can do for you. Doing the same search every now and then? Instead of typing a long query into the search bar every time, save your search and recall it with one click. In Europe PMC all of your saved searches appear in your account. Create an account or log in with ORCID or Twitter.

Stay alert

With a busy schedule, it is easy to miss an exciting discovery. Any search, including those by keyword, author, or scientific journal, can be turned into an RSS feed on Europe PMC. This way, once an article on your topic is added to the database, you will be notified immediately.
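
If you prefer scripting to the web interface, Europe PMC also exposes the same searches through its public REST API. The JavaScript sketch below (our own illustration) fetches the newest results for a topic; the sort_date:y modifier and the response field names are assumptions drawn from Europe PMC’s documented search syntax, so verify them against the current API reference.

  // A minimal sketch: fetch the newest Europe PMC articles on a topic
  // from the public REST search endpoint. The "sort_date:y" modifier
  // and the response field names are assumptions based on Europe PMC's
  // documented search syntax; check the current API reference.
  async function newestArticles(topic, pageSize = 10) {
    const query = encodeURIComponent(topic + " sort_date:y");
    const url = "https://www.ebi.ac.uk/europepmc/webservices/rest/search" +
      "?query=" + query + "&format=json&pageSize=" + pageSize;
    const data = await (await fetch(url)).json();
    // Each hit carries a title, author string, journal, and year.
    return data.resultList.result.map(r => r.pubYear + "  " + r.title);
  }

  // Example: print the ten newest matches for a topic.
  newestArticles("CRISPR").then(lines => lines.forEach(l => console.log(l)));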

Building instant image feature detection

Kent Brewster | Pinterest engineer, Product Engineering

Last month we launched visual search in our browser extension for Chrome. After we shipped it, we noticed lots of clicks to annotations that said “Web Site.” Closer examination revealed that these were always from searches originating with the context menu, which runs visual search on a generated screenshot from the browser window. (To try this out with Chrome, right-click on the empty space in any page and choose “Search.”)

Results and annotations for whole-window screenshots were pretty disappointing. Since they tended to match screenshots that were previously saved to Pinterest, we were showing results like “Web Site,” “Internet Site,” and “Wordpress Theme,” instead of the interesting objects inside the screenshots.

We don’t have a back-end API ready to look at a screenshot and return a list of interesting things inside. Even if we did, it would be unacceptably slow, delaying the user’s first view of the select tool by several seconds under the best of circumstances.

Instead of sending every screenshot back to Pinterest for analysis, we figured out a way to detect interesting things using nothing but JavaScript, inside the browser extension.

Convert screenshot to data

To search inside an image we need to look at individual pixel colors. This isn’t usually possible using plain vanilla HTML, CSS, and JavaScript, but because we’re inside a browser extension we have a higher level of privilege. Here’s an original screenshot, rendered to a <CANVAS> tag:
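
A sketch of that first step, using standard extension and canvas APIs (our own illustration, not Pinterest’s actual code): capture the visible tab, paint it into a canvas, and read back the raw RGBA bytes.

  // Capture the current tab (an extension-only privilege), draw it into
  // a canvas, and read back the raw pixel bytes for analysis.
  chrome.tabs.captureVisibleTab({ format: "png" }, (dataUrl) => {
    const img = new Image();
    img.onload = () => {
      const canvas = document.createElement("canvas");
      canvas.width = img.width;
      canvas.height = img.height;
      const ctx = canvas.getContext("2d");
      ctx.drawImage(img, 0, 0);
      // Four bytes per pixel (R, G, B, A), row by row.
      const pixels = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
      // ...downsample and analyze, as described below...
    };
    img.src = dataUrl;
  });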

Downsample the original data

Screenshots can be huge and we’re going to be making a bunch of recursive function calls, which is a fine way to crash the browser with a Maximum Call Stack Size Exceeded error. So before we do anything else, let’s reduce the size of our statistical universe from 1100×800 (almost a million pixels) to a maximum of 80×80.

To do this, we take the larger of the two dimensions (height or width) and divide by 80 to get our swatch size. If our original is 1100×800 we’ll use 14×14 samples, converting our original image to something more like this:

We’re not doing anything fancy like averaging out all the colors in each sample swatch; we’re just using the top left pixel. This seems to give us better results when the page being sampled has not scrolled; that top-left pixel on the top row and left column tends to be the main background color.
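
In code, the downsampling pass might look like this sketch (helper names are our own): pick a swatch size so the longer dimension maps to at most 80 samples, then record the top-left pixel of each swatch as its color.

  // Downsample: the longer dimension maps to at most 80 swatches;
  // each swatch keeps the hex color of its top-left pixel.
  function downsample(pixels, width, height, maxSwatches = 80) {
    const swatchSize = Math.ceil(Math.max(width, height) / maxSwatches);
    const grid = [];
    for (let y = 0; y < height; y += swatchSize) {
      const row = [];
      for (let x = 0; x < width; x += swatchSize) {
        const i = (y * width + x) * 4; // RGBA: four bytes per pixel
        row.push("#" + [pixels[i], pixels[i + 1], pixels[i + 2]]
          .map(v => v.toString(16).padStart(2, "0")).join(""));
      }
      grid.push(row);
    }
    return grid; // a small 2-D grid of hex colors (~79×58 for 1100×800)
  }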

Count and declare the most popular colors to be “background”

Once we’ve downsampled we look at each sample swatch in turn and count how many times we’ve seen its color. When we’re done we sort by count and wind up with a list of colors like this:

{ "#ffffff": 1321, "#ffeeee": 910, "#ffeeaa": 317, … "#a5e290": 1 }

Swatches showing the most common colors (here we’ll use the top three) are tagged as background.
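
A sketch of the counting step: tally every swatch color, sort by frequency, and take the most common few as the background palette.

  // Count swatch colors, sort by frequency, and return the most common
  // colors (the top three here, matching the example above).
  function findBackgroundColors(grid, topN = 3) {
    const counts = {};
    for (const row of grid) {
      for (const color of row) {
        counts[color] = (counts[color] || 0) + 1;
      }
    }
    return Object.keys(counts)
      .sort((a, b) => counts[b] - counts[a])
      .slice(0, topN);
  }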

Once we know our background colors we poll all foreground swatches again. This time we convert red-green-blue (RGB) values to hue-saturation-value (HSV) values, and find all swatches that have the same hue and value as the top background color. When we find these, we declare them to also be background swatches themselves. This catches many situations where we have a translucent background under a blown-up image, such as close-up views of Twitter images and Instagram posts.
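
The RGB-to-HSV conversion is the standard one; the matching tolerances in the second function are our own guesses, since the post doesn’t publish Pinterest’s exact thresholds.

  // Standard conversion from RGB (0-255 channels) to hue (degrees),
  // saturation, and value.
  function rgbToHsv(r, g, b) {
    const max = Math.max(r, g, b);
    const min = Math.min(r, g, b);
    const d = max - min;
    let h = 0;
    if (d !== 0) {
      if (max === r) h = ((g - b) / d) % 6;
      else if (max === g) h = (b - r) / d + 2;
      else h = (r - g) / d + 4;
      h *= 60;
      if (h < 0) h += 360;
    }
    return { h, s: max === 0 ? 0 : d / max, v: max / 255 };
  }

  // A foreground swatch whose hue and value sit close to the top
  // background color is reclassified as background (e.g. translucent
  // overlays). These tolerances are illustrative guesses.
  function matchesBackground(swatchHsv, backgroundHsv) {
    return Math.abs(swatchHsv.h - backgroundHsv.h) < 5 &&
           Math.abs(swatchHsv.v - backgroundHsv.v) < 0.05;
  }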

After counting and tagging background, here’s what’s left. Background blocks are set to green, so it’s visually obvious what’s going on. Leftover foreground blocks have been grayscaled, because their colors are no longer important.

Remove isolated pixels

See all those tiny white and gray islands in the green? Those aren’t big enough to search, so we need a way to get rid of them. We run through the image one swatch at a time and remove any that have a neighboring block to the north, south, east, or west containing the background color. What’s left looks much simpler:
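
A sketch of that cleanup pass, representing the tagged grid as a boolean background mask (our own representation):

  // One erosion pass: any foreground swatch with a background neighbor
  // to the north, south, east, or west becomes background, which wipes
  // out the tiny isolated islands. bg[r][c] is true for background.
  function removeIsolated(bg) {
    const rows = bg.length, cols = bg[0].length;
    const toClear = [];
    for (let r = 0; r < rows; r++) {
      for (let c = 0; c < cols; c++) {
        if (bg[r][c]) continue; // already background
        const neighbors = [[r - 1, c], [r + 1, c], [r, c - 1], [r, c + 1]];
        if (neighbors.some(([nr, nc]) =>
            nr >= 0 && nr < rows && nc >= 0 && nc < cols && bg[nr][nc])) {
          toClear.push([r, c]);
        }
      }
    }
    // Apply changes after scanning so one pass doesn't cascade.
    for (const [r, c] of toClear) bg[r][c] = true;
  }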

Flood-fill remaining foreground blocks, noting heights and widths

We’re down to just a few things. Now we need to pick the winner. Here’s how we do it; a code sketch follows the list.

  1. Scan each swatch, counting the total number of foreground swatches, so we know when we’re done.
  2. Scan again. This time when we encounter a foreground block, flood-fill it and all of its attached neighbors, decreasing the count of foreground blocks left to do with each one.
  3. As flood fill completes for each foreground area, note the minimum and maximum row and column for each. Each set of coordinates gives us a rectangular area containing any irregularities within the filled area. Convert to row, column, height, and width, and add to a list of interesting rectangles.
  4. Keep scanning until there aren’t any foreground blocks left to fill.
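
A sketch of steps 2 through 4, with our own names (this version marks visited swatches instead of keeping an explicit countdown, which is equivalent bookkeeping). The recursion here is exactly why the earlier downsample to at most 80×80 swatches matters.

  // Flood-fill each connected foreground region and record its
  // bounding box as { row, col, height, width } in swatch units.
  function findRegions(bg) {
    const rows = bg.length, cols = bg[0].length;
    const seen = bg.map(row => row.slice()); // background counts as seen
    const rectangles = [];

    function fill(r, c, box) {
      if (r < 0 || r >= rows || c < 0 || c >= cols || seen[r][c]) return;
      seen[r][c] = true;
      box.top = Math.min(box.top, r);
      box.bottom = Math.max(box.bottom, r);
      box.left = Math.min(box.left, c);
      box.right = Math.max(box.right, c);
      fill(r - 1, c, box); fill(r + 1, c, box);
      fill(r, c - 1, box); fill(r, c + 1, box);
    }

    for (let r = 0; r < rows; r++) {
      for (let c = 0; c < cols; c++) {
        if (seen[r][c]) continue;
        const box = { top: r, bottom: r, left: c, right: c };
        fill(r, c, box);
        rectangles.push({
          row: box.top, col: box.left,
          height: box.bottom - box.top + 1,
          width: box.right - box.left + 1,
        });
      }
    }
    return rectangles;
  }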

What’s left? Only interesting rectangles, shown in white here:

Find the most interesting rectangle in the world

A rectangle is more interesting if it’s:

  • larger in area than 1/16 of the size of the canvas
  • no more than 3x wider than it is tall
  • portrait instead of landscape
  • closer to the top left corner than other rectangles of the same size

A rectangle is not interesting at all if it’s:

  • less than 100×100 pixels in size
  • more than 5x wider than it is tall

Important: we’re not actually trying to find rectangles or any other shape. We’re just looking for distinctive differences in color between neighboring pixels. If we have a fullscreen background image with an irregular yellow flower on a dark green background, we’d hope to wind up with our selector around the flower, even though it’s not a rectangle.

Once we have scores, sort and reverse to find the best area to select. The best rectangle not disallowed by the rules wins. If we don’t find an acceptable rectangle, select the whole screenshot as before. Either way, draw the select tool, run Search, and bask in the glow of much-more-relevant results!
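
As a rough illustration of how these rules could combine, here is one possible scoring function; the post doesn’t give Pinterest’s actual weights, so the numbers below are placeholders.

  // Illustrative only: filter out rectangles the rules disallow, score
  // the rest, and return the winner (or null, meaning select the whole
  // screenshot). Sizes arrive in swatch units and are converted back
  // to pixels via swatchSize.
  function bestRectangle(rects, canvasW, canvasH, swatchSize) {
    const scored = rects
      .map(r => ({ row: r.row, col: r.col,
                   pxW: r.width * swatchSize, pxH: r.height * swatchSize }))
      // Not interesting at all: under 100×100 px, or over 5x wider than tall.
      .filter(r => r.pxW >= 100 && r.pxH >= 100 && r.pxW <= 5 * r.pxH)
      .map(r => {
        let score = 0;
        if (r.pxW * r.pxH > (canvasW * canvasH) / 16) score += 4; // large area
        if (r.pxW <= 3 * r.pxH) score += 2; // not overly wide
        if (r.pxH > r.pxW) score += 1;      // portrait beats landscape
        // Break ties in favor of rectangles nearer the top-left corner.
        score -= Math.hypot(r.col, r.row) / 1000;
        return { ...r, score };
      });
    scored.sort((a, b) => b.score - a.score);
    return scored[0] || null;
  }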

Results

After the initial release of visual search in our browser extension, we examined an anonymized sample of full-window screenshots that had been refined by Pinners with the selector tool. Instant image feature detection agreed with (or improved on) what the Pinner had selected 85 percent of the time. In all cases where interesting things were detected, the search annotations came back with something better than “Web Site.”

These improvements are now out to everyone, and we’re detecting interesting things in screenshots 96 percent of the time, with a much higher rate of relevant search results appearing and being saved, all without the Pinner having to refine the initial selection.

Acknowledgements: Kelei Xu, Ryan Shih, Steven Ramkumar & Steven Walling
