5 things you can do with resource recommender/best bets in Primo or Summon

At my prior institution, I was the administrator of our discovery service, Summon, and one of the features I loved most was the "best bets" and database recommender feature.

If you are unfamiliar with it, it essentially allows you as a librarian to set up special notes/links/images that appear when triggered by specific search terms – called tags in Primo.


Summon best bets – searching "opening hours" triggers a link to the hours page
Summon database recommender – searching "education reform" recommends the ERIC database
How much did I love this feature? I literally spent hundreds of hours poring over our search logs, painstakingly trying common searches to see which gave poor results and trying to figure out if the librarian's touch could help point the way.
So you can imagine my excitement two years ago, when ProQuest acquired Ex Libris and it was announced that Primo would get features from Summon (and vice versa), which meant a version of Summon's best bets and database recommender would be added to Primo! This is now live in the August release of Primo as the resource recommender, and I'm happily playing with it.
The implementation in Primo is slightly different from Summon's, though. For example, unlike Summon, which has a specific recommender for databases with an auto-populated list from 360Core and a "best bets" for everything else, Primo merges both under a single resource recommender.
Primo’s resource recommender shows up to three recommendations & merges different types of recommendations
There are other differences (e.g. Summon is able to auto-recommend librarians based on LibGuides profiles, etc.), and I actually prefer the Summon implementation slightly to the Primo resource recommender, but both versions have so much potential to be improved. But that is another post.
But before we go into that, why do I love this feature so much? To put it bluntly, as good as discovery services are today, they are not perfect, for various reasons I have blogged about over the years. "Best bets"/resource recommender type features allow librarians to smooth out the rough edges and help the system a little. The promise is to blend machine smarts with librarian know-how to create a much better system.
Here are some uses I can think of.
1. Direct users to resources not indexed

Index-based discovery services like Summon or Primo index a lot these days, but not everything, not by a long shot. For instance, such services are generally strong at indexing peer-reviewed articles but weak if you are looking for business/management materials like SWOT reports, company reports, analyst reports etc. Similarly, legal cases are poorly indexed. Yet another example might be specialized local archives (think national or regional archives) that can be very useful to your community but are hard to get indexed. You may know of other examples of course.

Unfortunately, users may be under the impression that Primo or Summon makes "everything" accessible, and so they may miss out on very relevant resources that are not indexed.

One solution, of course, is that if a resource can't be easily indexed, you do federated searching (or real-time searching) of that resource. This was indeed an early debate in this area, with many arguing that relying only on index-based search as Summon did was not a good idea, as many resources would not be discoverable, and hence a hybrid method of indexing what you can and federating what you can't was the way to go. But as I noted in 8 things we know about web scale discovery systems in 2013, while in the early days systems like EBSCO Discovery Service offered federation as an option, it was eventually discarded by most libraries. Today almost everyone relies primarily on index-based search. (Though admittedly some, like Stanford, still use commercial federated systems.)

My article explains why this happened, though I have a sneaking feeling part of the failure is also due to implementation issues (speed/reliability plus UX). With the current popularity of "bento search" boxes, which are actually a kind of federated search that demotes index-based discovery to one box of content, a comeback with selected specialised content federated in (dynamically determined?) might actually be in our future.

In any case, with this option closed, the next best option is to create a relevant recommendation. As such, a recommendation to use, say, Westlaw, Passport or local archives, triggered by the right searches such as "legal cases" or "country/industry reports", might be timely.

Searching "country report" suggests the poorly indexed Marketline
The tricky part of creating recommenders for this category, though, is that you need to figure out the most common searches that should trigger these non-indexed yet useful resources at the right time. Going through your logs for the most popular searches and running them in your search to judge whether the results are good enough is one way, but it is very time consuming.
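Scripting can at least take care of the counting. Below is a minimal Python sketch, assuming a hypothetical tab-separated log export with a timestamp and a query per line; the file name and format are placeholders to adapt to whatever your Primo/Summon analytics actually export.

```python
# A minimal sketch for mining a search log. Assumes a hypothetical
# tab-separated export (timestamp<TAB>query, one search per line);
# adjust the parsing to your actual Primo/Summon log format.
from collections import Counter

def top_queries(log_path: str, n: int = 100) -> list[tuple[str, int]]:
    counts = Counter()
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) < 2:
                continue
            query = parts[1].strip().lower()  # normalise case so "ERIC" == "eric"
            if query:
                counts[query] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    # "search_log.tsv" is a placeholder path for your exported log
    for query, count in top_queries("search_log.tsv"):
        print(f"{count:6d}  {query}")
```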

To make matters worse, the trigger words or tags need to match exactly.


Demo resource recommender configuration screen – you need to add variants to the tag field, separated by commas

For example, if you decide you want to trigger a recommendation for a specialised non-indexed local database whenever someone searches for the word Singapore, you need to enter every combination of the query that should match.

In other words, putting Singapore as a trigger/tag will not trigger a recommendation if someone searches for Singapore history, History Singapore, Singapore database etc. (you need to enter each one explicitly).
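As a stopgap, you can generate the variants programmatically. Here is a rough Python sketch that pairs a base term with likely modifier words in both orders and prints a comma-separated list ready to paste into the tag field. The modifier list is purely illustrative – mine your own logs for the words users actually combine with the term.

```python
# A rough sketch for enumerating tag variants, needed because triggers
# must match the whole query exactly. The base term and modifiers are
# illustrative examples, not a definitive list.
from itertools import permutations

def tag_variants(base: str, modifiers: list[str]) -> list[str]:
    variants = [base]
    for mod in modifiers:
        # cover both word orders, e.g. "singapore history" and "history singapore"
        for combo in permutations([base, mod]):
            variants.append(" ".join(combo))
    return variants

if __name__ == "__main__":
    tags = tag_variants("singapore", ["history", "database", "statistics", "law"])
    print(", ".join(tags))  # paste into the tag field, separated by commas
```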


2. Improve relevancy of known-item searches
In the early days of discovery services, a lot of users were annoyed that discovery services were pretty bad at known-item search, particularly when compared to searching catalogues. There were many reasons for this.

Partly this was due to the fact that the discovery service's central index had hundreds of millions of articles, newspaper articles etc., which swamped the catalogue records of even the biggest libraries by orders of magnitude. This was compounded by the fact that while many of the centrally indexed items were indexed in full text, many catalogue records were simply metadata with no full text. This held particularly true for non-text items like DVDs, image files or records for unique items held only in the library's collection.

The diagram above visualizes the items in a discovery service along two dimensions: first, whether the available metadata is "rich" or "thin"; second, whether the full text is indexed. As many catalogue items are in segments A or even C in the chart, this makes discovery harder (particularly for systems that did not give high weightings to metadata). For more details, see my book chapter "Managing Volume in Discovery Systems".

In theory, relevancy algorithms should take that into account, but it's tricky doing relevancy ranking over items as diverse as articles, books, book chapters and DVDs with varying amounts of full text, length, metadata etc. So in the early days, we implementers of such systems got angry users yelling at us about how useless the new search was because they could no longer find their favourite book/journal title/DVD/database (typically not in the top 10 results). This was my #1 piece of negative feedback when I first implemented Summon.

Of course, these days discovery services are tuned to better avoid these problems (I have a hunch the early discovery vendors did not expect known-item search to be a big use case and hence neglected it in favour of discovery), but they are still not perfect.

Also, I would guess these problems are more serious for larger institutions with larger special collections that users want to find, as these tend to be obscured by the central index. Below, for example, are some user reactions to Primo at Cambridge.

It's hard to say exactly why they were unhappy based on the comments alone, but I would guess known-item searching could be an issue if they are typical.

Desired database / journal title hard to find even with exact title search

Here's another common issue relating to known-item searching that is less serious but irritating.

Try searching the names of your top 100 databases. This is an exercise I highly encourage you to try when implementing a new discovery service. You will find that while these days the search is usually good enough to get the right result in the first 10 results, it is seldom the top result, depending on various settings. Often, the top result tends to be an article reviewing the database (sadly, not all reviews are labelled correctly as such, so it's hard to penalize them in relevancy ranking).
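Here is a sketch of how you might automate this check against the hosted Primo Search API. Treat the endpoint, parameters and response shape as assumptions based on my reading of the Ex Libris Developer Network docs and verify them for your environment; the view, tab, scope and API key values are placeholders, and for Summon you would swap in the Summon API instead.

```python
# A sketch of the "search your top databases by name" exercise against
# the hosted Primo Search API. Endpoint/parameters/response shape are
# assumptions based on the Ex Libris Developer Network docs - verify
# against your own environment. All credentials below are placeholders.
import requests

API = "https://api-na.hosted.exlibrisgroup.com/primo/v1/search"

def title_rank(query: str, expected_title: str) -> int | None:
    """Return the 1-based rank of the expected title in the top 10, or None."""
    params = {
        "q": f"any,contains,{query}",
        "vid": "MY_VIEW",           # placeholder view code
        "tab": "default_tab",       # placeholder tab
        "scope": "default_scope",   # placeholder scope
        "limit": 10,
        "apikey": "MY_API_KEY",     # placeholder key
    }
    docs = requests.get(API, params=params, timeout=30).json().get("docs", [])
    for rank, doc in enumerate(docs, start=1):
        titles = doc.get("pnx", {}).get("display", {}).get("title", [])
        if any(expected_title.lower() in t.lower() for t in titles):
            return rank
    return None  # not in the top 10 at all

if __name__ == "__main__":
    for db in ["onesource", "passport", "eric", "westlaw"]:
        print(db, "->", title_rank(db, db))
```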

Searching for the database onesource – it turns up in the 2nd position, not 1st, and might be hard to spot

The same problem can occur for journal titles, particularly ones with very common one- or two-word titles. (A go-to test used to be the journal title Science or Nature, but these rarely fail nowadays.) Below is an example in Summon I used to have at my old institution, for the journal with the very generic title "Urban Geography".


In a big institution, a simple one- or even two-word search can get you a whole page of results all with the same title, and while the icon representing the type of resource might be different, it can still be very tricky to pick out the right result if you are unfamiliar with the system.

Solutions are (1) adjust the relevancy ranking (but do you have the expertise to tweak it?), (2) implement bento search with a separate bento box to pick out catalogue or even just database items, and (3) recommenders!

Desired database / journal title hard to find because of variant titles, acronyms etc. 

If you think the desired database/journal result not being at #1 is not a serious issue, here's another problem.

Users sometimes don't quite get the name of the database right, or use acronyms. In the business area, database owners also love to change names at the drop of a hat. Passport, for example, has changed names at least 3 times in the 10 years I've been a librarian, and some people still search by its old name, GMID.

Another example: is the database "One Source" or "Onesource" (without the space)? I also see that a common search for us is BMI; the database's full name is actually BMI Research. Yet another common one is HBR for Harvard Business Review articles.


Search for "one source" instead of "onesource" and you can't find the database

You can try it out on your discovery service. I would guess that for most libraries the correct result would not appear in the top 10 (even if your record has the former or variant name), and certainly not as the top result.

As a new librarian, I worried in a blog post that I couldn't tell the difference between Datamonitor, Euromonitor and Business Monitor.
And yes, lo and behold, euromonitor is one of our top searches. Our search is smart enough to bring up the Passport database at 7th position, but would users know that's the right database?
I could go on, but I hope I have convinced you that an appropriately chosen best bet/resource recommender can be a good idea here. You can either add text to explain or link to a FAQ or guide, which is good because you can reuse material, making it easier to manage changes.

3. Directing users with non-traditional searches better suited for other scopes, such as site searches

For some reason, the search "opening hours" seems to be the go-to example for vendors when demonstrating this feature. I personally haven't noticed people doing this search at the 2 institutions I've worked at, but it still illustrates a third possible use of recommenders.

Some users take the "one search box" literally and use the search as a site search (yet another reason why bento is popular). So you may find searches for "fines", "book rooms" etc., and if so, a recommendation pointing to the FAQ would be a good move.

4. Recommending the librarian


This one is obvious. Have a librarian who is particularly good at a specific specialized database that needs expertise to use (think Bloomberg or SciFinder Scholar)? Recommend him/her when the database is searched! You can do the same for subject-related searches for which the librarian is the subject or liaison librarian, but I suspect this is less useful, as people rarely search for generic subjects like Economics or Law, and even if they did, there would be so many results they wouldn't think of asking a librarian for help.
5. Other misc uses
I have some wild ideas, such as creating a short orientation game. Imagine a part of the orientation game where you need to search for the right keywords to get a clue.
Or you could use it in a class where you ask students to choose between two different search queries and get feedback via messages in the recommenders.


As you can see, the resource recommender in Primo and the best bets/database recommender in Summon can be very useful to librarians who want to improve the user experience.

However, my impression is that while Summon's database recommender and best bets have been around for a while, they are still not heavily used.

There are many reasons for this, mostly to do with drawbacks in the system, some of which I have already alluded to in this post. But I will expand on them in a future post discussing other features I think are needed to make resource recommenders even more usable in the future.

These include

  • Better analytics to measure performance of created recommendations (are users clicking?) and to help identify candidate search queries that need recommendations
  • More flexible triggering of recommendations. While exact match is a use case, recommendations should be capable of matching "contains", "begins with" or even fuzzier matches if desired (see the sketch after this list)
  • Consider allowing more automated, contextual recommendations as opposed to just manual ones – Summon already does this to some extent for librarian profiles on LibGuides, and may recommend databases if search results cover a significant portion of an indexed database – a signal that the database is very appropriate. But more can be done with FAQs from LibGuides or even from the knowledgebase, similar to UIUC's ezsearch suggestion system, which automatically suggests matches when search queries are loosely similar to the titles of journals/databases activated in the knowledgebase
  • A community-based system – something Summon's best bets and database recommender already have with community tags, but Primo does not
  • Easier management and sharing of recommendations
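
To illustrate the flexible-matching point above, here is a small Python sketch of what more flexible triggering could look like. None of this exists in Primo or Summon today; it simply demonstrates exact, "contains", "begins with" and fuzzy matching using only the standard library.

```python
# A sketch of flexible trigger matching (not a real Primo/Summon feature).
from difflib import SequenceMatcher

def matches(query: str, tag: str, mode: str = "exact", threshold: float = 0.85) -> bool:
    q, t = query.strip().lower(), tag.strip().lower()
    if mode == "exact":
        return q == t            # how Primo tags behave today
    if mode == "contains":
        return t in q            # "singapore" fires on "singapore history"
    if mode == "begins_with":
        return q.startswith(t)   # "singapore" fires on "singapore law reports"
    if mode == "fuzzy":
        # tolerate typos/variants like "euromonitr" vs "euromonitor"
        return SequenceMatcher(None, q, t).ratio() >= threshold
    raise ValueError(f"unknown mode: {mode}")

print(matches("Singapore history", "singapore", mode="exact"))     # False
print(matches("Singapore history", "singapore", mode="contains"))  # True
print(matches("euromonitr", "euromonitor", mode="fuzzy"))          # True
```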
