If you are unfamiliar with what that does, it essentially allows you as a librarian to set up special notes/links/images that appear when triggered by specific search terms – called tags in Primo.
Index-based discovery services like Summon or Primo index a lot these days. But not everything, not by a long shot. For instance, such services are generally strong at indexing peer-reviewed articles but weak if you are looking for business-management materials like SWOT reports, company reports, analyst reports, etc. Similarly, legal cases are poorly indexed. Yet another example is specialized local archives (think national or regional archives) that can be very useful to your community but are hard to get indexed. You may know of other examples, of course.
Unfortunately, users may be under the impression that Primo or Summon makes “everything” accessible, and may miss out on very relevant resources that are not indexed.
One solution, of course, if a resource can’t be easily indexed, is to do federated searching (or real-time searching) of that resource. This was indeed an early debate in this area, with many arguing that relying only on index-based search as Summon did was not a good idea, since many resources would not be discoverable, and hence a hybrid method of indexing what you can and federating what you can’t was the way to go. But as I noted in 8 things we know about web scale discovery systems in 2013, while in the early days systems like EBSCO Discovery Service offered federation as an option, it was eventually discarded by most libraries. Today almost everyone relies primarily on index-based search. (Though admittedly some, like Stanford, still use commercial federated systems.)
My article explains why this happened, though I have a sneaking feeling part of the failure is also due to implementation issues (speed/reliability plus UX). With the current popularity of “bento search” boxes, which are actually a kind of federated search that demotes index-based discovery to one box of content, a comeback with selected specialised content federated in (dynamically determined?) might actually be in the future.
In any case, with this option closed, the next best approach is to create a relevant recommendation.
As such, a recommendation to use, say, Westlaw, Passport, or local archives, triggered by the right searches such as “legal cases” or “country/industry reports”, might be timely.
To make matters worse, the trigger words or tags need to match exactly.
For example, if you want to trigger a recommendation for a specialised non-indexed local database whenever someone searches for the word Singapore, you need to enter every combination of the query you want to match.
In other words, putting Singapore as a trigger/tag will not trigger a recommendation if someone searches for Singapore history, History Singapore, Singapore database, etc. (you need to enter each one explicitly).
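To make the limitation concrete, here is a minimal sketch in plain Python – not Primo’s actual tag-matching mechanism, and the function names are my own – contrasting exact-match triggers with a normalized “contains” match:

```python
def exact_match(query: str, tag: str) -> bool:
    """Fire only when the whole query equals the tag (how exact triggers behave)."""
    return query.strip().lower() == tag.lower()

def contains_match(query: str, tag: str) -> bool:
    """Fire when the tag appears anywhere in the query, regardless of word order."""
    return tag.lower() in query.strip().lower()

queries = ["Singapore", "Singapore history", "History Singapore", "Singapore database"]

# Exact matching fires only on the bare tag...
print([q for q in queries if exact_match(q, "Singapore")])    # ['Singapore']
# ...while a single "contains" tag would cover every variant.
print([q for q in queries if contains_match(q, "Singapore")]) # all four queries
```

With exact matching, every variant query has to be entered as its own tag; a contains-style match would collapse them into one.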
Partly this was due to the fact that the discovery service’s central index had hundreds of millions of articles, newspaper articles, etc., which swamped the catalogue records of even the biggest libraries by orders of magnitude. This was compounded by the fact that while many of the centrally indexed items were indexed in full text, many catalogue records were simply metadata with no full text. This held true particularly for non-text items like DVDs, image files, or records for unique items held only in the library’s collection.
The diagram above shows that items in a discovery service can be classified along two dimensions: first, whether the available metadata is “rich” or “thin”; second, whether the full text is indexed. As many catalog items are in segments A or even C of the chart, this makes discovery harder (particularly for systems that did not give high weightings to metadata). For more details, see my book chapter “Managing Volume in Discovery Systems“.
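The two dimensions can be sketched as a simple classification. This is purely illustrative: the record fields (`metadata_rich`, `fulltext_indexed`) are hypothetical, not a real discovery-service schema, and I have not reproduced the chart’s segment letters here.

```python
def classify(record: dict) -> str:
    """Place a record in one quadrant of the metadata/full-text chart."""
    metadata = "rich" if record.get("metadata_rich", False) else "thin"
    text = "full text indexed" if record.get("fulltext_indexed", False) else "no full text"
    return f"{metadata} metadata, {text}"

# A bare catalogue record for a DVD versus a centrally indexed article.
dvd = {"metadata_rich": False, "fulltext_indexed": False}
article = {"metadata_rich": True, "fulltext_indexed": True}
print(classify(dvd))      # thin metadata, no full text
print(classify(article))  # rich metadata, full text indexed
```

Records in the thin-metadata, no-full-text quadrant are the ones a relevancy algorithm has least evidence to rank, which is why they sink below the central index.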
In theory, relevancy algorithms should take that into account, but it’s tricky doing relevancy ranking over items as diverse as articles, books, book chapters, and DVDs, with varying amounts of full text, length, metadata, etc. So in the early days, we implementers of such systems had angry users yelling at us about how useless the new search was because they could no longer find their favourite book/journal title/DVD/database (typically it was not in the top 10 results). This was my #1 piece of negative feedback when I first implemented Summon.
Of course, these days the discovery services are better tuned to avoid these problems (I have a hunch the early discovery vendors did not expect known-item search to be a big use case and hence neglected it in favour of discovery), but they are still not perfect.
I would also guess these problems are more serious for larger institutions with larger special collections, which users want to find but which tend to be obscured by the central index. Below, for example, are some user reactions to Primo at Cambridge.
Resounding vote of confidence for the new Cambridge library interface. pic.twitter.com/y2enudPCVi
— Jason Scott-Warren (@jes1003) August 9, 2017
It’s hard to say exactly why they were unhappy based on the comments alone, but I would guess known-item searching could be an issue if they are typical.
Desired database / journal title hard to find even with exact title search
Here’s another common issue relating to known-item searching that is less serious but irritating.
Try searching the names of your top 100 databases. This is an exercise I highly encourage you to try when implementing a new discovery service. You will find that while the search these days is usually good enough to get the right result within the first 10 results, it is rarely the top result, depending on various settings. Often the top result is an article reviewing the database (sadly, not all reviews are labelled correctly as such, so it’s hard to penalize them in relevancy ranking).
The same problem can occur for journal titles, particularly ones with very common one- or two-word titles. (A “go to” test used to be searching for the journal title Science or Nature, but these rarely fail nowadays.) Below is an example in Summon from my old institution, for a journal with a very generic title, “Urban Geography”.
In a big institution, a simple one- or even two-word search can get you a whole page of results all with the same title, and while the icon representing the type of resource might differ, it can still be very tricky to pick out the right result if you are unfamiliar with the system.
Solutions are (1) adjust the relevancy ranking (but do you have the expertise to tweak it?), (2) implement bento search with a separate bento box to pick out catalogue or even just database items, and (3) recommenders!
Desired database / journal title hard to find because of variant titles, acronyms etc.
If you think the desired database/journal result not being #1 is not a serious issue, here’s another problem.
Another example: is the database “One Source” or “OneSource” (without the space)? I also see that a common search for us is BMI, although the database’s full name is actually BMI Research. Yet another common one is HBR for Harvard Business Review articles.
You can try these on your discovery service; I would guess that in most cases the correct result would not appear in the top 10 (even if your record includes the variant name), and certainly not as the top result.
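One low-tech way a recommender could cope with acronyms and variant spellings is a hand-maintained alias table. The sketch below is purely illustrative, reusing the examples above – it is not a Summon or Primo API.

```python
# Map of common aliases/acronyms to the canonical database name.
# Entries are the illustrative examples from the text, nothing more.
ALIASES = {
    "onesource": "One Source",
    "one source": "One Source",
    "bmi": "BMI Research",
    "hbr": "Harvard Business Review",
}

def recommend(query: str):
    """Return the canonical database name for a known alias, else None."""
    return ALIASES.get(query.strip().lower())

print(recommend("HBR"))  # Harvard Business Review
print(recommend("bmi"))  # BMI Research
```

The table would need curating per institution, but it catches exactly the queries that relevancy ranking handles worst.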
3. Directing users for non-traditional searches better suited to other scopes, such as site searches
For some reason, searching for opening hours seems to be the go-to example for vendors when demonstrating this feature. I personally haven’t noticed people doing this search at the two institutions I’ve worked at, but it still illustrates a third possible use of recommenders.
As you can see, the resource recommender in Primo and the Best Bets/database recommender in Summon can be very useful to librarians who want to improve the user experience.
However, my impression is that while Summon’s database recommender and Best Bets have been around for a while, they are still not heavily used.
There are many reasons for this, mostly to do with drawbacks in the system, some of which I have already alluded to in this post. But I will expand on them in a future post discussing other features I think are needed to make resource recommenders even more usable.
- Better analytics to measure performance of created recommendations (are users clicking?) and to help identify candidate search queries that need recommendations
- More flexible triggering of recommendations. While exact match is a use case, recommendations should be capable of matching “contains”, “begins with” or even more fuzzy type matches if desired
- Consider allowing more automated contextual recommendations as opposed to just manual ones – Summon already does this to some extent for librarian profiles on LibGuides, and may recommend databases if search results cover a significant portion of an indexed database’s content, a signal that the database is very appropriate. But more can be done with FAQs from LibGuides or even from the knowledgebase, similar to UIUC’s ezsearch suggestion system, which automatically suggests matches when search queries are loosely similar to the titles of journals/databases activated in the knowledgebase
- A community-based system – something Summon’s Best Bets and database recommender already have with community tags, but Primo does not
- Easier management and sharing of recommendations
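The fuzzy-matching idea in the wishlist above can be sketched with Python’s standard difflib. The titles and the 0.6 cutoff are my own illustrative choices; this is not how Summon, Primo, or UIUC’s ezsearch actually implement matching.

```python
import difflib

# A hypothetical slice of titles activated in a knowledgebase.
TITLES = ["Urban Geography", "Harvard Business Review", "Journal of Finance"]

def suggest(query: str, cutoff: float = 0.6):
    """Return knowledgebase titles loosely similar to the search query."""
    return difflib.get_close_matches(query, TITLES, n=3, cutoff=cutoff)

print(suggest("urban geografy"))  # ['Urban Geography'] despite the typo
```

Even this crude similarity score would let a recommendation fire on near-miss queries that an exact-match tag silently ignores.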