Advocacy by librarians – is there a tension with assessment/analysis?

When you attend librarian conferences, it is common to hear speakers say that librarians are too modest about our value to our stakeholders, and that we librarians should, well... advocate for ourselves more.

Yet I think things are not that simple. There is always a tension between advocacy (which is by its nature not fully objective) and analysis/assessment (which is mostly objective). I've noticed this issue rearing its head more and more in various library domains.

In Open Access – Advocacy vs analysis

I briefly met Rick Anderson at a conference earlier this year – the International Conference on Changing Landscape of Science & Technology Libraries in India – where he gave a talk entitled "Good intentions and unintended consequences: science libraries, researchers, and the open access movement". The talk essentially covered the same ground as his article "Advocacy, Analysis, and the Vital Importance of Discriminating Between Them".

In that article he argued for the importance of discriminating between analysis and advocacy. While recognizing the importance of advocacy, and the fact that analysis can never be perfectly objective due to biases, he pointed out that there is a difference between the two:

Fundamentally, because it is an analyst’s job to tell the whole story, but it is an advocate’s job to tell only the part of the story that will further the advocate’s agenda. This becomes particularly problematic when advocates are treated in the news media as sources of analysis.

Most observers will know that Rick Anderson is targeting a particular type of advocacy by librarians, namely by open access advocates. In articles such as "Is rational discussion of open access possible?", he claims that those who approach open access from an analyst's point of view are often attacked by open access advocates (many of whom are librarians).

Whether or not you agree this is true in this particular case, it does offer an interesting lens through which to view librarians and open access.

In library assessment vs advocacy 

More recently, I attended the 12th International Conference on Performance Measurement in Libraries (formerly known as the Northumbria Conference) at Oxford. Besides enjoying the Harry Potter-like surroundings (literally – some of the places we visited were actually where the movies were filmed, such as the Divinity School where the drinks reception was held, and the Duke Humfrey's Library), it was a high-caliber conference that brought together a diverse group of people using a variety of methods to measure performance.

 

How diverse?

As far as I could make out, there were the ethnography/anthropology people – perhaps best represented by Donna Lanclos and Andrew Asher, both anthropologists by training and well known in libraries for moving the needle in this area.

Closely related is the field of UX, these days best known internationally via Andy Priestner, who was with Futurelib (Cambridge University). Though he did not attend, Futurelib people were in attendance; I met Andy's successor and, of course, attended workshops conducted by people who said they were influenced by the UXlibs conference.

On the quantitative side, we must definitely mention the UK people from the oft-mentioned JISC, who led the way in early library impact studies on the effect of library use on student success, learning analytics and dashboards. In particular, I was glad to have met Graham Stone, whose pioneering library impact studies at Huddersfield got me interested in this area.

Of course, other big names in assessment like Megan Oakleaf were often mentioned but were not in attendance.

Many librarians had titles like assessment librarian (mostly from the US?) and did a mix of all of this, though I had the impression that traditionally assessment librarians were more into information literacy and collection assessment; this seems to be changing.

Of all these methods, perhaps the most controversial were the correlation studies, in which libraries showed correlations between student usage of the library (typically electronic resource usage, book loans, or physical entry to the library) and academic success (often defined as degree class, GPA attained, or simply retention). These studies constantly drew subtweets.

Among the biggest skeptics was Donna Lanclos.

Megan Oakleaf and Lisa Hinchliffe (who weren't at the conference) and Graham Stone were less skeptical.

Megan and Lisa, for example, suggested that correlations combined with theory can give strong evidence for causation.

Graham Stone felt that in the past such correlations were simply not known, and it was important to find out. Similar to Lisa, he recommended following up on such correlation results to find out why the effect exists.

But overall, I like Andrew Asher's take on this. First he started off with something that I think is true; I have some weak evidence of this myself.

Still, he takes a more nuanced view, stating that he is not in principle opposed to the idea that correlations plus theory can work, but he feels that libraries are not at that point yet and are making big claims that do not stand up to scrutiny.

Many correlation studies, according to him, do not show strong correlations but instead very weak ones with small effect sizes.
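To get an intuition for why a weak correlation says so little, here is a minimal sketch in Python using entirely made-up numbers (not data from any real study). It simulates students whose GPA depends only slightly on library usage, then shows that a correlation of roughly 0.2 corresponds to only a few percent of the variation in GPA being "explained" by library usage.

```python
# Illustrative sketch only – the numbers below are invented, not from any real study.
# It shows how a weak correlation between library usage and GPA translates into
# a very small share of the variation in GPA being "explained" by library usage.
import numpy as np

rng = np.random.default_rng(42)
n_students = 2000

library_use = rng.poisson(lam=20, size=n_students)    # e.g. e-resource logins per term
other_factors = rng.normal(0, 1, size=n_students)     # everything else that drives grades

# GPA is driven overwhelmingly by the other factors, only slightly by library use
gpa = 3.0 + 0.02 * library_use + 0.5 * other_factors

r = np.corrcoef(library_use, gpa)[0, 1]               # Pearson correlation
print(f"correlation r = {r:.2f}")                     # a weak correlation (around 0.2 here)
print(f"r squared     = {r * r:.1%}")                 # only a few percent of the variance
```

A correlation this weak can still be statistically significant in a sample of thousands of students, which is exactly why a "significant" result on its own says little about how much the library actually contributes to academic success.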

In other words, libraries, in the rush to justify their own existence, may have fallen into advocacy at the cost of objective analysis.

And can you blame them? A forthcoming paper entitled "Provosts' Perceptions of Academic Library Value & Preferences for Communication: A National Study" surveyed provosts at ARL libraries and found that a high percentage of them (72%) said data correlating library use with student academic success had a high influence on budget requests, compared to other types of data such as satisfaction surveys, basic utilization data or focus groups.

In a similar vein, in Architecture of Authority, Angela Galvan writes:

One of the reasons I think Alma is increasingly adopted is because it makes provost candy easy to generate. But how many of the people we hand those analytics to for decision making are literate in the ways we need them to be to understand data visualizations, to give us FTE, a bigger collections budget, pay for interns?

For more objections to correlation studies, or more broadly whether libraries should even be collecting student data for such studies, see "Can we demonstrate academic library value without violating student privacy?" or "Learning Analytics and the Academic Library: Professional Ethics Commitments at a Crossroads".

In any case, this assessment vs advocacy tension was brought up at the Library Assessment Conference 2016 as well:

"As Lisa Hinchliffe from the University of Illinois at Urbana-Champaign explained in her keynote, Sensemaking for Decisionmaking, there is a distinction between library assessment and advocacy, and the two can require very different approaches; with assessment efforts, data needs to be gathered and analyzed thoughtfully and deeply, while with advocacy, information should be summarized and condensed for other stakeholders. (However, in both endeavors, it’s crucial that we make evidence-based decisions, and not decision-based evidence; that is, we do not attempt to draw conclusions and then find evidence to back up those conclusions.) The tension between these two endeavors—assessment and advocacy—was echoed throughout the sessions." – Building Effective, Sustainable, Practical Assessment Notes from the Library Assessment Conference

Essentially the question is about doing studies to prove one's value vs doing studies to assess performance. Of course, as Jacob Berg notes, this can be dangerous:

We don’t assume libraries have value because we’re constantly having to say so, or otherwise discuss our relevance. You don’t hear people discuss how the Provost has value, or the university president. This has dramatic implications for how the library behaves. It shouldn’t be “save libraries” it should be “libraries save”.

Concluding thoughts

As I'm hardly an expert in these two complicated library domains (scholarly communication and library assessment), I'm not going to weigh in on which side of the debates (desirability of open access, correlation studies, etc.) I fall on.

Still I wonder if there is a difference in the way advocacy is framed in both cases.

Rick Anderson suggests that advocacy is inherently subjective – that, as an advocate, one should tell only the parts of the story that support one's goals. This seems intellectually dishonest. In the summary of Lisa's keynote, the author mentions a "tension" between the two, but I'm unsure what the tension is beyond the fact that the results have to be summarized; presumably all evidence should be shown, even or especially unfavorable evidence?

Is advocacy inherently bad? Probably not. But what's the difference between good advocacy and bad advocacy? And what is the difference between good advocacy and good analysis/assessment; aren't they the same if both are objective?

Overall, I think we should be very careful about slipping into advocacy mode, particularly when we do a piece of research already expecting results that support our agenda.

But we can't avoid advocacy totally. While librarians and libraries are not neutral (or at least that seems to be the popular view now), neither do we want to be a source of fake news.

In fact, these days we are trying to stake a place at the table in helping users handle fake news, but again we must be careful not to overstate our role in this issue (advocacy rears its head again).

 
