The 2016 introduction of HEFCE’s open access research policy and specifically its “deposit on acceptance” message has led to a large volume of restricted-access items being placed in institutional repositories. Dimity Flanagan reports on how LSE Library’s “request a copy service” has offered would-be readers a way to overcome this obstacle to research, and how the data the service provides […]
In Disrupt This! MOOCs and the Promise of Technology, Karen Head draws on a “view from inside” of developing and teaching a first-year writing massive open online course (MOOC) to critically interrogate the claim that such technology will fundamentally “disrupt” educational structures. This is an eloquent and intricate analysis that shows how personal experience and practice can add nuance to questions regarding the egalitarian […]
Academic publishing is a necessity, yet continues to be a point of concern for institutions and individuals alike. Academics, students, teachers, and researchers all need the ability to publish and share their findings, but the model academic publishers utilize is woefully inadequate. Not only do publishers exploit their authors for profit, they also gate this content, meaning some information may never see publication regardless of how important or valuable the information may be.
Glasstree’s continuing mission is to break down those barriers. We started by introducing self-publishing to academia with Glasstree’s publishing tools. Instead of hoping to be accepted by a publisher, Glasstree empowers academics to take control of their content by publishing it themselves!
Lulu is Glasstree’s parent company, and together we aim to make knowledge, literature, and publishing available to everyone. Whatever your story – be it a novel or a dissertation – we are here to help you share it.
The next step in our evolution is here: Print API.
What is an API? And how does this new tool benefit academics interested in self-publishing? Keep reading and we’ll explore these questions.
Even if you’re unfamiliar with the technical aspects of APIs, you’ve almost certainly encountered them online without realizing it. The acronym API stands for “Application Programming Interface.” At its most basic, an API is code that allows two separate pieces of software to talk to each other. That, in and of itself, is pretty simple.
Retailers, individuals, and institutions all make use of APIs to expand their capabilities and offer their users more options, better pricing, faster shipping, and more. Lulu’s Print API serves the same function. Once the API is integrated, users can create unique “buy now” options on the shop pages of their websites, and all orders placed are channeled into our global printing network, to be fulfilled by the same process as any order on Glasstree.
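To make the “two pieces of software talking to each other” idea concrete, here is a minimal Python sketch of what a storefront’s hand-off to a print API might look like when a customer clicks “buy now.” The endpoint URL, field names, and payload shape below are illustrative assumptions of mine, not the documented Lulu Print API schema; the real specification lives at developers.lulu.com.

```python
import json

# Placeholder endpoint for illustration only; not the real Lulu API URL.
PRINT_API_ENDPOINT = "https://api.example.com/print-jobs"

def build_print_job(title, page_count, quantity, shipping_address):
    """Assemble the JSON payload a storefront would POST to the print
    network when a customer clicks 'buy now'. Printing and shipping then
    happen on the print network's side; the storefront never touches
    inventory. Field names here are hypothetical."""
    return {
        "line_items": [{
            "title": title,
            "page_count": page_count,
            "quantity": quantity,
        }],
        "shipping_address": shipping_address,
        "shipping_level": "GROUND",
    }

job = build_print_job("My Dissertation", 212, 1,
                      {"name": "A. Reader", "country_code": "GB"})
print(json.dumps(job, indent=2))
```

In a live integration, the storefront would POST this payload (with authentication) to the provider’s endpoint and receive back an order ID to track fulfillment, while the customer-facing page never changes.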
Breaking down Boundaries, Creating Partners
From a technical standpoint, our Print API service may not seem like an exciting piece of news for the individual author (APIs run in the background and are never seen). API tools are usually meant for web developers, who implement the cross-platform code so that two discrete programs work in harmony. The average author might have little need for an API connection if they don’t want to sell directly from their website.
That being said, publishers and academics need APIs for many things. We understand that need, because we’ve lived in that world for the last fifteen years. We’ve witnessed, year after year, small and independent publishers who start up, bring on a handful of authors, publish a few books, and then eventually fold. Yes, of course, some small publishers succeed, and some even succeed beyond all expectations. We’re more concerned with the publishers who couldn’t keep up.
One of the biggest problems facing many small publishers is the cost associated with printing and fulfilling book orders. The price to print and ship can be prohibitive for small publishers, who are likely operating on a limited budget and need to make the most of every dollar invested. Print API is an answer to the funding problems these small publishers face. Because the Lulu Print API enables direct print-on-demand services at low prices, small publishers can remove the cost of printing and storing books from their budgets.
Just like using Lulu’s self-publishing tools, the Print API features all the formats and sizes Lulu has to offer, at the same low prices, and with the same quality and global shipping you’ve come to expect from Lulu. The difference is that publishers the world over can plug into our network while maintaining their brand’s independence.
Harnessing the power of the Web
The API process capitalizes on Internet connectivity to enable collaboration among a variety of companies and individuals, further opening the printing and publishing world to more readers, authors, and publishers.
Pricing is another important aspect of an API connection. Rather than pricing your book on the Glasstree site and splitting revenue between your profit and our commission, you keep 100% of the profit. The price you charge on your site is entirely up to you! With the API integrated, Glasstree bills you only for the printing and shipping, while the amount you charge a customer stays entirely on your end. This expands on the already generous and easy-to-control profit model Glasstree uses.
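As a back-of-the-envelope illustration of that split (the dollar figures below are made up for the example, not actual Glasstree pricing), the math works out like this:

```python
def storefront_profit(retail_price, print_cost, shipping_cost):
    """Under the API model you set the retail price on your own site;
    the print network bills you only for printing and shipping, so
    everything above that cost is yours to keep."""
    return retail_price - (print_cost + shipping_cost)

# Hypothetical numbers: charge a customer $24.99 for a book that
# costs $6.50 to print and $3.99 to ship.
profit = storefront_profit(24.99, 6.50, 3.99)
print(f"Your profit: ${profit:.2f}")  # Your profit: $14.50
```

Contrast this with a commission model, where a fixed percentage of that margin would go to the platform before it reached you.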
In a university setting, an API tool offers an institution’s students and teachers a means of publishing and sharing their work, all from within the institution. The college bookstore can host these print-on-demand titles on its website and facilitate printing through the API connection. Costs are minimal, and the bookstore can easily make the necessary profits, all while controlling overhead and storage.
An API connection completely removes barriers to publishing. The institution need only implement the API and provide the file standards for uploading (the same specifications used for publishing on Glasstree). Students, teachers, and researchers can all publish their works at a minimal expense, while their institutions can list these books via our API on a college bookstore website for anyone to purchase.
Integration is In
Using API integration is more than just the cool new thing happening across the web. Take a look at this article from TechCrunch last year, “The Rise of APIs.” While the title sounds very Terminator-esque, the point the author makes is clear: third-party APIs are the future, and they are here to shake up the way the Internet works. The opening paragraph of the article sums it up: “there is a rising wave of software innovation in the area of APIs that provide critical connective tissue and increasingly important functionality.”
While a clean and easy-to-navigate interface is always going to be important, the ability to quickly implement a new program through API connections is what will keep web-based retailers one step ahead. Adding new features, replacing out-of-date products, and generally being able to work with the range of other programs on the web are key to staying relevant; API connections address all of these needs. All modern software providers are conscious of API connectivity, and of the implications of creating software that does not allow for API integration. The way of the future is sharing, through both open and private API connections, and mutually finding success through shared programming.
Lulu and Glasstree embrace this mentality wholly. From day one, we’ve been a company designed to help content creators better share their stories and knowledge. Enabling API connections with our print network is a logical and necessary step for us.
Looking to the Future
Academia has always been an institution that has to keep an eye toward the future. Because schools and teachers are the ambassadors of knowledge for generations to come, the means to disseminate and archive knowledge have always been critical.
Look for more from Glasstree in the future, as we continue to make innovations in the publishing community. For now, you can check out our API/Developer’s Portal site at developers.lulu.com to learn more about Print API and see if the tool might be right for you.
“As we organize, we must be mindful that organization, while useful, can also be a trap. The trick is to organize just enough to create order and meaning, but not so much that you become enslaved by process.”
— Jessamyn West
Strange things start happening when you limit your search to particular branch locations in our resource discovery layer. A query for “beer” within our Special Collections yields 62 records. The facets on the results page show how you can further refine that list to 18 items that are also in the general stacks. To be clear, we have a total of 651 matches for “beer” in the general stacks; however, the initial restriction prevents most of those titles from appearing.
This article examines how innovative technology platforms, recently introduced by new players in the university services space and public arena, have the potential to open up additional revenue-generation opportunities for the university research funding toolkit. How aware are universities of these new technology platforms and their revenue potential? Given anticipated EU funding upheaval (and the potential removal or reduction of funding sources) and the lack of clarity surrounding Brexit (creating what looks to be a prolonged period of instability and mixed messaging in funding circles), the time is ripe for university management, financial stewards, and library managers to embrace new technology platforms as part of their strategic financial planning, in order to take advantage of emerging revenue models alongside existing operations.
Summary: Although there continues to be room for improvement, current search engine technology provides a better user experience than its predecessors did. Yet many librarians insist on teaching how research used to be done. Those who make exaggerations and excuses for clinging to the past do so at their peril.
Summary: Controlled vocabularies are inherently subjective, arbitrary, and a more rigid semantic layer than is necessary in an age of full-text indexing and machine learning. This should not be a controversial claim, considering how the most widely-used search tool on the planet already operates.
“Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.”
— George Box
Years ago, I spent a good deal of time compiling library use statistics, primarily for database searches, but also reference desk transactions, bibliographic instruction sessions, and even paper consumption — which dropped precipitously once we started charging for printing. It was useful to have those numbers on hand when colleagues asked for them. They were usually working on a grant or a report or actually considering making an evidence-based decision.
Thanks to improvements in how we record statistics in the first place, as well as better (and, due to mergers, fewer) vendor interfaces for pulling data, it’s now a lot less work, especially since I don’t spend much time running numbers unnecessarily. A more common situation nowadays is that someone wants to know how many chat questions we received last year, and only then do I sign in to QuestionPoint to retrieve the relevant information.
It’s more efficient to hold off on a task until you’re sure it’s needed. There’s little purpose in determining how much a service is used if administration is incapable of considering its cancellation, for example. And as much as innovation is at times worth the risk, making decisions based on speculation rather than demonstrated demand can result in an unproductive workload.
That’s why we follow the best practice of putting a login prompt at the point of need, instead of needlessly gating access to free sites. I’m still waiting to see libraries that adhere to the opposite philosophy of, “they might need to sign in later, so let’s require it now” fully embrace the idea and require authentication to view their homepage.
A library could keep a record of books that are 25cm high. It sounds downright silly when you put it like that, because you can instead conduct searches in a library services platform specifying that sort of thing. The time spent on maintaining such an inventory would not only be a waste, it also wouldn’t help educate people on how searching works.
Google certainly doesn’t curate a pre-coordinated index of websites about the French Revolution, apart from what it can generate on the fly when someone searches for those words. Admittedly, the ways in which computers present seemingly intelligent results are now rather roundabout, relying upon human behavior: link popularity, co-citations, and paired reading habits (à la “customers who looked at this item also bought…”) all influence how relevancy is calculated.
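The link-popularity idea is easy to demonstrate. Below is a toy power-iteration ranking of my own devising, in the spirit of PageRank (a simplified illustration, not Google’s production algorithm): no human assigns a subject heading to any page, yet the page with the most inbound link weight rises to the top purely from the structure of the links.

```python
# Toy PageRank-style ranking by power iteration (illustrative sketch).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank score summing to 1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Everyone starts each round with the "random teleport" share.
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its weight evenly across the web.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(web)
# "c" collects the most inbound link weight, so it ends up ranked highest.
print(max(ranks, key=ranks.get))
```

No cataloger ever declared page “c” the most authoritative; the crowd of linking pages did, which is precisely the kind of behavior-derived relevance the paragraph above describes.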
If you need a list of library items in a specific format, on a certain subject, sorted a particular way, it’s almost trivial to execute a search and retrieve those desired items. Our diminished amount of in-house documentation, reference desk traffic, and time spent in classroom settings imparting procedural knowledge partially reflect this reality.
We likewise don’t maintain shelf lists much anymore, although there are some exceptions. After all, it should come as no surprise that most people have a propensity for insisting that whatever they do for a living is still very much needed in its current form. The A–Z index of journals, although an inefficient method for searching, cannot be removed because some individuals want it around, while our LibGuides site is chock-full of pathfinders of varying depth and currency, and the entire video collection is manually cross-listed by genre for some reason.
Last month the Open Directory Project shut down. It was one of the last human-powered endeavors using a hierarchical taxonomy to classify the entire web. Search engines have done a better job, at least as measured by popularity, making web content discoverable. It’s mainly a matter of collective processing power, when not even an army of volunteers could match the robotic might of Google. Reliance on automated crawling also eliminates the thorny problem of human subjectivity.
One of the reasons astrologers claim validity for their craft is that practitioners with the same precepts would, in theory, come up with a similar horoscope for a person with a given birthday. Determining the objective “aboutness” of a publication isn’t always as straightforward. Beyond measuring and describing the physical dimensions of materials, subject cataloging is prone to “eye of the beholder” judgments. Is Romeo & Juliet a tale of romantic love, or one of warring families? Once you go down the rabbit hole of interpreting literature, there’s no end to the different meanings that could be applied to the same work.
I regularly put stuff back in the “wrong” location when I unload the dishwasher. Last time it was placing the 1/4-cup measuring cup in the same drawer as the 1/2-cup one, although that wasn’t where it belonged. Of course, if I knew how the utensils were previously arranged, I’d be sure where to put them; however, someone who’d never worked in the kitchen before might have a more difficult time finding everything.
What’s intuitive to some, others may take issue with. And given the nature of private experience, nobody gets to tell them otherwise. There’s no end to the different ways items can be categorized. Just look at what constitutes kosher food. (Although I can certainly see why, in an era with no refrigeration, staying away from shellfish would become a popular habit.) The problem of limited access points (e.g., is a book about the record industry shelved in music or in the business section of the library?) could just as easily apply to any system using a controlled vocabulary. At some point, then, cataloging is more of an art than a science.
Our evolving and imperspicuous language introduces several pitfalls, as does the existence of different dialects, translations, wordplay, and allegorical speech. Then there’s the more philosophical quibbling (think Plato’s Cave) over whether linguistic labels can ever perfectly represent universal properties, or even whether there are such things. That hunk of rock floating in space called Pluto obviously didn’t empirically change much when we stopped calling it a planet, after all.
“I know it when I see it” is a famous exasperated claim regarding the legal classification of pornography. That’s pretty much what most typologies come down to. Even when there is a method to such madness, people attempt to ascribe natural kinds to things that are ontologically as arbitrary as any other social construct.
Aside from its place on the evolutionary tree, does asking whether a tomato is a fruit or a vegetable have any meaningful basis in reality? You invariably end up with wacky exceptions to any taxonomy. All birds can fly, except those that don’t. All mammals give live birth, except the platypus. So instead you fit the operational definition of a mammal because your ear has three bones in it. See also the concept of corporate personhood, the logical leap of calling campaign donations an exercise of free speech, or the curious case of determining whether the X-Men are human.
Peculiar classification issues pervade many sports. For example, in which category does a transgender athlete compete? In the 1988 America’s Cup, the US team sneaked in a multi-hull design that blew away the other contenders, adhering to the letter if not the spirit of the rules on boat specifications. While his opponents walked from hole to hole, Casey Martin got to use a golf cart on the PGA Tour, thanks to the Americans with Disabilities Act. Anthony Robles is an NCAA wrestling champion, dominating the competition to go undefeated in his senior year, an inspiring achievement considering he’s missing a leg. Anyone with a basic understanding of the sport, however, recognizes the substantial advantage in upper-body strength that he has in his weight class when it comes to ground grappling. Competitive runners Aimee Mullins and Oscar Pistorius are also missing their legs. Just how long their prosthetic limbs would have to be to constitute an unfair advantage is a question of some debate.
All of these typology discussions are largely academic. Ultimately, any effort at a simple schema gives way to a kind of uncertainty principle: the more you try to make distinctions, the more gerrymandered the classification criteria become (as with the platypus), no matter how you cut it. It’s enough to make me think it’s time to consider whether metadata, at least the kind generated by people before a search is done, has any future value.
If the human element in cataloging is removed, a layer of abstraction, namely subject headings, may be eliminated. But without those controls, the cataloger insists, how can we know that books penned by “Richard Bachman” were written by Stephen King, much less properly classify, and thereby aid researchers in finding, a collection of haikus, or whatever Ulysses is about? There does appear to be a benefit in deriving metadata external to the work itself, as with an old map of New York City titled “New Amsterdam,” to aid in the discovery process.
A substantial acquaintance with things outside the item being classified seems necessary in order to categorize it correctly, or at least optimally. This basis is also essential for verifying alternative truths. As George K. Fortescue put it, “is it not rather the peculiar felicity of the librarian’s calling that in whatsoever reading or study he may follow for his own sake, he is also adding steadily to his ability to carry out his daily duties?” (27 August 1901)
The more knowledge you possess, the better cataloger you become. This holds for human and computer alike. One of our profession’s many elephants in the room is the question of if and when computers will “know” how to catalog items better than us, provided they haven’t attained this skill already. Considering machine intelligence isn’t going to decline, I’d say it’s becoming increasingly apparent that believing artificial agents will never be able to do original cataloging is nothing more than wishful thinking. Perhaps we should be working more to prepare for the future rather than romanticizing the past.
“Nothing will ever be attempted if all possible objections must first be overcome.”
— Samuel Johnson
“We heard from librarians that the best time to prepare for and launch a new offering would be at the start of a new semester (rather than introducing a mid-semester change) and that advance prep time would be helpful in getting the library prepared to support a new offering.” So states a promotional flyer about an impending update to the LexisNexis Academic database. The vendor’s plans include a period of parallel availability, followed by deactivation of the legacy site on December 31, 2017.
Believe it or not, in the library world, this is an aggressive schedule. The FirstSearch flavor of WorldCat was originally supposed to disappear at the end of 2015. It is still around. A revamped LibGuides platform was made available in 2014. The older one will be retired in early 2018. The latest iteration of RefWorks has been around since January 2016, although we’re still waiting on a way to force users to migrate from the earlier version (ProQuest told me last June, “Our Development Team is working on this and it should be available soon.”). Lastly, at this rate, I expect the new Primo interface, which was released last August, to exist alongside its predecessor until at least the next congressional election.
Running old and new sites in tandem is often a “worst of both worlds” situation. It practically doubles your support and development workload, while the distribution of users trying out your public beta is in many ways the opposite of what it should be. Certain people will only transition when they absolutely have to, so the only way to get everyone acclimated with current technology is by removing the choice to use outdated products.
Maintaining multiple systems, especially in the days of the perpetual beta and agile release cycles, is not something I suspect a service provider would normally choose to do. The examples above, however, show how much catering is done to a client base that is resistant and averse to change. Within an established profession that values tradition and prizes order, a fear of failure unduly paralyzes us from efforts to improve ourselves. This can lead to the biggest catastrophe of all: becoming obsolete.
The university library I work at, like many others, has developed a tradition of eschewing mid-semester upgrades. The perceived need to keep everything the same when classes are in session has grown to almost mythical proportions. In an environment of shared governance, there is a partial chilling effect from certain cranky faculty (not to be ageist, but let’s face it, when was the last time you heard a freshman complain about something new?) that makes us a little too gun-shy about ever making any modifications whatsoever.
The original date for our previous catalog to go offline was pushed back because a tenured individual pointed out to library administration that even though the planned time was after the semester was over, grades had not yet been handed in. It was therefore dangerous to expect instructors to learn how the new discovery layer (which had been live for 23 months, mind you) functioned while they were conducting searches for the purpose of grading papers. And these are the same people we expect to defer to our expertise when it comes to which journal subscriptions to cancel?
Another common position is basically, “We’ve just held classroom sessions on how to use the current interface, so you can’t change it, much less decommission it, until next semester,” even if doing so would improve the experience for the other 99% of our patrons. Or, put another way, “But we need time to revise all of our handouts!” Such approaches could effectively delay upgrades forever. We need to do a better job of keeping up with the times, offering the best available service overall, and following a sustainable instruction model. Progress is impossible without change.
Furthermore, as any academic employee can attest, when we put off so many projects to only be done during the summer months, some of that work invariably never gets done. Protracted implementations are our own self-imposed version of development hell.
Entire bookshelves have been written about the business of handling change. Many management fads, which may or may not be rigorously scientific, also attempt to explain our stunted capacity to think differently. Some deal with human needs (as in Maslow’s hierarchy), a common pathology of overestimating the dangers of change, or even the evolutionary biology of why we often view change as a threat.
Similar to the representations of technology adoption and the hype cycle is one visual I find particularly meaningful in my work. It pinpoints why fear of change is often a shortsighted fallacy.
If we view the first curve as a technology beginning to die out, and the second curve as a newer and more promising alternative, their first point of intersection shows continued improvements to the old system alongside the short-term upstart costs of investing in new methodologies. The best time to change is therefore when there is no immediate benefit to doing so, whereas sticking with successful practices eventually ensures their failure.
If you’ve read almost any of my previous posts, it should be no surprise to hear me say Point B is where many library service models and mentalities currently reside. Several of us have apparently created a variation of the can-do adage, “it’s better to seek forgiveness than to ask permission,” and instead act upon the seemingly safer yet downright delusional and cowardly course of, “it’s better to change nothing than potentially upset someone.”
Playing in the future is risky, but it sure as hell beats watching a profession shrink into irrelevance. As much as we shouldn’t overstate the certainty of the expected benefits of modern systems (e.g., “I promise this is going to be the best upgrade ever!”), it’s equally important to realize that indecision and intransigence can be quite costly, even if it’s not summer break.
There are times when making no choice is the worst choice of all. When the only constant is change, we shouldn’t be too enamored of present conditions. In the words of Ieuan Maddock, “To cherish traditions, old buildings, ancient cultures and graceful lifestyles is a worthy aspiration. In the world of technology it is a prescription for suicide.”
Summary: While not necessarily the root of all evil, the goals of for-profit companies acting in their own self-interest do not necessarily align with the purpose of libraries, and in some cases even disrupt libraries’ ability to fulfill their mission. A broader use of available technology as a delivery aid rather than a tool for restricting use, coupled with increased collaboration among librarians, could render paying other organizations to do our work largely unnecessary.
“You can’t live with them … pass the beer nuts.”
—Norm Peterson (Cheers)
Planned obsolescence keeps certain companies in business. This is nothing new. In the 1920s, a cartel of light bulb manufacturers famously reduced competition by essentially stopping the development and sales of longer-lasting bulbs. Products are still designed with a deliberately finite life span in order to maximize revenue from replacement purchases.
Thanks to the Internet of Things, technological obsolescence is now quite easy to enforce. Last year, for example, Google pushed out a software patch to a product of a former competitor which they had bought out. When those functioning devices received the update, it intentionally rendered them permanently inoperative. That’s obviously one way to generate new customer demand.
Companies also employ many psychological tricks to convince consumers they need the latest and greatest features and fashion. One notable marketing device is the concept of contrived scarcity. The diamond industry is perhaps the best case of how people have been persuaded to spend significant sums of money on, in this situation, rocks which are neither rare nor precious.
Should we pay money for desired goods and services if their provider’s motives aren’t entirely altruistic? Let’s face it: the reason your credit card rate is 34.99% isn’t because the bank thinks that 35% would be too much. Even in fields supposed to be contributing to the public good, it’s clear that companies don’t always have the welfare of the consumer as their top priority.
Competition and greed in the marketplace can produce stark inequities, but they also help drive innovation, lower costs, and enable class mobility. Many firms likewise do good work providing a social safety net. Yet corporate philanthropy can have its roots in strategic self-interest. In 1999, Philip Morris gave $60 million to charities, and spent $100 million on an advertising campaign touting those donations.
The raison d’être of a business is to maximize earnings and shareholder value. When Facebook made a censored version of itself for China, or retailers became champions of gay rights, or circuses and aquariums started phasing out the use of captive animals, or when Simon & Schuster finally made the call not to publish a book by Milo Yiannopoulos, those decisions were all primarily driven by the impact on the bottom line. While I don’t work for free either, there are professional standards about the freedom to read, intellectual freedom, and patron privacy which I don’t need to sacrifice because they might interfere with our profit model.
We should never be more focused on preserving our own job security than achieving the benefits of embracing progress, even if it would render our current role obsolete. Many industries, on the other hand, have a vested interest in maintaining the status quo, to the point of attacking useful discoveries and innovations that potentially undermine their revenue stream. It’s important to remember those capitalistic concepts when navigating the library information and product marketplace.
My first job as a librarian was at a chemistry library. We subscribed to an online version of Chemical Abstracts, yet the client was unavailable during business hours. As an academic customer, we didn’t pay for access at times of peak demand. SciFinder also only allowed for a maximum number of simultaneous users.
Some databases still do this, most notably electronic book packages, where if one library patron is viewing a title, all others are locked out. Library audiobooks contain similar controls, and publishers have employed DIVX-style restrictions to limit the number of times an e-book can be “circulated” before access is removed. This is another way to artificially inflate demand, since there’s no good reason for electronic manifestations of information to share the same constraints on distribution as their physical counterparts, aside from their creators’ desire to monetize usage.
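To see how artificial those constraints are, here is a toy model I sketched of a licensed e-book enforcing a one-simultaneous-user lock and a lifetime circulation cap. The class and its behavior are my own illustration rather than any vendor’s actual implementation; the 26-loan default echoes a cap some publishers have actually imposed.

```python
# Illustrative model of publisher-imposed e-book lending constraints.
class LicensedEbook:
    def __init__(self, max_circulations=26):
        self.checked_out = False
        self.circulations_left = max_circulations

    def check_out(self):
        """Simulate a patron attempting to borrow the single 'copy'."""
        if self.checked_out:
            return "locked: another patron has this copy"
        if self.circulations_left == 0:
            return "expired: license exhausted, repurchase required"
        self.checked_out = True
        self.circulations_left -= 1
        return "checked out"

    def check_in(self):
        self.checked_out = False

book = LicensedEbook(max_circulations=2)
print(book.check_out())  # checked out
print(book.check_out())  # locked: another patron has this copy
book.check_in()
print(book.check_out())  # checked out
book.check_in()
print(book.check_out())  # expired: license exhausted, repurchase required
```

Nothing in the underlying file format requires either restriction; both the lock and the counter are policy choices layered on top of a resource that could otherwise be copied at zero marginal cost.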
These sorts of artifices should rub us the wrong way. I’m old enough that when I became a librarian, the biggest impediment to sharing information was technological limitations. We didn’t have the means or the labor or the bandwidth to create a free and digital library. As Google Books and more importantly Sci-Hub have aptly albeit partially demonstrated, that is no longer the case.
The main barriers libraries currently face when it comes to spreading knowledge are the digital restrictions and safeguards put in place by publishers, which by design suppress the ability to disseminate works more readily and are therefore antithetical to our mission. I don’t say this as a letter of hate to those content “owners,” but as a member of a profession that would not and could not exist if copyright were absolute.
The problem, at least for those hoping to preserve the traditional publishing economy, is that if the right of first sale applied to electronic formats, a single library could purchase a title and then instantly lend it online to not only every one of their members, but also, through an interlibrary loan network, virtually every other Internet user in the world. Although most electronic licenses prevent this sort of thing from happening, those kinds of thought experiments illustrate how we’re not taking full advantage of the Web’s potential capabilities. Imagine also if a multitude of online libraries displayed the type of “fair use snippets” that Google Books shows, or if thousands of people could each upload a ten-second clip of a feature film on YouTube. It would then become a trivial matter to compile and view everything for free.
Would this put people out of business? When livelihoods are threatened, you start hearing shortsighted and hollow claims such as, “but we can’t possibly offer unlimited online library access and still operate,” or “but if we let our songs be played over the free radio we can’t sell records,” or “but we’re unable to offer cheap pharmaceuticals in developing nations because we can’t afford it,” or even “I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone.” Evidence on the correlation between piracy (much less public access) and sales is at best flimsy, while some claims are patently untrue. It could very well be the case that free digital libraries would exist alongside and in fact promote the retail side of our information trade.
Instead, we face rising costs. Predatory pricing is the expected consequence of library collection managers agreeing to pay more than what a publication is worth (or rather, what it should cost, since I would argue free information is the most prized commodity of all). Many scholarly authors have also demonstrated their vanity, and a tendency to act as if more interested in prestige than exposure, by being unwilling to support open access models. Until we stand our ground against an increasingly and maybe now inherently corrupt system, the crisis will continue.
“Free” here is admittedly a misnomer. Ultimately, web servers cost money to operate. Nonetheless, consider how editorial boards can today function without those profiteering and rent-seeking middlemen. For almost twenty years now, we’ve known that “it is technologically possible and economically feasible to build a system of dissemination for academic resources that is completely administrated by the scholarly world without the intervention of economic interests.” Sadly, the notion that publishers add value, a holdover from the days when typesetting was required to make scholarship accessible, retains tremendous momentum.
Lest you think I’m being hypocritical in asking others to work practically for free, or to accept or even aim at putting themselves out of a job: the comparable claim that “we don’t need libraries anymore now that everything’s online” is one I can only hope someday becomes true as well. I would love to live in a world where libraries are no longer needed for providing unmediated access to information. If fulfilled, the promise of open access would render unnecessary the library’s role as a purchaser of information for its constituents. It’s dangerous to start falling for the alluring yet fallacious reasoning behind such claims as “we’ll always need libraries.”
Much of what libraries used to do is already provided by commercial interests. Search engines are a path to resource discovery, readers’ advisory is offered through any number of recommendation agents, and a wealth of content is now readily available outside the library as well. Oddly enough, this renders our role of educator and equalizer all the more crucial. Given the current political climate and continued rise of filter bubbles, there’s definitely still a need for librarians to teach responsible content creation, as well as how to excavate quality sources, constantly and critically evaluate an array of publications, combat biases, and promote a scientific view of the world.
With the proliferation of democratized knowledge formats comes the side effect of information and misinformation overload. This is analogous to the tragic fact that more people now suffer from obesity than hunger. In both cases, everyone needs to be a little more mindful of their intake, and taught the benefits of seeking and consuming quality materials. I’m not sure why we continue to be surprised and appalled at students’ preference for moving on to the next full-text result, which may not even come from a library, over retrieving a print item from the stacks, let alone waiting for an interlibrary loan. Like it or not, freemium-type information providers offer a convenience which has proven more appealing than tried and true research methods.
Aside from content publishers, there are other library service providers and similar organizations we outsource our work to. In many circumstances, just as we don’t run our own electrical generators or build our own furniture, this is a cost-saving move. Consumerism has its place, although I’m uncertain it can ever truly be a win-win situation when there’s a corporation taking a cut of the profits. Moreover, many vendors receiving our money seem to be exploiting the precedent and mindset that we can’t just do it ourselves.
A fitness regimen should operate within the bounds set by atrophy on one side and overtraining on the other. Similarly, expenditures should be made if and only if whatever’s being paid for cannot be accomplished for less money through the time and commitment to do the job in-house or with colleagues. Examples range from a state covering employees’ health benefits directly instead of subsidizing a for-profit health insurance company, to Amazon reducing expenses by delivering its own packages, to someone saving a few bucks by learning to install wiper blades themselves rather than paying an auto shop for thirty seconds of labor.
Take Springshare, for example. I don’t mean to pick on them, since LibGuides is actually one of the better products out there, in terms of cost and features and support. Amongst academic libraries, their client base is downright ubiquitous. But why are so many libraries paying for this service when SubjectsPlus is free?
A lack of local infrastructure or expertise could be one reason. However, that’s easily remedied by libraries pooling their interests to work more together on shared systems, rather than succumbing to “not invented here” syndrome and each going off to independently program homegrown solutions. Why don’t we collaborate more? This profession created WorldCat, after all. There’s no need to reinvent the wheel when we can instead build upon the existing work of our peers. Imagine if a fraction of the money every library spent on RefWorks subscriptions was instead used to ensure Zotero was a better product in every regard.
There’s a charming proverb about how new ideas go through three stages of existence: first they are denied and ridiculed; then they’re violently opposed; and lastly, they become accepted as trivial truths. The slow acceptance of Wikipedia is my favorite example of this phenomenon. In this and many other cases, ignoring popular trends caused us to miss the boat on opportunities to provide value.
Innovation from corporate interests is rewarded with revenue and market share from change-averse libraries too afraid to take risks. The success of Google is largely due, in addition to the pioneering work of Eugene Garfield and his colleagues, to the fact that librarians were off cataloging websites rather than building a search engine. Likewise, while we were ensuring all patron records were purged, LibraryThing was born; it continues to take in money from people willing to populate a commercial database with their reading habits and book ownership. Companies thrive in the library marketplace by preying on our complacency and intransigence.
Some vendors even receive free labor from us. In my experience with library management systems, a good deal of the development work I see being done is completed by librarian customers who are in theory paying for products which should afford optimal functionality by default. Nevertheless, we happily code bug fixes, design feature enhancements, and implement numerous other usability improvements. We also submit a steady stream of reported coverage errors regarding our subscriptions, and the workload for this process feels on par with what it was when we maintained our own database of availability ourselves.
The development roadmap and timeline for our library service platform has been at times uneven and lately downright nonsensical. An interface update is now over a year behind schedule. Over a half-dozen new features that I was excited about, supposedly based on rigorous product testing, have been clumsily reverted to match legacy systems. This was presumably done because some customers complained about the changes, although that’s mainly conjecture, since it’s not a transparent process. Unfortunately, this is a limitation of closed source which comes from dealing with corporations that can be prone to mergers, bankruptcies, and disputes with competitors.
When a public institution outsources the production and provision of goods or services to an external enterprise — whether it be the federal government buying missiles from a commercial defense contractor or a library paying for online access from scholarly societies and software licenses from for-profit vendors — it’s easy to end up with prices marked up like in a hotel mini-bar, as evidenced by a $435 hammer or a $507,000 subscription to a few dozen chemistry journals. We can move in a better direction, through policy and practice, with advocacy and support for open access publications, open textbooks, open education resources, institutional repositories, and free or open source software.
In just a few short years, our profession has gone from the conceit of believing commercial competitors were unworthy of acknowledgement to a fatalism that we must purchase exorbitantly priced products because that’s the way the system works. This is not the business we’ve chosen. In an era of increased privatization, if we’re to maintain any sort of relevance in the future, our over-reliance on organizations structured to make money needs to change. In the words of Ursula Le Guin, “We live in capitalism. Its power seems inescapable. But then, so did the divine right of kings. Any human power can be resisted and changed by human beings.”