The robots are coming — the promise and peril of AI, some questions

I’m at the Charleston Conference for the first time, and this morning we had a panel discussion about AI.

On the panel were:

Heather Staines, Director of Partnerships, Hypothes.is

Peter Brantley, Director of Online Strategy, UC Davis

Elizabeth Caley, Chief of Staff, Meta, Chan Zuckerberg Initiative

Ruth Pickering, Co-founder and Chief Strategy Officer, Yewno

and myself. It was a pleasure to be on a panel with these amazing people.

LSE’s “request a copy” service: widening access to research both within and beyond academia

The 2016 introduction of HEFCE’s open access research policy, and specifically its “deposit on acceptance” message, has led to a large volume of restricted-access items being placed in institutional repositories. Dimity Flanagan reports on how LSE Library’s “request a copy” service has offered would-be readers a way to overcome this obstacle to research, and how the data the service provides […]

Book Review: Disrupt This! MOOCs and the Promise of Technology by Karen Head

In Disrupt This! MOOCs and the Promise of Technology, Karen Head draws on a “view from inside” of developing and teaching a first-year writing massive open online course (MOOC) to critically interrogate the claim that such technology will fundamentally “disrupt” educational structures. This is an eloquent and intricate analysis that shows how personal experience and practice can add nuance to questions regarding the egalitarian […]

Out of the Comfort Zone: Web Literacy Training for Library Staff

The Willoughby-Eastlake Public Library is currently participating in the Mozilla Foundation’s Web Literacy pilot funded by an IMLS grant. The pilot includes 8 other libraries spread across the US from New York to Oregon.

The web literacy framework is based on three core 21st century skills: read, write, and participate. These areas are divided into more specific skills such as search, navigate, code, and protect.

A framework for entry-level web literacy & 21st Century skills. Source credit: learning.mozilla.org


Using APIs in Academia

Academic publishing is a necessity, yet continues to be a point of concern for institutions and individuals alike. Academics, students, teachers, and researchers all need the ability to publish and share their findings, but the model academic publishers utilize is woefully inadequate. Not only do publishers exploit their authors for profit, they also gate this content, meaning some information may never see publication regardless of how important or valuable the information may be.

Glasstree’s continuing mission is to break down those barriers. We started by introducing self-publishing to academia with Glasstree’s publishing tools. Rather than leaving academics to hope for acceptance by a publisher, Glasstree empowers them to take control of their content and publish it themselves!

Lulu is Glasstree’s parent company, and together we aim to make knowledge, literature, and publishing available to everyone. Whatever your story – be it a novel or a dissertation – we are here to help you share it.

The next step in our evolution is here: Print API.

What is an API? And how does this new tool benefit academics interested in self-publishing? Keep reading and we’ll explore these questions.


Even if you’re unfamiliar with the technical aspects of software APIs, you’ve almost certainly encountered them online without realizing it. The acronym API stands for “Application Programming Interface.” At its most basic, an API is code that allows two separate pieces of software to talk to each other. This, in and of itself, is pretty simple.

Retailers, individuals, and institutions all make use of APIs to expand their capabilities and offer their users more options, better pricing, faster shipping, and much more. Lulu’s Print API provides the same kind of functionality. Once the API is integrated, users can create their own “buy now” options on the shop pages of their websites, and all orders placed are channeled into our global printing network, to be fulfilled by the same process as any order on Glasstree.
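To make that flow a little more concrete, here is a minimal sketch of what a “buy now” integration might look like from the publisher’s side of the connection. The endpoint, payload fields, and authentication shown here are illustrative assumptions, not the documented Print API; the actual specification lives at developers.lulu.com.

```python
# Illustrative sketch only: the endpoint, field names, and auth scheme are
# assumptions for demonstration, not the documented Lulu/Glasstree Print API.
import requests

API_BASE = "https://api.example-print-network.com/v1"  # hypothetical base URL
API_KEY = "YOUR_API_KEY"                                # issued on developer signup (assumed)

def submit_print_order(interior_pdf_url, cover_pdf_url, quantity, shipping_address):
    """Forward a customer's "buy now" order to the print network for fulfillment."""
    payload = {
        "interior_file": interior_pdf_url,   # book block, prepared to the published file spec
        "cover_file": cover_pdf_url,
        "quantity": quantity,
        "shipping_address": shipping_address,
        "product_format": "6x9-paperback",   # hypothetical format identifier
    }
    response = requests.post(
        f"{API_BASE}/print-jobs",
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # e.g. a job ID the shop page can use to show order status
```

In practice, the shop page would call something like this when a customer checks out, then track the returned job until the order ships.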

Breaking down Boundaries, Creating Partners

From a technical standpoint, our Print API service may not seem like an exciting piece of news for the individual author (APIs run in the background and are never seen). API tools are usually meant for web developers, who implement the cross-platform code so that the two discrete programs work in harmony. The average author might have little need for an API connection if they don’t want to deal with selling directly from their website.

That being said, publishers and academics need APIs for many things. We understand that need, because we’ve lived in that world for the last fifteen years. We’ve witnessed, year after year, small and independent publishers who start up, bring on a handful of authors, publish a few books, and then eventually fold. Yes, of course, some small publishers succeed, and some even succeed beyond all expectations. We’re more concerned with the publishers who couldn’t keep up.

One of the biggest problems facing many small publishers is the cost associated with printing and fulfilling book orders. The price to print and ship can be prohibitive for small publishers, who are likely operating on a limited budget and need to make the most of every dollar invested. Print API is an answer to the funding problems these small publishers face. Because the Lulu Print API enables direct print-on-demand services at low prices, small publishers can remove the cost of printing and storing books from their budget.

Just like using Lulu’s self-publishing tools, the Print API features all the formats and sizes Lulu has to offer, at the same low prices, and with the same quality and global shipping you’ve come to expect from Lulu. The difference is that publishers the world over can plug into our network while maintaining their brand’s independence.

Harnessing the power of the Web


The API process capitalizes on Internet connectivity to enable collaboration among a variety of companies and individuals, further opening the printing and publishing world to more readers, authors, and publishers.

Pricing is another important aspect to consider with an API connection. Rather than pricing your book on the Glasstree site, where the price covers your profit and our commission, you price it yourself and keep 100% of the profit. The price you charge on your site is entirely up to you! With the API integrated, Glasstree bills you only for printing and shipping, while whatever you charge the customer stays entirely on your end. This expands on the already generous and easy-to-control profit model Glasstree uses.
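As a worked example (with invented figures, not Glasstree’s actual price list), the per-copy margin calculation looks like this:

```python
# Invented figures for illustration; real printing and shipping charges would
# come back from the API when the order is quoted.
print_cost = 4.50      # per-copy printing charge billed to the publisher
shipping_cost = 2.00   # shipping billed to the publisher
retail_price = 20.00   # whatever you decide to charge on your own site

profit_per_copy = retail_price - (print_cost + shipping_cost)
print(f"Profit per copy: ${profit_per_copy:.2f}")  # Profit per copy: $13.50
```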

In a university setting, an API tool gives the institution’s students and teachers a way to publish and share their work from within the institution. The college bookstore can host these print-on-demand titles on its website and facilitate printing through the API connection. Costs are minimal, and the bookstore can easily make the necessary profit, all while controlling overhead and storage.

An API connection completely removes barriers to publishing. The institution need only implement the API and provide the file standards for uploading (the same specifications used for publishing on Glasstree). Students, teachers, and researchers can all publish their works at a minimal expense, while their institutions can list these books via our API on a college bookstore website for anyone to purchase.

Integration is In

Using API integration is more than just the cool new thing happening across the web. Take a look at this TechCrunch article from last year, “The Rise of APIs”. While the title sounds very Terminator-esque, the author’s point is clear: third-party APIs are the future, and they are here to shake up the way the Internet works. The article’s opening paragraph sums it up: “there is a rising wave of software innovation in the area of APIs that provide critical connective tissue and increasingly important functionality.”

While a clean and easy-to-navigate interface is always going to be important, the ability to quickly implement a new program through API connections is what will keep web-based retailers one step ahead. Adding new features, replacing out-of-date products, and generally being able to work with the range of other programs on the web are key to staying relevant; API connections solve all of these problems. All modern software providers are conscious of API connectivity, and of the implications of creating software that does not allow for API integration. The way of the future is sharing, through both open and private API connections, and mutually finding success through shared programming.

Lulu and Glasstree embrace this mentality wholly. From day one, we have been a company designed to help content creators better share their stories and knowledge. Enabling API connections with our print network is a logical and necessary step for us.

Looking to the Future

Academia has always been an institution that must keep an eye toward the future. Because schools and teachers are the ambassadors of knowledge for generations to come, the means to disseminate and archive knowledge have always been critical.

Look for more from Glasstree in the future, as we continue to make innovations in the publishing community. For now, you can check out our API/Developer’s Portal site at developers.lulu.com to learn more about Print API and see if the tool might be right for you.

The benefits of uncertainty over privileging in “known item” searching

“As we organize, we must be mindful that organization, while useful, can also be a trap. The trick is to organize just enough to create order and meaning, but not so much that you become enslaved by process.”
— Jessamyn West

Strange things start happening when you limit your search to particular branch locations in our resource discovery layer. An inquiry for “beer” within our Special Collections yields 62 records. The facets on the results page show how you can further refine that list to 18 items which are also in the general stacks. To be clear, we have a total of 651 matches for “beer” in the general stacks; however, the initial restriction prevents most of those titles from appearing.
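To see why the numbers come out this way, here is a toy sketch. The record IDs are invented, and a real discovery layer applies these restrictions as search-engine filter queries rather than Python sets; the point is simply that the location scope is applied before faceting, so facet counts are computed only within the already-restricted result set.

```python
# Toy illustration with invented record IDs; a real discovery layer would apply
# the same logic through filter queries rather than set arithmetic.
special_collections_hits = set(range(1, 63))    # 62 "beer" matches scoped to Special Collections
general_stacks_hits = set(range(45, 696))       # 651 "beer" matches in the general stacks

# Faceting happens inside the scoped set, so the "general stacks" facet
# can only count records that are already in Special Collections:
facet_count = len(special_collections_hits & general_stacks_hits)
print(facet_count)                              # 18 items held in both locations

# The remaining general-stacks matches never appear, because the initial
# location restriction excluded them before the facet was ever applied:
print(len(general_stacks_hits - special_collections_hits))  # 633 hidden titles
```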

In times of geopolitical and economic instability how can innovative technologies drive new revenue opportunities for institutions and research funding in the UK?

By Jean Roberts

Abstract

This article examines how the emergence of innovative technology platforms, recently introduced by new players in the university services space and the public arena, has the potential to open up additional revenue generation opportunities for the university research funding toolkit. How aware are universities of these new technology platforms and their revenue potential? Given the anticipated upheaval in EU funding (including the potential removal or reduction of funding sources) and the lack of clarity in the lead-up to Brexit, which looks set to create a prolonged period of instability and mixed messaging in funding circles, the time is ripe for university management, financial stewards and library managers to embrace new technology platforms as part of their strategic financial planning, taking advantage of new and emerging revenue models alongside existing operations.


Read the full article at https://insights.uksg.org


Jean Roberts is Business Development Director for Glasstree Academic Publishing


Academic libraries in a mixed open access & paywall world — Can we substitute open access for paywalled articles?

In 2014, I wrote “How academic libraries may change when Open Access becomes the norm”, which attempted to forecast how academic libraries will change when “50%-80% or more of the annual output of new papers will be open access in some form”.

I’ve come to realize that a more interesting and critical question for libraries is what to do during “the transition period”, when open access becomes a significant but not yet majority share of articles, say in the 25%-40% range. In other words, open access at that stage is not dominant enough to cause large-scale disruption, but it is too big to ignore.

Jumping S-Curves in Mid-Stream

“Nothing will ever be attempted if all possible objections must first be overcome.”
— Samuel Johnson

“We heard from librarians that the best time to prepare for and launch a new offering would be at the start of a new semester (rather than introducing a mid-semester change) and that advance prep time would be helpful in getting the library prepared to support a new offering.” So states a promotional flyer about an impending update to the LexisNexis Academic database. The vendor’s plans include a period of parallel availability, followed by deactivation of the legacy site on December 31, 2017.

Believe it or not, in the library world, this is an aggressive schedule. The FirstSearch flavor of WorldCat was originally supposed to disappear at the end of 2015. It is still around. A revamped LibGuides platform was made available in 2014. The older one will be retired in early 2018. The latest iteration of RefWorks has been around since January 2016, although we’re still waiting on a way to force users to migrate from the earlier version (ProQuest told me last June, “Our Development Team is working on this and it should be available soon.”). Lastly, at this rate, I expect the new Primo interface, which was released last August, to exist alongside its predecessor until at least the next congressional election.

Running old and new sites in tandem is often a “worst of both worlds” situation. It practically doubles your support and development workload, while the distribution of users trying out your public beta is in many ways the opposite of what it should be. Certain people will only transition when they absolutely have to, so the only way to get everyone acclimated to current technology is by removing the choice to use outdated products.

Maintaining multiple systems, especially in the days of the perpetual beta and agile release cycles, is not something I suspect a service provider would normally choose to do. The examples above, however, show how much catering is done to a client base that is resistant and averse to change. Within an established profession that values tradition and prizes order, a fear of failure unduly paralyzes us from efforts to improve ourselves. This can lead to the biggest catastrophe of all: becoming obsolete.

The university library I work at, like many others, has developed a tradition of eschewing mid-semester upgrades. The perceived need to keep everything the same when classes are in session has grown to almost mythical proportions. In an environment of shared governance, there is a partial chilling effect from certain cranky faculty (not to be ageist, but let’s face it, when was the last time you heard a Freshman complain about something new?) that makes us a little too gun-shy about ever making any modifications whatsoever.

The original date for our previous catalog to go offline was pushed back because a tenured individual pointed out to library administration that even though the planned time was after the semester was over, grades had not yet been handed in. It was therefore dangerous to expect instructors to learn how the new discovery layer (which had been live for 23 months, mind you) functioned while they were conducting searches for the purpose of grading papers. And these are the same people we expect to defer to our expertise when it comes to which journal subscriptions to cancel?

Another common position is basically that, “We’ve just held classroom sessions on how to use the current interface, so you can’t change it, much less decommission it, until next semester.” — even if doing so would improve the experience for the other 99% of our patrons. Or put another way, “But we need time to revise all of our handouts!” Such approaches could effectively delay upgrades forever. We need to do a better job at keeping up with the times, offering the overall best available service, and following a sustainable instruction model. Progress is impossible without change.

Furthermore, as any academic employee can attest, when we put off so many projects to only be done during the summer months, some of that work invariably never gets done. Protracted implementations are our own self-imposed version of development hell.

Entire bookshelves have been written about the business of handling change. Many management fads, which may or may not be rigorously scientific, also attempt to explain our stunted capacity to think differently. Some deal with human needs (as in Maslow’s hierarchy), a common pathology of the propensity to overestimate the dangers of change, or even the evolutionary biology of why we often view change as a threat.

Of course these are all abstractions of how beings with billions of brain cells choose to behave. Yet considering a standard change management model and charting the typical human responses to change (a la the Kübler-Ross stages of grief), as well as some related psychological concepts, can provide insight on what to expect during times of organizational upheaval.

Similar to the representations of technology adoption and the hype cycle is one visual I find particularly meaningful in my work. It pinpoints why fear of change is often a shortsighted fallacy.

The Sigmoid Curve by Charles Handy

If we view the first curve as a technology beginning to die out, and the second curve as a newer and more promising alternative, their first point of intersection shows continued improvements to the old system alongside the short-term start-up costs of investing in new methodologies. The best time to change is therefore when there is no immediate benefit to doing so, whereas sticking with successful practices eventually ensures their failure.
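For anyone who wants to reproduce the picture, here is a minimal sketch of two overlapping life-cycle curves. The functional form and every parameter are arbitrary choices for illustration, not taken from Handy’s actual figure.

```python
# Stylized sketch of two overlapping life-cycle ("S") curves; every parameter
# here is an arbitrary illustration, not taken from Handy's figure.
import numpy as np
import matplotlib.pyplot as plt

def life_cycle(t, start, ceiling):
    """A logistic rise that later tapers into decline, standing in for one technology."""
    rise = 1.0 / (1.0 + np.exp(-1.5 * (t - start - 2.0)))
    fall = 1.0 / (1.0 + np.exp(1.0 * (t - start - 8.0)))
    return ceiling * rise * fall

t = np.linspace(0, 14, 400)
old_curve = life_cycle(t, start=0.0, ceiling=1.0)   # established system beginning to die out
new_curve = life_cycle(t, start=4.0, ceiling=1.4)   # newer alternative with a higher ceiling

plt.plot(t, old_curve, label="Established system")
plt.plot(t, new_curve, label="Emerging alternative")
plt.xlabel("Time")
plt.ylabel("Value delivered")
plt.title("Jump while the first curve is still rising, not after it declines")
plt.legend()
plt.show()
```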

If you’ve read almost any of my previous posts, it should be no surprise to hear me say Point B is where many library service models and mentalities currently reside. Several of us have apparently created a variation of the can-do adage, “it’s better to seek forgiveness than to ask permission,” and instead act upon the seemingly safer yet downright delusional and cowardly course of, “it’s better to change nothing than potentially upset someone.”

Playing in the future is risky, but it sure as hell beats watching a profession shrink into irrelevance. As much as we don’t need to overstate the certainty of expected benefits to modern systems (e.g., “I promise this is going to be the best upgrade ever!”), it’s equally important to realize that indecision and intransigence can be quite costly, even if it’s not Summer Break.

There are times when making no choice is the worst choice of all. When the only constant is change, we shouldn’t be too enamored of present conditions. In the words of Ieuan Maddock, “To cherish traditions, old buildings, ancient cultures and graceful lifestyles is a worthy aspiration. In the world of technology it is a prescription for suicide.”

Check out my other posts for related commentary.

Are Library Vendors a Necessary Evil?

Summary: While not necessarily the root of all evil, the goals of for-profit companies acting in their own self-interest do not always align with the purpose of libraries, and in some cases even disrupt libraries’ ability to fulfill their mission. A broader use of available technology as a delivery aid rather than a tool for restricting use, coupled with increased collaboration amongst librarians, could make paying other organizations to do our work largely unnecessary.

“You can’t live with them … pass the beer nuts.”
—Norm Peterson (Cheers)

Planned obsolescence keeps certain companies in business. This is nothing new. In the 1920s, a cartel of light bulb manufacturers famously reduced competition by essentially stopping the development and sales of longer-lasting bulbs. Products are still designed with a deliberately finite life span in order to maximize revenue from replacement purchases.

Thanks to the Internet of Things, technological obsolescence is now quite easy to enforce. Last year, for example, Google pushed out a software patch to a product of a former competitor which they had bought out. When those functioning devices received the update, it intentionally rendered them permanently inoperative. That’s obviously one way to generate new customer demand.

Companies also employ many psychological tricks to convince consumers they need the latest and greatest features and fashion. One notable marketing device is the concept of contrived scarcity. The diamond industry is perhaps the best case of how people have been persuaded to spend significant sums of money on, in this situation, rocks which are neither rare nor precious.

Should we pay money for desired goods and services if their provider’s motives aren’t entirely altruistic? Let’s face it: the reason your credit card rate is 34.99% isn’t because the bank thinks that 35% would be too much. Even in fields that are supposed to contribute to the public good, it’s clear that companies don’t always have the welfare of the consumer as their top priority.

Competition and greed in the marketplace can produce stark inequities, but they also help drive innovation, lower costs, and enable class mobility. Many firms likewise do good work providing a social safety net. Yet corporate philanthropy can have its roots in strategic self-interest. In 1999, Philip Morris gave $60 million to charities, and spent $100 million on an advertising campaign touting those donations.

The raison d’être of a business is to maximize earnings and shareholder value. When Facebook made a censored version of itself for China, or retailers became champions of gay rights, or circuses and aquariums started phasing out the use of captive animals, or when Simon & Schuster finally made the call not to publish a book by Milo Yiannopoulos, those decisions were all primarily driven by the impact on the bottom line. While I don’t work for free either, there are professional standards about the freedom to read, intellectual freedom, and patron privacy which I don’t need to sacrifice because they might interfere with our profit model.

We should never be more focused on preserving our own job security than achieving the benefits of embracing progress, even if it would render our current role obsolete. Many industries, on the other hand, have a vested interest in maintaining the status quo, to the point of attacking useful discoveries and innovations that potentially undermine their revenue stream. It’s important to remember those capitalistic concepts when navigating the library information and product marketplace.

My first job as a librarian was at a chemistry library. We subscribed to an online version of Chemical Abstracts, yet the client was unavailable during business hours. As an academic customer, we didn’t pay for access at times of peak demand. SciFinder also only allowed for a maximum number of simultaneous users.

Some databases still do this, most notably electronic book packages, where if one library patron is viewing a title, all others are locked out. Library audiobooks contain similar controls, and publishers have employed DIVX-style restrictions to limit the number of times an e-book can be “circulated” before access is removed. This is another way to artificially inflate demand, since there’s no good reason for electronic manifestations of information to share the same constraints on distribution as their physical counterparts, aside from their creators’ desire to monetize usage.

These sorts of artifices should rub us the wrong way. I’m old enough that when I became a librarian, the biggest impediment to sharing information was technological limitations. We didn’t have the means or the labor or the bandwidth to create a free and digital library. As Google Books and more importantly Sci-Hub have aptly albeit partially demonstrated, that is no longer the case.

The main barriers libraries currently face in spreading knowledge are the digital restrictions and safeguards put in place by publishers, which by design suppress the ability to disseminate works more readily and are therefore antithetical to our mission. I don’t say this as a letter of hate to those content “owners,” but as a member of a profession that would not and could not exist if copyright were absolute.

The problem, at least for those hoping to preserve the traditional publishing economy, is that if the right of first sale applied to electronic formats, a single library could purchase a title and then instantly lend it online to not only every one of their members, but also, through an interlibrary loan network, virtually every other Internet user in the world. Although most electronic licenses prevent this sort of thing from happening, those kind of thought experiments illustrate how we’re not taking full advantage of the Web’s potential capabilities. Imagine also if a multitude of online libraries displayed the type of “fair use snippets” that Google Books shows, or if thousands of people could each upload a ten-second clip of a feature film on YouTube. It would then become a trivial matter to compile and view everything for free.

Would this put people out of business? When livelihoods are threatened, you start hearing shortsighted and hollow claims such as, “but we can’t possibly offer unlimited online library access and still operate,” or “but if we let our songs be played over the free radio we can’t sell records,” or “but we’re unable to offer cheap pharmaceuticals in developing nations because we can’t afford it,” or even “I say to you that the VCR is to the American film producer and the American public as the Boston strangler is to the woman home alone.” Evidence on the correlation between piracy (much less public access) and sales is at best flimsy, while some claims are patently untrue. It could very well be the case that free digital libraries would exist alongside and in fact promote the retail side of our information trade.

Instead, we face rising costs. Predatory pricing is the expected consequence of library collection managers agreeing to pay more than what a publication is worth (or rather, what it should cost, since I would argue free information is the most prized commodity of all). Many scholarly authors have also demonstrated their vanity, and a tendency to act as if they are more interested in prestige than exposure, by remaining unwilling to support open access models. Until we stand our ground against an increasingly and maybe now inherently corrupt system, the crisis will continue.

“Free” here is admittedly a misnomer. Ultimately, web servers cost money to operate. Nonetheless, consider how editorial boards can today function without those profiteering and rent-seeking middlemen. For almost twenty years now, we’ve known that “it is technologically possible and economically feasible to build a system of dissemination for academic resources that is completely administrated by the scholarly world without the intervention of economic interests.” Sadly, the notion that publishers add value, a holdover from the days when typesetting was needed to produce accessible scholarship, retains tremendous momentum.

Lest you think I’m being hypocritical in asking others to practically work for free, or to agree to or even aim at putting themselves out of a job, the comparable argument that “we don’t need libraries anymore now that everything’s online” is something I can only hope someday becomes true as well. I would love to live in a world where libraries are no longer needed to provide unmediated access to information. If fulfilled, the promise of open access would make the library, as a purchaser of information for our constituents, no longer necessary. It’s dangerous to start falling for the alluring yet fallacious reasoning behind such claims as “we’ll always need libraries.”

Much of what libraries used to do is already provided by commercial interests. Search engines are a path to resource discovery, readers’ advisory is offered through any number of recommendation agents, and a wealth of content is now readily available outside the library as well. Oddly enough, this renders our role of educator and equalizer all the more crucial. Given the current political climate and continued rise of filter bubbles, there’s definitely still a need for librarians to teach responsible content creation, as well as how to excavate quality sources, constantly and critically evaluate an array of publications, combat biases, and promote a scientific view of the world.

With the proliferation of democratized knowledge formats comes the side effect of information and misinformation overload. This is analogous to the tragic fact that more people now suffer from obesity than hunger. In both cases, everyone needs to be a little more mindful of their intake, and taught the benefits of seeking and consuming quality materials. I’m not sure why we continue to be surprised and appalled at students’ unwillingness to retrieve a print item from the stacks, let alone wait for an interlibrary loan, rather than moving on to the next full-text result, which may not even come from a library. Like it or not, freemium-type information providers offer a convenience which has proven more appealing than tried and true research methods.

Aside from content publishers, there are other library service providers and similar organizations we outsource our work to. In many circumstances, just as we don’t run our own electrical generators or build our own furniture, this is a cost-saving move. Consumerism has its place, although I’m uncertain it can ever truly be a win-win situation when there’s a corporation taking a cut of the profits. Moreover, many vendors receiving our money seem to be exploiting the precedent and mindset that we can’t just do it ourselves.

Any fitness regimen has to operate between the extremes of atrophy and overtraining. Similarly, expenditures should be made if and only if whatever’s being paid for cannot be accomplished for less money by taking the time and commitment to do the job in-house or with colleagues. Examples of this range from a state looking to cover employees’ health benefits directly instead of subsidizing a for-profit health insurance company, to Amazon reducing expenses by delivering its own packages, to someone saving a few bucks by learning how to install wiper blades themselves rather than paying an auto shop for thirty seconds of labor.

Take Springshare, for example. I don’t mean to pick on them, since LibGuides is actually one of the better products out there, in terms of cost and features and support. Amongst academic libraries, their client base is downright ubiquitous. But why are so many libraries paying for this service when SubjectsPlus is free?

A lack of local infrastructure or expertise could be one reason. However, that’s easily remedied by libraries pooling their interests to work more together on shared systems, rather than succumbing to “not invented here” syndrome and each going off to independently program homegrown solutions. Why don’t we collaborate more? This profession created WorldCat, after all. There’s no need to reinvent the wheel when we can instead build upon the existing work of our peers. Imagine if a fraction of the money every library spent on RefWorks subscriptions was instead used to ensure Zotero was a better product in every regard.

There’s a charming proverb about how new ideas go through three stages of existence: first they are denied and ridiculed; then they’re violently opposed; and lastly, they become accepted as trivial truths. The slow acceptance of Wikipedia is my favorite example of this phenomenon. In this and many other cases, ignoring popular trends caused us to miss the boat on opportunities to provide value.

Innovation from corporate interests is rewarded by receiving revenue and market share from change-averse libraries too afraid to take risks. The success of Google is largely due, in addition to the pioneering work of Eugene Garfield and his colleagues, to the fact that librarians were off cataloging websites rather than building a search engine. While we were ensuring all patron records were purged, likewise, LibraryThing was born. And it continues to take in money from people willing to populate a commercial database with their reading habits and book ownership. Companies thrive in the library marketplace by preying on our complacency and intransigence.

Some vendors even receive free labor from us. In my experience with library management systems, a good deal of the development work I see being done is completed by librarian customers who are in theory paying for products which should afford optimal functionality by default. Nevertheless, we happily code bug fixes, design feature enhancements, and implement numerous other usability improvements. We also submit a steady stream of reported coverage errors regarding our subscriptions, and the workload for this process feels on par with what it was when we maintained our own database of availability ourselves.

The development roadmap and timeline for our library service platform has been at times uneven and lately downright nonsensical. An interface update is now over a year behind schedule. Over a half-dozen new features that I was excited about, supposedly based on rigorous product testing, have been clumsily reverted back to match legacy systems. This was presumably done because some customers complained about the changes, although that’s mainly conjecture, since it’s not a transparent process. Unfortunately, this is a limitation of closed source which comes from dealing with corporations that can be prone to mergers, bankruptcies, and disputes with competitors.

When a public institution outsources the production and provision of goods or services to an external enterprise — whether it be the federal government buying missiles from a commercial defense contractor or a library paying for online access from scholarly societies and software licenses from for-profit vendors — it’s easy to end up with prices marked up like in a hotel mini-bar, as evidenced by a $435 hammer or a $507,000 subscription to a few dozen chemistry journals. We can move in a better direction, through policy and practice, with advocacy and support for open access publications, open textbooks, open education resources, institutional repositories, and free or open source software.

In just a few short years, our profession has gone from the conceit of believing commercial competitors were unworthy of acknowledgement to a fatalism that we must purchase exorbitantly-priced products because that’s the way the system works. This is not the business we’ve chosen. In an era of increased privatization, if we’re to maintain any sort of relevance in the future, our over-reliance on organizations structured to make money needs to change. In the words of Ursula Le Guin, “We live in capitalism. Its power seems inescapable. But then, so did the divine right of kings. Any human power can be resisted and changed by human beings.”

Further Reading

Check out my other posts for related commentary.
