Supreme Court Declines to Hear Google Books Case

Yesterday, the U.S. Supreme Court denied the petition for a writ of certiorari submitted by the Authors Guild in its case against Google Books. In its petition, the Authors Guild had argued that the Second Circuit had “upended the meaning of the phrase ‘transformative use’ employed by” the Supreme Court in an earlier case and had “effectively nullified the four statutory fair-use factors set forth by Congress, including any real analysis under the fourth factor of the market harm to rightsholders caused by Google Books and by its many likely imitators” (pages 3-4 of the petition). Now that the Supreme Court has declined to hear the case, the Second Circuit’s decision, which held that Google’s actions meet the requirements of the fair use doctrine, stands.

You can read more about the case and the statements from the parties in the wake of the Supreme Court’s action in the New York Times.

Sweden’s Recent Copyright Ruling Could Impact the Sharing of Images Online

Sweden’s highest court ruled yesterday that tourists may take pictures of art in public spaces but that posting these images to Wikimedia Sverige (translated as Wikimedia Sweden) violates the copyright of the artists. You can read the Court’s full statement and the decision on the Court’s website (in Swedish). According to The Guardian:

The Visual Copyright Society in Sweden (BUS), which represents painters, photographers, illustrators and designers among others, had sued Wikimedia Sweden for making photographs of their artwork displayed in public places available in its database, without their consent.
The Supreme Court found in favour of BUS, arguing that while individuals were permitted to photograph artwork on display in public spaces, it was “an entirely different matter” to make the photographs available in a database for free and unlimited use.
“Such a database can be assumed to have a commercial value that is not insignificant. The court finds that the artists are entitled to that value,” it wrote in a statement.

The damages owed by Wikimedia Sweden have not yet been decided. You can read the Wikimedia Foundation’s full response to the ruling on its blog.

eReserves and Fair Use Again

You may remember that the Georgia State e-reserves case resulted in a District Court ruling largely favorable to libraries, but one that applied a bright-line 10% standard for fair use. The publishers appealed, and the appeals court sent the case back to the District Court for a new fair use analysis.

The new ruling, issued yesterday, is still quite favorable to libraries and does away with the 10% rule. It does, however, apply a financial analysis to each posted excerpt based on data that was available to the court after the fact, but not to the library or professor beforehand.

Both Kevin Smith and Brandon Butler have written helpful analyses.

Some bottom-line advice from Smith:

All we can do, then, is to continue to think carefully about each instance of fair use, and make responsible decisions. We still have some rules of thumb, and also some places where we will need to think in a more granular way. But nothing in these rulings need fundamentally upset good, responsible library practice.

The second takeaway from this decision is that we should resort to paying for licenses only very rarely, and when there is no other alternative. The simple fact is that the nature of the analysis that the Court of Appeals pushed Judge Evans into is such that licensing income for the publishers narrows the scope for fair use by libraries. To my mind, this means that whenever we are faced with an e-reserves request that may not fall easily into fair use, we should look at ways to improve the fair use situation before we decide to license the excerpt. Can we link to an already licensed version? Can we shorten the excerpt? Buying a separate license should be a last resort. Doing extensive business with the Copyright Clearance Center, including purchase of their blanket campus license, is not, in my opinion, a way to buy reassurance and security; instead, it increases the risk that our space for fair use will shrink over time.

And, from Butler:

Her new analysis, stripped of its bright lines and clear arithmetic, seems to amount to nothing more than her opinion about whether the use will substantially harm the market value (actual and potential) for the works used. How much harm is “substantial”? Well, in several places Judge Evans says the harm must be so extensive as to risk undermining the publishers’ entire motivation to publish the work. So, it would need to eat their entire profit margin (or enough that they decide it’s not worth the bother to publish). And in at least one place she seems to suggest that, because there is no marginal cost for a publisher to offer to license use of excerpts from a work, there is no real harm when GSU decides not to pay the license. This is heady stuff, and could offer a very wide berth for educational fair use of electronic excerpts.

Voting with Our Collection Dollars

Our colleague, Ellen Finnie, of the MIT Libraries, has written an inspiring blog post about values-based collection spending. She admits that MIT is in a fortunate position to be able to explore this. The whole post is worth reading, but here’s a taste:

In making a more holistic and values-based assessment, we will be using a new lens: assessing potential purchases in relation to whether they transform the scholarly communication system towards openness, or make a positive impact on the scholarly communication environment in some way, whether via licensing, access, pricing, or another dimension. Of course, like shoppers in the supermarket, we’ll need to view our purchase options with more than just one lens. We have finite resources, and we must meet our community’s current and rapidly evolving needs while supporting other community values, such as diversity and inclusion (which I will write about in a future post). So the lens of transforming the scholarly communications system is only one of many we will look through when we decide what to buy, and from what sources. How we will integrate the views from multiple lenses to make our collections decisions is something we will be exploring in the coming months – and years.

Who Pays for Open Access?

Library Journal has published a brief article about the Article Processing Charge (APC) model of open access publishing. This is by no means the only business model for open access publications, but it does account for approximately half of open access articles.

The article indicates that grant funding is increasingly being made available to pay these charges:

Claus Roll, Publishing Editor at EDP Sciences, also believes that available funding for Open Access is increasing, albeit slowly. This is a reflection of changing public policy. “Public and private funders like the NIH or the Wellcome Trust have a say in how their money is used,” he said. “They make Open Access publishing a requirement because they want to give the public insight into their funded research that may have a societal impact.”

Roll noted that while the OA model places a cost requirement on the author and his or her employer (typically absorbed by STEM grant providers), it also provides a tangible financial benefit. Researchers building on the work of others—a fact of life in the scientific community—are less encumbered by costs when accessing others’ OA articles. The “pay it forward” notion is particularly attractive.

Combating the Double-Dip

The Scholarly Kitchen blog has an interesting post today about the new “total cost of access” deals that some universities and libraries are striking with publishers. The post takes issue with the lack of transparency and a perceived me-first attitude, but the deals do begin to chip away at a practice that has, until now, benefited only the publishers’ bottom lines.

These deals may represent a shift from global offsetting to local offsetting. Avoiding “double dipping” has been a requirement for publishers with the rise of OA. When an author pays for an article to be made OA, subscription prices are expected to be reduced a proportionate amount, as subscribers should not be made to pay for free content. The added revenue from the author is “offset” by globally reducing the revenue a small amount from all subscribers. But local offsetting deals seek to keep the savings at the institution paying the OA fee. Institutions argue that their total cost should remain flat, so the added APC revenue from the author’s institution should be offset by a reduction in that institution’s subscription price. Offsetting is thus “local”, rather than spreading the savings around to all subscribers.
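The difference between global and local offsetting is easiest to see with numbers. Here is a minimal sketch in Python; the institutions, prices, and APC amounts are entirely hypothetical, invented only to illustrate the arithmetic, not drawn from any actual publisher deal:

```python
# Hypothetical illustration of global vs. local offsetting of APC revenue.
# All figures are invented for the example.

def global_offset(subscriptions, apc_revenue):
    """Spread the APC revenue as an equal price cut across all subscribers."""
    cut = apc_revenue / len(subscriptions)
    return {inst: price - cut for inst, price in subscriptions.items()}

def local_offset(subscriptions, apc_revenue, paying_institution):
    """Apply the entire APC revenue as a cut for the paying institution only."""
    new_prices = dict(subscriptions)  # copy; leave the input unchanged
    new_prices[paying_institution] -= apc_revenue
    return new_prices

subs = {"Univ A": 10_000.0, "Univ B": 10_000.0, "Univ C": 10_000.0}
apcs_paid_by_a = 1_500.0  # APCs paid by authors at Univ A

# Global: every subscriber's price falls by 500 (1,500 / 3 institutions).
print(global_offset(subs, apcs_paid_by_a))

# Local: only Univ A's price falls, by the full 1,500; B and C pay as before.
print(local_offset(subs, apcs_paid_by_a, "Univ A"))
```

Either way the publisher's total revenue is reduced by the same 1,500; the deals differ only in who captures the saving.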

Better Ways to Evaluate Research

This recent SPARC Europe briefing paper tackles the problems with current methods of evaluating research (including the impact factor and h-index) and proposes some future directions:

The most striking aspect of the recent series of Royal Society meetings on the Future of Scholarly Scientific Communication was that almost every discussion returned to the same core issue: how researchers are evaluated for the purposes of recruitment, promotion, tenure and grants. Every problem that was discussed – the disproportionate influence of brand-name journals, failure to move to more efficient models of peer-review, sensationalism of reporting, lack of replicability, under-population of data repositories, prevalence of fraud – was traced back to the issue of how we assess works and their authors.

It is no exaggeration to say that improving assessment is literally the most important challenge facing academia. Everything else follows from it. As shown later in this paper, it is possible to improve on the present state of the art.

Megajournals and Open Access

An article in yesterday’s Chronicle of Higher Education provides some analysis of the megajournal PLOS ONE, and along the way discusses the gathering momentum of the OA movement and such related issues as impact factors and predatory publishers.

As an Open-Access Megajournal Cedes Some Ground, a Movement Gathers Steam

In short, PLOS ONE — now consistently publishing around 30,000 articles a year — has attracted much more company in its mission to build huge stocks of freely available scientific research. “Since PLOS ONE’s tremendous success, everyone and their grandmother has created a megajournal,” said David J. Solomon, an emeritus professor of medicine at Michigan State University who studies open-access economics.

After years of traditional journals battling the open-access movement, said another analyst, Jevin D. West, an assistant professor of information studies at the University of Washington, “look at all the major publishers — they’re all playing now.”

More Publishers Require ORCID

From The Scholarly Kitchen:

On December 7, 2015, The Royal Society announced that, from January 1, 2016, it would require all corresponding authors submitting papers to its journals to provide an Open Researcher and Contributor identifier (ORCID iD). In an open letter published today, seven other publishers – the American Geophysical Union (AGU), eLife, EMBO, Hindawi, the Institute of Electrical and Electronics Engineers (IEEE), PLOS, and Science – joined them, committing to requiring ORCID iDs in their publication process during 2016.

Find out why…

Blog Post on Web of Science, Scopus, and Open Access

Ryan Regier has written an interesting blog post entitled “Web of Science, Scopus, and Open Access: What they are doing right and what they are doing wrong”. In it he discusses Web of Science’s Open Access indicator for locating articles from gold open access journals. He points out that while this indicator is in theory a boon for finding OA articles, Web of Science indexes only a very small proportion of OA articles, which is a serious weakness. Regier has greater praise for the substantially larger OA coverage of the Scopus database and looks forward to Scopus’s article-based OA indicator, which is expected to launch in 2016.