Is Everything All Right At Night? Measuring User Response to Overnight Library Services
A multiple-methods study was conducted in FY15 at Santa Clara University Library to assess 24/5 hours, focusing specifically on impact and value. The purpose was to assess not only the overnight use of the library, but the perceptions of late-night users on the value of 24-hour library accessibility. This three-component study included a survey (conducted over three quarters in FY15, with 616 respondents), hourly patron headcounts, and more detailed headcounts by hour of day and user activity. A fourth component is also now underway (results available by conference time); hourly patron counts are currently being conducted that focus on user space and seating as a way to determine what types of furniture and environments users prefer. These data will be combined with the activity data to develop more detailed “what are they doing and where are they doing it” assessments to inform library issues ranging from the mundane questions concerning 24/5 staffing, services, and hours to more interesting questions that multiple data sets can answer, such as seating preferences (both library location and type of seating) by time of day (here, after midnight) and activity.
Research questions for this study included: How many students make use of overnight hours? What library services do students need/use after midnight? What is the demographic makeup of students utilizing 24/5 library hours, including department and grade point average? Is this representative of the campus as a whole? What types of spaces and seating are preferred in overnight hours? Do students associate 24/5 library access with academic success? Do their self-reported grade points reflect their opinion? How do the perceived value to students and the campus for overnight hours mesh with assessments that point to times of low use and underutilized services? More specifically, how do we weigh the political implications of a data-driven decision? Are data-driven decisions always right?
Preliminary results show that (between midnight and 7 a.m.) students overwhelmingly bring their own laptops to work in the library, and therefore the primary library service they require is wi-fi. The next most popular activity type after midnight is use of a library-owned computer, followed by group study and studying alone. The survey results confirm that traditional library services (e.g., reference, circulation, use of print materials, photocopying) are not driving late-night library use. The enticements are wi-fi, comfortable furniture, different spaces and seating geared to different needs, and simply a safe, “clean, well-lighted” environment. And while users attribute clear value to overnight library hours, few remain in the library between 3 and 7 a.m.
The variety of data sets used in these analyses results in a wide range of ways to view and analyze the data. However, multiple data sets can lead to multiple viewpoints. Are data-driven decisions always right? The practical implications of conflicting data will be addressed.
How weeding adds value to library collections: Weighing the cost of weeding and the cost of keeping books
Weeding in libraries is often like the gardening chore it is named for: sweaty hours spent among dirty tangles to clear out messy undergrowth and remove unwanted materials. But the analogy stops there: the intellectual pursuit of a well-managed collection includes much more than identifying and removing materials from the shelf. In fact, the daunting, many-faceted weeding process can keep librarians from tackling this very crucial task. A collection left unassessed, left to grow ungainly, is also a missed opportunity to add value, and real cost savings, to the collection through weeding. Santa Clara University Library undertook a reference weeding project in 2013/14; library staff reviewed and relocated over 7,800 titles. Goals were to make the reference collection more relevant to current research needs and to redesign the library's first floor to create more high-demand user space. The project involved multiple library units with multiple workflows, with staff including librarians, paraprofessionals, and student workers. This poster presents detailed data on the cost of weeding a book in a mid-sized academic library, based on staff-time estimates from this project and national wage averages. These data, when compared to the “Cost of Keeping a Book” (by Courant and Nielsen, 2010), demonstrate the value associated with weeding and how, by acknowledging the cost associated with keeping a book, libraries can make evidence-based decisions that may incentivize the weeding process and perhaps even lead to a more cost-effective migration to building ebook collections.
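The comparison at the heart of the poster, a one-time weeding cost weighed against the recurring cost of keeping a book on the shelf, can be sketched as a break-even calculation. The dollar figures below are hypothetical placeholders, not the poster's or Courant and Nielsen's data:

```python
import math

# Break-even: after how many years does keeping a book on the shelf cost
# more than the one-time staff cost of weeding it now?
# All dollar figures used in the example call are hypothetical.
def breakeven_years(weeding_cost, annual_keeping_cost):
    """Whole years of storage at which cumulative keeping cost
    first meets or exceeds the one-time cost of weeding."""
    return math.ceil(weeding_cost / annual_keeping_cost)

# If weeding a book costs $10 in staff time and open-stack storage
# costs $4 per year, keeping it becomes the more expensive choice
# within three years.
years = breakeven_years(weeding_cost=10.0, annual_keeping_cost=4.0)
```

This framing is one simple way to make the "cost of keeping" visible in a weeding decision; the actual poster data are, of course, more detailed.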
Print Reference Collections Never Die, They Just Fade Away: Or Do They?
Find out how one University Library undertook a reference-weeding project to pare three reference collections down from 12,000 titles. The poster will discuss the motivations for reviewing the reference collection, the data gathered to support decision making, the challenges encountered in the project, and estimates for the costs associated with such a weeding project.
Chemistry journal use and cost: Results of a longitudinal study
Journal-use studies were conducted in the University of Illinois at Urbana-Champaign Chemistry Library in 1988, 1993, and most recently in 1996. Between 1988 and 1996, the cost of purchasing the journal collection rose 66.9% while use of the collection rose 34.2%. These increases occurred during the cancellation of over 180 chemistry journals between 1988 and 1996. The data point to a collection with obvious top journals that generate most of the use. While the data confirm the 80/20 rule (84% of use was generated by the top 100 journals in 1996, approximately 20% of the journal collection), journal use is even more focused toward the top: approximately 40% of all use in 1996 was generated by the top 10 titles. Use of the top 10 journals rose 60% between 1988 and 1996, with nearly identical titles occupying the top 10 positions over 8 years. Longitudinal trends in journal use and cost are explored, recommendations are made for successful journal-use study methodologies, and time-series, data-centered collection development is addressed.
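The concentration figures above (84% of use from the top 100 titles, 40% from the top 10) reflect a simple ranked-share calculation. A minimal sketch of that calculation, using hypothetical use counts since the study's raw data are not reproduced here:

```python
# Share of total use generated by the top_n most-used journals.
# The use counts in the example are hypothetical; only the method
# mirrors the kind of 80/20 analysis described in the study.
def use_concentration(use_counts, top_n):
    """Fraction of total recorded use contributed by the top_n titles."""
    ranked = sorted(use_counts, reverse=True)
    return sum(ranked[:top_n]) / sum(ranked)

# A skewed distribution of annual uses across ten journal titles:
uses = [500, 400, 300, 120, 80, 40, 20, 20, 10, 10]
share_top_3 = use_concentration(uses, 3)  # top 3 of 10 titles -> 0.8
```

Run against real circulation or reshelving counts, the same function yields the "top 100 of ~500 titles" and "top 10" shares reported above.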
Going Beyond DDA’s “They Clicked It → We Bought It → Done”: Assessing Ebook Use Pre- and Post-Purchase
Ebook DDA (demand-driven acquisitions) programs have become common in academic libraries of all sizes. To establish a DDA program, a library creates potential subject collections by establishing profiles and downloading e-records that match them into their online catalog; when books are “used” (in an amazingly wide array of options), the library buys the book, often after a certain threshold is met. This is a fairly seamless process that users are often unaware is happening. DDA assessment by libraries, however, is often limited to the obvious demarcation between those ebooks that are purchased (after meeting library thresholds), and those that remain as viewable records in the OPAC, but are either never purchased or don’t move beyond the library’s preset limit of free views. Ironically, the data lurking behind DDA programs are extensive, albeit often confusing and difficult to navigate. And while libraries are frequently content to know an ebook has been selected and purchased through a vetted “use” process, there is much more to learn from DDA.
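The trigger-and-threshold workflow described above can be modeled in a few lines. This is a deliberately simplified sketch: the event weights and threshold below are hypothetical, and real vendor trigger rules vary widely (which is precisely the assessment problem the poster examines):

```python
# Simplified model of a DDA purchase trigger. Event weights and the
# purchase threshold are hypothetical illustrations, not any vendor's
# actual rules.
TRIGGER_WEIGHTS = {"page_view": 1, "chapter_download": 5, "full_download": 10}
PURCHASE_THRESHOLD = 10  # cumulative weighted "use" before the library buys

def should_purchase(events):
    """Return True once accumulated weighted use meets the threshold."""
    score = sum(TRIGGER_WEIGHTS.get(e, 0) for e in events)
    return score >= PURCHASE_THRESHOLD

# Under these made-up weights, ten page views (or one full download)
# would trip the purchase; a single chapter download would not.
```

The point of the sketch is that everything downstream of "they clicked it" depends on how "use" is weighted, which differs between platforms such as EBL and COUNTER-compliant publishers.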
This poster will present a clear guide to ebook use data, comparing a third-party vendor (EBL) and a publisher (Springer). EBL (employing in-house data) and Springer (employing COUNTER) both provide ebook use data, but they are surprisingly different. The research questions for this study are:
• What are the options for pre- and post-purchase DDA ebook assessment?
• How do data from COUNTER differ from those offered by EBL’s in-house data?
• What are the positives and negatives of the wide variety of “use” definitions?
• Are there any comparable data points uniformly available for ebook use across platforms?
• Do COUNTER and EBL statistics provide sufficient data for assessing ebook usage?
• What are the implications of the lack of uniformity with data provided by ebook vendors and publishers?
Much of the confusion surrounding post-purchase ebook assessment stems from the wide variety of data options provided by publishers and third-party vendors. How many chapter downloads might equal how many page views? Does a total book download trump all other use, or do minutes spent inside a book, with specific pages viewed, carry more weight? And why does it matter?
Ebook assessment matters because it offers new ways of evaluating and measuring how readers read and how books are read. Detailed DDA data are showing us what’s possible to learn about ebook use. These data can show us when users dip in and out (pages skipped), when they judge a book by its preface (reading only the first few pages), or when they read the entire book (rarely!). Ebook data provide libraries with a peek inside how our users use books, something a print book could never tell us. By evaluating and analyzing ebook use data, libraries can begin to understand why users choose to read ebooks based on how they read ebooks, insights that will greatly enhance the development of ebook collections going forward.
Surveying the damage: Academic library serial cancellations 1987 through 1990
A longitudinal study of serial cancellations was conducted by analyzing the cancellation lists between 1987 and 1990 from five midwestern libraries of the Association of Research Libraries. The study was designed to test the primary hypothesis that large academic libraries, faced with the same negative impacts on their budgets, are cancelling the same or similar types of serials. This hypothesis was disproved. Results of the study showed that, of 6,503 cancelled titles, only 281 (4 percent) were cancelled at more than one library, resulting in 6,222 (96 percent) unique title cancellations within this survey. Results also provide an overall profile of the at-risk journal. An additional survey of collection development officers gives insight into the cancellation decision-making process. The impact on serial collections in research libraries is also explored.
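The shared-versus-unique breakdown above (281 titles cancelled at more than one library, 6,222 cancelled at exactly one) is a straightforward multiset count over the per-library cancellation lists. A minimal sketch with hypothetical lists:

```python
from collections import Counter

# Count titles cancelled at more than one library vs. cancelled at
# exactly one, given one cancellation list per library.
def cancellation_overlap(lists):
    """Return (shared, unique): titles appearing on more than one
    library's list, and titles appearing on exactly one."""
    counts = Counter(title for lst in lists for title in set(lst))
    shared = sum(1 for c in counts.values() if c > 1)
    unique = sum(1 for c in counts.values() if c == 1)
    return shared, unique

# Hypothetical example: title "B" is cancelled at two libraries.
libs = [["A", "B", "C"], ["B", "D"], ["E", "F"]]
shared, unique = cancellation_overlap(libs)  # -> (1, 5)
```

Applied to the study's five ARL lists, this computation yields the 281/6,222 split reported above.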
E-book Use and Value in the Humanities: Scholars’ Practices and Expectations
This research is a part of Values, Outcomes, and Return on Investment of Academic Libraries (“Lib-Value”), a three-year study funded by the Institute of Museum and Library Services (IMLS), grant #LG-06-09-0152-09. We gratefully acknowledge this support. In addition, the authors wish to thank Jean-Louise Zancanella, our graduate research assistant on this project, for her careful work. Portions of the survey results were presented at the Library Assessment Conference in Seattle, Washington, in August 2014 and will be published in those proceedings; other prepublication presentations took place at the Charleston Conference in November 2013 and 2014 (no proceedings publications are available; this paper represents the sole published culmination of this research).
Approval Plan Profile Assessment in Two Large ARL Libraries: University of Illinois at Urbana-Champaign and Pennsylvania State University
Two Association of Research Libraries member libraries, the University of Illinois at Urbana–Champaign (UIUC) and Pennsylvania State University (Penn State), evaluated their monograph acquisition approval plan profiles to answer basic questions concerning use, cost effectiveness, and coverage. Data were collected in tandem from vendors and local online systems to track book receipt, item circulation, and overlap between plans. The study period was fiscal year 2005 (July 1, 2004–June 30, 2005) for the approval plan purchasing data, and circulation use data were collected from July 1, 2004, through March 31, 2007, for both UIUC and Penn State. Multiple data points were collected for each title, including author, title, ISBN, publisher, Library of Congress classification number, purchase price, and circulation data. Results of the study measured the cost-effectiveness of each plan by subject and publisher, analyzed similarities and differences in use, and examined the overlap between the two approval plans. The goals were to establish a benchmark for consistently evaluating approval plan profile effectiveness and to provide a reproducible method with baseline data that will allow other libraries to collect comparable data and conduct their own studies.
Library collection deterioration: a study at the University of Illinois at Urbana-Champaign
A survey of bound items in the bookstacks of the University of Illinois Library at Urbana-Champaign was conducted following the methodology used in the 1979 survey of the Green Library stacks at Stanford University. A reliable random sampling technique was used. The survey found that 37.0% of the items at Illinois are seriously deteriorated (paper is embrittled), 33.6% are moderately deteriorated (paper is becoming brittle), and 29.4% are in good condition (paper shows no signs of deterioration). The total cost of the survey was $1,845.45 (excluding permanent staff salaries). The methodology can be adapted by other libraries for collection condition surveys.