Student user preferences for features of next-generation OPACs: a case study of University of Sheffield international students
Purpose. The purpose of this study is to identify the features that international student users prefer for next-generation OPACs.
Design/methodology/approach. Sixteen international students at the University of Sheffield were interviewed in July 2008 to explore their preferences among potential features of next-generation OPACs. A semi-structured interview schedule with images of mock-up screens was used.
Findings. The results of the interviews were broadly consistent with previous studies. In general, students expect features in next-generation OPACs to save their time, be easy to use, and be relevant to their searches. This study found that recommender features and features that provide better navigation of search results are desired by users. However, Web 2.0 features, such as RSS feeds, and features that involved user participation were among the least popular.
Practical implications. This paper produces findings of relevance to any academic library seeking to implement a next-generation OPAC.
Originality/value. There have been no previously published research studies of users’ preferences among possible features of next-generation OPACs.
Italian center for Astronomical Archives publishing solution: modular and distributed
The Italian center for Astronomical Archives tries to provide astronomical
data resources as interoperable services based on IVOA standards. Its VO
expertise and knowledge come from active participation in the IVOA and the VO at the
European and international levels, with a twofold goal: to learn from the
collaboration and to provide inputs to the community. The first solution to build
an easy-to-configure and easy-to-maintain resource publisher conformant to VO standards
proved to be too optimistic. For this reason it has been necessary to rethink
the architecture with a modular system built around the messaging concept,
where each modular component speaks to the other interested parties through a
system of broker-managed queues. The first implemented protocol, the Simple
Cone Search, shows the messaging task architecture connecting the parametric
HTTP interface to the database backend access module and the logging module,
and allows multiple cone search resources to be managed together through a
configuration manager module. Although relatively young, it has already demonstrated the
flexibility required by the overall system when the database backend changed
from MySQL to PostgreSQL+PgSphere. Another implementation test has been made to
leverage task distribution over multiple servers to serve simultaneously: FITS
cube direct linking, cube cutouts and cube positional merging. Currently the
implementation of the SIA-2.0 standard protocol is ongoing, while for TAP we
will be adapting the TAPlib library. Alongside these tools a first
administration tool (TASMAN) has been developed to ease the build-up and
maintenance of TAP_SCHEMA tables, including also ObsCore maintenance capability.
Future work will be devoted to widening the range of VO protocols covered by
the set of available modules, improving the configuration management, and
developing special-purpose modules common to all the service components.
Comment: SPIE Astronomical Telescopes + Instrumentation 2018, Software and
Cyberinfrastructure for Astronomy V, pre-publication draft proceeding (reduced
abstract).
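The broker-managed queue idea described above can be sketched in a few lines. This is a hedged, in-process illustration only: the module names, the "obj1" placeholder row, and the routing function are invented here, and Python's stdlib queue.Queue stands in for the external broker-managed queues a real deployment would use.

```python
import queue

# One broker-managed queue per interested module; queue.Queue stands in for
# what a real deployment would delegate to an external message broker.
queues = {name: queue.Queue() for name in ("db_access", "logging", "replies")}

def publish(topic, message):
    """Broker facade: route a message onto the named queue."""
    queues[topic].put(message)

def db_access_worker():
    """Database backend module: consume one query, publish the result."""
    query = queues["db_access"].get()
    rows = [("obj1", query["RA"], query["DEC"])]  # placeholder for a real DB hit
    publish("replies", rows)

def cone_search_frontend(ra, dec, sr):
    """Parametric HTTP-style entry point: log the query and hand it off."""
    query = {"RA": ra, "DEC": dec, "SR": sr}
    publish("logging", "cone search: %r" % (query,))
    publish("db_access", query)
    db_access_worker()  # in the real system this consumes on another server
    return queues["replies"].get()
```

Because every module only talks to queues, swapping the backend (as happened with MySQL to PostgreSQL+PgSphere) touches only the consumer side of one queue.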
European Digital Libraries: Web Security Vulnerabilities
Purpose – The purpose of this paper is to investigate the web vulnerability challenges at European library web sites and how these issues can affect the data protection of their patrons.
Design/methodology/approach – A web vulnerability testing tool was used to analyze 80 European library sites in four countries to determine how many security vulnerabilities each had and what were the most common types of problems.
Findings – Analysis results from surveying the libraries show the majority have serious security flaws in their web applications. The research shows that despite country-specific laws mandating secure sites, system librarians have not implemented appropriate measures to secure their online information systems.
Research limitations/implications – Further research on library vulnerability throughout the world can be undertaken to educate librarians in other countries about the serious nature of protecting their systems.
Practical implications – The findings serve to remind librarians of the complexity in providing a secure online environment for their patrons and that a disregard or lack of awareness of securing systems could lead to serious vulnerabilities of the patrons' personal data and systems. Lack of consumer trust may result in a decreased use of online commerce and have serious repercussions for the municipal libraries. Several concrete examples of methods to improve security are provided.
Originality/value – The paper provides a current view of data security issues at Western European municipal library web sites. It serves as a useful summary of the technical and managerial measures librarians can take to mitigate inadequacies in their security implementations.
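As an illustrative aside (this is not the vulnerability scanner the study used), one minimal first-pass check a systems librarian could run themselves is to look for common security-related HTTP response headers; the header list and function name below are this sketch's own choices.

```python
# Hypothetical first-pass check: report which common security headers a site
# does not send. A dedicated scanner, as used in the study, tests far more.
from urllib.request import urlopen

EXPECTED = ("Strict-Transport-Security", "Content-Security-Policy",
            "X-Content-Type-Options", "X-Frame-Options")

def missing_security_headers(url):
    """Return the expected security headers absent from the site's response."""
    with urlopen(url) as response:
        sent = {k.lower() for k in response.headers.keys()}
    return [h for h in EXPECTED if h.lower() not in sent]

# e.g. missing_security_headers("https://library.example.org")
```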
Book Recommendation System using Data Mining for the University of Hong Kong Libraries
This paper describes the theoretical design of a Library Recommendation System, employing the k-means clustering data mining algorithm, with subject headings of borrowed items as the basis for generating pertinent recommendations. Sample data from the University of Hong Kong Libraries (HKUL) has been used in a quantitative approach to study the existing Library Information System, Innopac. Data warehousing and data mining (k-means clustering) techniques are discussed. The primary benefit of the system is the higher quality of academic research ensuing from better search results; personalization improves the individual effectiveness of learners and results in better utilization of library resources overall.
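A hedged sketch of the clustering step described above (not HKUL's actual implementation; the vocabulary, histories, and function names are invented for illustration): borrowers become count vectors over subject headings, k-means groups them, and recommendations are drawn from within a borrower's cluster.

```python
from collections import Counter
import math
import random

def vectorize(histories, vocabulary):
    """Turn each borrower's list of subject headings into a count vector."""
    return [[Counter(h)[s] for s in vocabulary] for h in histories]

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means: assign each point to its nearest centroid, recompute."""
    random.seed(seed)
    centroids = random.sample(points, k)
    labels = []
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Toy borrowing histories (hypothetical): two data-mining readers and two
# astronomy readers. Same-cluster borrowers share a recommendation pool.
vocab = ["Data mining", "Astronomy", "Databases"]
histories = [["Data mining", "Databases"], ["Data mining"],
             ["Astronomy"], ["Astronomy", "Astronomy"]]
labels = kmeans(vectorize(histories, vocab), k=2)
```

Recommendations then come from items popular within a borrower's cluster that the borrower has not yet borrowed.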
Iris: an Extensible Application for Building and Analyzing Spectral Energy Distributions
Iris is an extensible application that provides astronomers with a
user-friendly interface capable of ingesting broad-band data from many
different sources in order to build, explore, and model spectral energy
distributions (SEDs). Iris takes advantage of the standards defined by the
International Virtual Observatory Alliance, but hides the technicalities of
such standards by implementing different layers of abstraction on top of them.
Such intermediate layers provide hooks that users and developers can exploit in
order to extend the capabilities provided by Iris. For instance, custom Python
models can be combined in arbitrary ways with the Iris built-in models or with
other custom functions. As such, Iris offers a platform for the development and
integration of SED data, services, and applications, either from the user's
system or from the web. In this paper we describe the built-in features
provided by Iris for building and analyzing SEDs. We also explore in some
detail the Iris framework and software development kit, showing how astronomers
and software developers can plug their code into an integrated SED analysis
environment.
Comment: 18 pages, 8 figures, accepted for publication in Astronomy & Computing.
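To make the custom-model idea concrete, here is a shape-only sketch, assuming the common convention of user-defined models as plain Python functions of (parameters, grid); the function names and the broken power law itself are invented for illustration and do not reproduce the actual Iris registration API.

```python
def broken_powerlaw(pars, x):
    """Hypothetical custom SED component: a power law with a spectral break."""
    amplitude, x_break, index1, index2 = pars
    return [amplitude * (xi / x_break) ** (index1 if xi < x_break else index2)
            for xi in x]

def composite(pars, x, builtin):
    """Combine the custom component point-wise with any other model of x."""
    return [a + b for a, b in zip(broken_powerlaw(pars, x), builtin(x))]
```

The abstraction layers Iris provides play the role of `composite` here: once a custom function follows the expected signature, it can be combined arbitrarily with built-in models.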
Low-Hanging Fruit and Pain Points: An Analysis of Change Implementation from Flash Usability Testing at Duke University Libraries
This paper describes a mixed methods study of change implementation resulting from flash usability testing at Duke University Libraries. Flash usability testing, also known as guerrilla or on-the-fly testing, is a method that allows researchers to collect large amounts of data in a short amount of time with quick, unplanned think-aloud tests in a high-traffic library space.
Data from usability reports was triangulated with data from interviews with members of Duke University Libraries' WebX team. WebX is a cross-departmental team that acts as "functional owner" of the libraries' web presence. It commissions flash usability tests and uses the data to implement changes or spur further research. Interviews incorporated a card sort of the recommendations from every flash usability test. The paper unearths myriad attitudes toward the libraries' web presence and perceptions of the role of usability testing in the academic library. Additionally, the paper details the subsequent effectiveness of change implementation.
Multi-modal Embedding Fusion-based Recommender
Recommendation systems have lately been popularized globally, with primary
use cases in online interaction systems and a significant focus on e-commerce
platforms. We have developed a machine learning-based recommendation platform
which can be easily applied to almost any domain of items and/or actions. Contrary
to existing recommendation systems, our platform supports multiple types of
interaction data with multiple modalities of metadata natively. This is
achieved through multi-modal fusion of various data representations. We
deployed the platform into multiple e-commerce stores of different kinds, e.g.
food and beverages, shoes, fashion items, telecom operators. Here, we present
our system, its flexibility and performance. We also show benchmark results on
open datasets that significantly outperform state-of-the-art prior work.
Comment: 7 pages, 8 figures
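A hedged sketch of the fusion idea in this abstract (all names and vectors below are invented; the authors' deployed system is far richer): per-modality embeddings are L2-normalized and concatenated, and items are ranked by dot product in the fused space.

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length (zero vectors are left unchanged)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(*modality_vectors):
    """Concatenate per-modality embeddings into one multi-modal embedding."""
    fused = []
    for v in modality_vectors:
        fused.extend(l2_normalize(v))
    return fused

def recommend(user, catalog, top_k=2):
    """Rank (name, embedding) pairs by dot product with the user embedding."""
    scored = sorted(catalog,
                    key=lambda kv: -sum(a * b for a, b in zip(user, kv[1])))
    return [name for name, _ in scored[:top_k]]

# Hypothetical toy data: each fuse() call joins a behavioural and a textual
# embedding for one user or item.
user = fuse([1.0, 0.0], [0.2, 0.8])
catalog = [("shoes", fuse([1.0, 0.1], [0.1, 0.9])),
           ("phone", fuse([0.0, 1.0], [0.9, 0.0]))]
```

Normalizing each modality before concatenation keeps any single modality from dominating the fused similarity score.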
Building Responsive Library Collections with the Getting It System Toolkit
The Getting It System Toolkit (GIST), a suite of free and open source tools & software, leverages systems to optimize library acquisitions and deselection workflow, reducing the staff time necessary to make informed decisions and process materials. The Toolkit is divided into two functions.

GIST for ILLiad consists of three components that enhance the ILLiad® interlibrary loan request management software: addons, webpage customizations, and the acquisitions manager. All three components may be selectively utilized in ILLiad; for instance, ILLiad web pages may be customized to add full-text discovery to the end-user request interface, or an ILLiad addon can help an ILL purchase-on-demand program discover the best way to purchase items that are difficult to borrow. By combining all three and customizing these components for your library, you achieve significant benefits and optimize the combination of acquisitions and ILL services.

The GIST Gift & Deselection Manager (GDM) is designed to manage and streamline library workflow for processing gifts and evaluating materials for weeding. It is standalone open-source software that automates the gathering of data for evaluating donations, including holdings, edition comparisons, full-text, and other data. The GDM also enables collection managers to perform item-by-item deselection or use the batch analysis tool to create custom deselection reports for large weeding projects.

Building Responsive Library Collections with the Getting It System Toolkit combines helpful how-tos from the developers themselves and first-hand implementation accounts from users of these time-saving tools. The volume is split between the Toolkit's use with ILLiad and with the GDM, providing easy reference for users. This manual is an invaluable resource to any library using, or considering using, the Getting It System Toolkit.
With contributions by: Kerri Goergen-Doll, Oregon State University; Eric Joslin, Washington University in St. Louis; Ryan Litsey, Texas Tech University; Micquel Little, Monroe Community College, formerly at St. John Fisher College; Katherine Mason, Central Michigan University, formerly at Old Dominion University; Kate Ross, St. John Fisher College; Susanna Van Sant, Tompkins Cortland Community College
The Family of MapReduce and Large Scale Data Processing Systems
In the last two decades, the continuous increase of computational power has
produced an overwhelming flow of data which has called for a paradigm shift in
the computing architecture and large scale data processing mechanisms.
MapReduce is a simple and powerful programming model that enables easy
development of scalable parallel applications to process vast amounts of data
on large clusters of commodity machines. It isolates the application from the
details of running a distributed program such as issues on data distribution,
scheduling and fault tolerance. However, the original implementation of the
MapReduce framework had some limitations that have been tackled by many
research efforts in several follow-up works since its introduction. This article
provides a comprehensive survey of a family of approaches and mechanisms for
large-scale data processing that have been implemented based on the
original idea of the MapReduce framework and are currently gaining a lot of
momentum in both research and industrial communities. We also cover a set of
introduced systems that have been implemented to provide declarative
programming interfaces on top of the MapReduce framework. In addition, we
review several large scale data processing systems that resemble some of the
ideas of the MapReduce framework for different purposes and application
scenarios. Finally, we discuss some of the future research directions for
implementing the next generation of MapReduce-like solutions.
Comment: arXiv admin note: text overlap with arXiv:1105.4252 by other authors.
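The programming model the survey builds on is commonly illustrated with the canonical word-count example, sketched here in single-process Python with the shuffle reduced to a sort; real frameworks distribute these phases across a cluster and handle the data distribution, scheduling, and fault tolerance mentioned above.

```python
from itertools import groupby
from operator import itemgetter

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the input."""
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    """Shuffle (sort by key), then reduce: sum the counts for each word."""
    pairs = sorted(pairs, key=itemgetter(0))
    return {word: sum(count for _, count in group)
            for word, group in groupby(pairs, key=itemgetter(0))}
```

The application supplies only these two functions; everything between them is the framework's responsibility, which is exactly the isolation the abstract describes.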