Moving Digital Images
For over six years the Marquette University Archives managed patron-driven scanning requests using a desktop version of Extensis Portfolio while building thematically-based digital collections online using CONTENTdm. The purchase of a CONTENTdm license with an unlimited item limit allowed the department to move over 10,000 images previously cataloged in Portfolio into the online environment. While metadata in the Portfolio database could be exported to a text file and immediately imported into CONTENTdm’s project client, we recognized that we had an opportunity to analyze and clean our metadata using OpenRefine as a part of the process. We also hoped to update our Portfolio database and the metadata embedded in the files themselves to reflect the results of this cleanup. This article will discuss the process we used to clean metadata in OpenRefine for ingest into CONTENTdm, as well as the use of Portfolio and the VRA Panel Export-Import Tool for writing metadata changes back to the original image files.
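The article mentions cleaning metadata in OpenRefine before ingest. A minimal sketch of the kind of normalization involved is OpenRefine's key-collision ("fingerprint") clustering, which groups variant spellings of the same heading; the subject headings below are hypothetical examples, not from the Marquette collection:

```python
import string

def fingerprint(value: str) -> str:
    """Key-collision fingerprint, as in OpenRefine's default clustering:
    trim, lowercase, strip punctuation, then sort and deduplicate tokens."""
    value = value.strip().lower()
    value = value.translate(str.maketrans("", "", string.punctuation))
    return " ".join(sorted(set(value.split())))

def cluster_values(values):
    """Group variant strings that collide on the same fingerprint key."""
    clusters = {}
    for v in values:
        clusters.setdefault(fingerprint(v), []).append(v)
    return {k: vs for k, vs in clusters.items() if len(vs) > 1}

# Hypothetical subject headings with inconsistent forms:
headings = [
    "Marquette University -- Students",
    "marquette university students",
    "Students, Marquette University",
]
print(cluster_values(headings))  # all three collapse into one cluster
```

Clustered variants can then be reconciled to a single preferred form before the cleaned rows are re-exported for the CONTENTdm project client.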
“Swipe Aid”: Using Swipebox to Create a Side-Swipeable Image Gallery for Finding Aids
[Excerpt] The Swipe Aid Project provides another technical solution for the integration of digitized content with a finding aid, specifically aimed at mobile environments. This solution uses a free, open-source JavaScript library (Swipebox) to deliver a multi-device-friendly image gallery within the finding aid. The project description below provides a step-by-step explanation of how Swipebox is used to create a finding aid image gallery. This is followed by a summary of initial feedback, which demonstrates the importance of a finding aid image gallery, as well as desired functionality and further areas for development. This article contributes to the growing body of literature on “next-generation” finding aids by presenting a simple solution to the integration of digitized content for mobile environments.
Enhanced Management of Personal Astronomical Data with FITSManager
Although the roles of data centers and computing centers are becoming more and more important, and online research is becoming the mainstream in astronomy, individual research based on locally hosted data is still very common. With the increase in personal storage capacity, it is easy to find hundreds to thousands of FITS files on the personal computer of an astrophysicist. Because the Flexible Image Transport System (FITS) is a professional data format initiated by astronomers and used mainly within a small community, few data management toolkits exist for FITS files. Astronomers need a powerful tool to help them manage their local astronomical data. Although the Virtual Observatory (VO) is a network-oriented astronomical research environment, its applications and related technologies provide useful solutions for enhancing the management and utilization of astronomical data hosted on an astronomer's personal computer. FITSManager is such a tool: it provides astronomers with efficient management and utilization of their local data, bringing the VO to astronomers in a seamless and transparent way. FITSManager provides a rich set of functions for FITS file management, such as thumbnails, previews, type-dependent icons, header keyword indexing and search, and collaboration with other tools and online services. The development of FITSManager is an effort to fill the gap between the management and analysis of astronomical data.

Comment: 12 pages, 9 figures. Accepted for publication in New Astronomy
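The abstract mentions header keyword indexing and search. FITSManager's actual implementation is not shown here, but a minimal standard-library sketch of the underlying step is parsing keyword/value cards from a FITS header: headers are sequences of 80-character ASCII cards, padded to multiples of 2880 bytes and terminated by an END card. The sample header below is hand-built for demonstration:

```python
def parse_fits_header(block: bytes) -> dict:
    """Extract keyword/value pairs from the first header block of a FITS file."""
    header = {}
    for i in range(0, len(block), 80):
        card = block[i:i + 80].decode("ascii")
        keyword = card[:8].strip()
        if keyword == "END":            # END card terminates the header
            break
        if card[8:10] == "= ":          # value indicator per the FITS standard
            # Drop any inline comment after the '/' separator.
            header[keyword] = card[10:].split("/", 1)[0].strip()
    return header

# A minimal, hand-built header for demonstration:
cards = [
    "SIMPLE  =                    T",
    "BITPIX  =                   16",
    "NAXIS   =                    2",
    "OBJECT  = 'M31     '           / target name",
    "END",
]
block = "".join(c.ljust(80) for c in cards).encode("ascii").ljust(2880, b" ")
print(parse_fits_header(block))
```

An indexer can run this over every file in a directory and store the resulting keyword/value pairs for search; string values keep their FITS-style quoting, which a fuller parser would also strip.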
Automatic discovery of image families: Global vs. local features
Gathering a large collection of images has been made quite easy by social and image-sharing websites, e.g. flickr.com. However, using such collections faces the problem that they contain a large number of duplicates and highly similar images. This work tackles the problem of how to automatically organize image collections into sets of similar images, called image families hereinafter. We thoroughly compare the performance of two approaches to measuring image similarity: global descriptors vs. a set of local descriptors. We assess the performance of these approaches as the problem scales up to thousands of images and hundreds of families. We present our results on a new dataset of CD/DVD game covers.
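The global-descriptor approach can be illustrated with a toy sketch: compute one compact descriptor per image (here a normalized intensity histogram, an assumption standing in for whichever descriptors the paper actually uses) and greedily group images whose descriptors fall within a distance threshold. The tiny "images" below are synthetic pixel lists:

```python
from collections import Counter

def global_descriptor(pixels, bins=4, levels=256):
    """A toy global descriptor: a normalized intensity histogram."""
    hist = Counter(min(p * bins // levels, bins - 1) for p in pixels)
    n = len(pixels)
    return [hist.get(b, 0) / n for b in range(bins)]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def group_into_families(images, threshold=0.2):
    """Greedy grouping: an image joins the first family whose
    representative descriptor is within the distance threshold."""
    families = []   # list of (representative descriptor, member indices)
    for idx, pixels in enumerate(images):
        d = global_descriptor(pixels)
        for rep, members in families:
            if l1(rep, d) <= threshold:
                members.append(idx)
                break
        else:
            families.append((d, [idx]))
    return [members for _, members in families]

# Hypothetical tiny "images": 0 and 1 are near-duplicates, 2 differs.
imgs = [[10, 12, 200, 210], [11, 13, 198, 205], [120, 130, 125, 128]]
print(group_into_families(imgs))  # → [[0, 1], [2]]
```

A local-descriptor variant would instead match sets of keypoint features between image pairs, trading this single cheap comparison for more robust but more expensive matching.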
Towards improving web service repositories through semantic web techniques
The success of the Web services technology has brought topics such as software reuse and discovery once again onto the agenda of software engineers. While there are several efforts towards automating Web service discovery and composition, many developers still search for services via online Web service repositories and then combine them manually. However, our analysis of these repositories shows that, unlike traditional software libraries, they rely on little metadata to support service discovery. We believe that the major cause is the difficulty of automatically deriving metadata that would describe rapidly changing Web service collections. In this paper, we discuss the major shortcomings of state-of-the-art Web service repositories and, as a solution, we report on ongoing work and ideas on how to use techniques developed in the context of the Semantic Web (ontology learning, mapping, metadata-based presentation) to improve the current situation.
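As a rough illustration of automatically deriving metadata from service descriptions, the sketch below extracts candidate keywords by simple term frequency; ontology-learning pipelines typically start from similar term statistics before mapping terms onto concepts. The stopword list and service description are hypothetical:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "for", "and", "to", "in", "this", "is"}

def extract_keywords(description: str, top_n: int = 3):
    """Naive keyword extraction: the most frequent non-stopword terms."""
    tokens = re.findall(r"[a-z]+", description.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(top_n)]

# Hypothetical repository entry:
desc = ("Weather service returning weather forecasts for a given city; "
        "forecasts include temperature and wind.")
print(extract_keywords(desc))
```

Real pipelines would go further, e.g. mapping the extracted terms onto ontology concepts so that a search for "meteorology" can also find this service.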
Unsupervised Object Discovery and Tracking in Video Collections
This paper addresses the problem of automatically localizing dominant objects
as spatio-temporal tubes in a noisy collection of videos with minimal or even
no supervision. We formulate the problem as a combination of two complementary
processes: discovery and tracking. The first one establishes correspondences
between prominent regions across videos, and the second one associates
successive similar object regions within the same video. Interestingly, our
algorithm also discovers the implicit topology of frames associated with
instances of the same object class across different videos, a role normally
left to supervisory information in the form of class labels in conventional
image and video understanding methods. Indeed, as demonstrated by our
experiments, our method can handle video collections featuring multiple object
classes, and substantially outperforms the state of the art in colocalization,
even though it tackles a broader problem with much less supervision.
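The tracking half of the approach, associating successive similar object regions within a video, can be sketched as greedy box linking: each partial tube is extended by the best-overlapping detection in the next frame. This is a generic intersection-over-union linker under assumed detections, not the paper's actual formulation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def link_tubes(frames, threshold=0.3):
    """Greedily chain each box to the best-overlapping box in the next
    frame, producing spatio-temporal 'tubes' (lists of boxes over time)."""
    tubes = [[box] for box in frames[0]]
    for boxes in frames[1:]:
        unused = list(boxes)
        for tube in tubes:
            if not unused:
                break
            best = max(unused, key=lambda b: iou(tube[-1], b))
            if iou(tube[-1], best) >= threshold:
                tube.append(best)
                unused.remove(best)
    return tubes

# Hypothetical detections in three frames: one slowly drifting object.
frames = [[(10, 10, 50, 50)], [(12, 11, 52, 51)], [(15, 12, 55, 52)]]
print(link_tubes(frames))  # one tube spanning all three frames
```

The discovery half would then establish correspondences between such tubes across different videos, which is where the cross-video topology described in the abstract emerges.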
A Framework for Exploring and Evaluating Mechanics in Human Computation Games
Human computation games (HCGs) are a crowdsourcing approach to solving
computationally-intractable tasks using games. In this paper, we describe the
need for generalizable HCG design knowledge that accommodates the needs of both
players and tasks. We propose a formal representation of the mechanics in HCGs,
providing a structural breakdown to visualize, compare, and explore the space
of HCG mechanics. We present a methodology based on small-scale design
experiments using fixed tasks while varying game elements to observe effects on
both the player experience and the human computation task completion. Finally, we discuss applications of our framework using comparisons of prior HCGs and recent design experiments. Ultimately, we wish to enable easier exploration and development of HCGs, helping these games provide meaningful player experiences while solving difficult problems.

Comment: 11 pages, 5 figures
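One way to picture a structural breakdown of HCG mechanics is as records pairing a player action with its contribution to the underlying task, which makes designs directly comparable. The field names and example mechanics below are entirely hypothetical, offered only to illustrate the idea of a formal representation:

```python
from dataclasses import dataclass, field

@dataclass
class Mechanic:
    """One HCG mechanic: what the player does, and what the
    underlying human-computation task receives from it."""
    name: str
    player_action: str       # e.g. "label an image"
    task_contribution: str   # e.g. "one candidate annotation"

@dataclass
class GameDesign:
    mechanics: list = field(default_factory=list)

    def differs_by(self, other: "GameDesign") -> set:
        """Mechanic names present in one design but not the other."""
        mine = {m.name for m in self.mechanics}
        theirs = {m.name for m in other.mechanics}
        return mine ^ theirs

# Two hypothetical designs varying a single game element:
g1 = GameDesign([Mechanic("tag", "label an image", "one candidate annotation")])
g2 = GameDesign([Mechanic("tag", "label an image", "one candidate annotation"),
                 Mechanic("vote", "rate another player's label", "one validation")])
print(g1.differs_by(g2))  # → {'vote'}
```

Holding the task fixed while varying one mechanic, as in the paper's small-scale design experiments, corresponds to comparing designs that differ in exactly one such record.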