Transmedial Documentation for Non-Visual Image Access
In my doctoral studies on information accessibility for individuals who are blind or visually impaired, I have been exploring ways to make image documents more accessible. Doing so requires using an alternative sensory modality and translating the document into a different format. Many questions arise when we consider this process, among them: Is it the same document once we have converted it to an audio narrative about the work, a 3D topographic map of an artwork, or a musical interpretation? If it is not the same document, how faithful can the "trans-medial" translation be to the original work? Are such efforts valid and useful?
I hope to work with users who have low vision to determine if these image re-documentations are indeed useful and what means of representation are preferred. We now convert textbooks to audio books or electronic texts readable by special equipment, but how do we treat the images in these documents? The images are part of a whole (the textbook), but are also documents in and of themselves. They may have a history apart from the work within which they’re found. They may be reproduced with permission from copyright holders. What is the best practice for describing an image when reading a text to someone who cannot see?
These issues of documentation are part of the exploration now under way. I will present several examples of approaches to addressing the problem as provocation for discussion.
A Preliminary Literature Review of Visual Information Accessibility for Blind and Visually Impaired Individuals
Poster discussing a preliminary literature review of visual information accessibility for blind and visually impaired individuals.
Extraction and parsing of herbarium specimen data: Exploring the use of the Dublin core application profile framework
Herbaria around the world house millions of plant specimens; botanists and other researchers value these resources as ingredients in biodiversity research. Even when the specimen sheets are digitized and made available online, the critical information stored on each sheet is not in a usable (i.e., machine-processable) form. This paper describes a current research and development project that is designing and testing high-throughput workflows combining machine and human processes to extract and parse specimen label data. The primary focus of the paper is the metadata needs of the workflow and the creation of structured metadata records describing the plant specimens. In the project, we are exploring the use of the new Dublin Core Metadata Initiative framework for application profiles. First articulated as the Singapore Framework for Dublin Core Application Profiles in 2007, this framework is still in its infancy. Its promises of maximum interoperability, of documented metadata use for maximum reusability, and of metadata applications that conform to Web architectural principles provide the incentive to explore the framework and contribute implementation experience.
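To illustrate the kind of structured output such an extraction-and-parsing workflow targets, the sketch below maps a hypothetical flat label string onto Darwin Core-style fields. The label text, the regex patterns, and the parsing logic here are illustrative assumptions only, not the project's actual schema or parser; the Darwin Core term names (dwc:scientificName, dwc:recordedBy, dwc:eventDate, dwc:locality) are real vocabulary terms.

```python
import re

# Hypothetical label text; real herbarium labels are far more heterogeneous.
LABEL = "Quercus stellata Wangenh. | Collected by J. Smith | 12 May 1952 | Tarrant County, Texas"

# Illustrative patterns only -- not the project's actual parsing rules.
FIELD_PATTERNS = {
    "dwc:scientificName": r"^([A-Z][a-z]+ [a-z]+(?: [A-Za-z.]+)?)",
    "dwc:recordedBy":     r"Collected by ([^|]+)",
    "dwc:eventDate":      r"(\d{1,2} \w+ \d{4})",
    "dwc:locality":       r"\|\s*([^|]+)$",
}

def parse_label(label: str) -> dict:
    """Extract Darwin Core-style fields from a flat label string."""
    record = {}
    for term, pattern in FIELD_PATTERNS.items():
        match = re.search(pattern, label)
        if match:
            record[term] = match.group(1).strip()
    return record

record = parse_label(LABEL)
```

In the real workflow such regex heuristics would be only one machine step, with human review correcting and enhancing the records.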
High-Throughput Workflow for Computer-Assisted Human Parsing of Biological Specimen Label Data
4th International Conference on Open Repositories. This presentation was part of the session: Conference Posters.
Hundreds of thousands of specimens in herbaria and natural history museums worldwide are potential candidates for digitization, making them more accessible to researchers. An herbarium contains collections of preserved plant specimens created for scientific use. Herbarium specimens are ideal natural history objects for digitization: the plants are pressed flat, dried, and mounted on individual sheets of paper, creating a nearly two-dimensional object. Building digital repositories of herbarium specimens can increase use and exposure of the collections while simultaneously reducing physical handling. As important as the digitized specimens are, the data on the associated specimen labels provide critical information about each specimen (e.g., scientific name, geographic location, etc.). The volume and heterogeneity of these printed label data present challenges in transforming them into meaningful digital form to support research. The Apiary Project is addressing these challenges by exploring and developing transformation processes in a systematic workflow that yields high-quality machine-processable label data in a cost- and time-efficient manner. The University of North Texas's Texas Center for Digital Knowledge (TxCDK) and the Botanical Research Institute of Texas (BRIT), with funding from an Institute of Museum and Library Services National Leadership Grant, are conducting fundamental research with the goal of identifying how human intelligence can be combined with machine processes for effective and efficient transformation of specimen label information. The results of this research will yield a new workflow model for effective and efficient label data transformation, correction, and enhancement.
Institute of Museum and Library Services, National Leadership Grant
Outside the Frame: Modeling Discontinuities in Video Stimulus Streams
How are we to get beyond the literary metaphor that Augst asserts is the central problem with film analysis? How are we to step outside the "shot" as the unit of analysis, the "shot" which Bonitzer claims is useless for analysis because of researchers' "endlessly bifurcated" definitions of "shot"?
We have had success with a form of computational structural analysis that incorporates the viewer into the model: comparing changes in levels of red, green, and blue from frame to frame, and comparing the patterns of change with an expert film theorist's model.
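The frame-to-frame color comparison can be sketched roughly as follows, using NumPy on synthetic frames. The threshold value and the discontinuity rule here are hypothetical stand-ins; the actual analysis and the expert model are not reproduced.

```python
import numpy as np

def mean_rgb(frame: np.ndarray) -> np.ndarray:
    """Mean red, green, and blue levels of an H x W x 3 frame."""
    return frame.reshape(-1, 3).mean(axis=0)

def rgb_deltas(frames: list[np.ndarray]) -> np.ndarray:
    """Frame-to-frame change in mean RGB: one 3-vector per transition."""
    means = np.array([mean_rgb(f) for f in frames])
    return np.abs(np.diff(means, axis=0))

def flag_discontinuities(frames, threshold=50.0):
    """Frame indices where total RGB change exceeds a (hypothetical) cut threshold."""
    totals = rgb_deltas(frames).sum(axis=1)
    return [i + 1 for i, total in enumerate(totals) if total > threshold]

# Synthetic stream: two steady "shots" with an abrupt color change between them.
shot_a = [np.full((4, 4, 3), 20.0) for _ in range(3)]
shot_b = [np.full((4, 4, 3), 200.0) for _ in range(3)]
cuts = flag_discontinuities(shot_a + shot_b)
```

A single global threshold is, of course, exactly the kind of naive "shot" detector the poster argues against; the point of the sketch is only the per-frame color deltas on which richer models can be built.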
We are currently analyzing discontinuities in the entire data stream of a film, asking just what aspects of the data stream account for viewer reactions. We are examining the distribution of color, edges, luminance, and other components. By modeling changes in the various stimuli over time within a vector space model and comparing those changes with the responses of (at first) an expert viewer, then of a variety of viewers, we should be able to make strides in matching the most effective mode of representation to the individual user, and at the same time provide a set of analytic tools that account for the multiple time-varying signals that make up a movie, whether a cell phone video or a Hollywood blockbuster.
Significantly, we now step outside the frame as the unit of analysis and look to the possibilities of analysis at the sub-pixel level. That is, analysis of one component of a pixel location, such as luminance or merely the green component (no red or blue), provides a very fine-grained level of examination. At the same time, the vector space model provides a way of examining the stimulus effect of multiple threads that do not necessarily change in sync.
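One way to picture this vector-space treatment of separate, asynchronously changing components is the sketch below, which tracks luminance and the green channel alone as two threads of one frame vector. The two-feature vector is a hypothetical stand-in for the full feature set; the luminance weights are the standard ITU-R BT.601 coefficients.

```python
import numpy as np

def frame_features(frame: np.ndarray) -> np.ndarray:
    """Represent a frame as a vector of independently tracked components."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    luminance = (0.299 * r + 0.587 * g + 0.114 * b).mean()  # ITU-R BT.601 weights
    return np.array([luminance, g.mean()])

def component_changes(frames) -> np.ndarray:
    """Per-component change between consecutive frame vectors."""
    vectors = np.array([frame_features(f) for f in frames])
    return np.abs(np.diff(vectors, axis=0))

# Two frames where only green moves: luminance shifts a little, the green
# thread shifts a lot -- the components need not change in sync.
f1 = np.zeros((2, 2, 3))
f2 = np.zeros((2, 2, 3))
f2[..., 1] = 100.0
delta = component_changes([f1, f2])
```

Each row of `delta` is one transition, each column one thread, so asynchronous change across components is directly visible.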
As we consider these possibilities, we begin to see a general model of a document as a continuous stream of data that either (as a whole or in part) functions as a stimulus or does not.
Our poster will present graphical representations of changes in the data stream for the "Bodega Bay" sequence of Hitchcock's THE BIRDS and the reactions of Raymond Bellour, whose analyses and modeling of Hitchcock's works and of classic Hollywood film in general are held in high regard. We begin with Bellour and the Bodega Bay sequence because we have already published research on these data and thus have a significant foundation upon which to build. We will then apply the same techniques to a set of other works.
Apiary Project
Poster presented at the 2009 Taxonomic Database Working Group Annual Conference. This poster discusses an application profile using Darwin Core rendered in the new Dublin Core application profile framework. This is part of the Apiary Project, a collaboration of the Texas Center for Digital Knowledge at the University of North Texas and the Botanical Research Institute of Texas.
Apiary Project
This abstract describes a poster about the Apiary Project. The Apiary Project, a collaboration of the Texas Center for Digital Knowledge at the University of North Texas and the Botanical Research Institute of Texas, is building a framework and web-based workflow for the extraction and parsing of herbarium specimen data. The workflow will support the transformation of written or printed specimen data into a high-quality machine-processable XML format. This poster describes an event model that informed the development of the Apiary XML Application Schema.
Apiary Project
Poster presented at the 2010 Taxonomic Database Working Group Meeting. This poster discusses the Apiary Project, a collaboration of the Texas Center for Digital Knowledge at the University of North Texas and the Botanical Research Institute of Texas.
Apiary Project
Paper for the 2010 International iConference. This paper discusses extraction and parsing of herbarium specimen data to make the critical information available in digital form.