Evaluating Multilingual Gisting of Web Pages
We describe a prototype system for multilingual gisting of Web pages, and
present an evaluation methodology based on the notion of gisting as decision
support. This evaluation paradigm is straightforward, rigorous, permits fair
comparison of alternative approaches, and should easily generalize to
evaluation in other situations where the user is faced with decision-making on
the basis of information in restricted or alternative form.
Comment: 7 pages, uses psfig and aaai style
English → Russian MT evaluation campaign
This paper presents the settings and the results of the ROMIP 2013 MT shared task for the English→Russian language direction. The quality of generated translations was assessed using automatic metrics and human evaluation. We also discuss ways to reduce human evaluation effort by using pairwise sentence comparisons by human judges to simulate sort operations.
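The pairwise-comparison idea above can be sketched as a comparison-based sort whose comparator stands in for a human judge, so a full ranking needs only O(n log n) judgments. This is a minimal illustration, not the ROMIP 2013 protocol; the `judge` interface is a hypothetical stand-in.

```python
import functools

def rank_translations(translations, judge):
    """Rank candidate translations via repeated pairwise comparisons.

    `judge(a, b)` is a hypothetical stand-in for a human judge: it
    returns a negative number if `a` is the better translation,
    positive if `b` is better, and 0 for a tie.  Sorting with this
    comparator simulates sort operations over human judgments.
    """
    return sorted(translations, key=functools.cmp_to_key(judge))

# Toy judge that prefers shorter outputs, purely for demonstration.
toy_judge = lambda a, b: len(a) - len(b)
ranked = rank_translations(["a long translation", "short", "mid one"], toy_judge)
# ranked[0] == "short"
```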
Evaluating the Usability of Automatically Generated Captions for People who are Deaf or Hard of Hearing
The accuracy of Automated Speech Recognition (ASR) technology has improved,
but it is still imperfect in many settings. Researchers who evaluate ASR
performance often focus on improving the Word Error Rate (WER) metric, but WER
has been found to have little correlation with human-subject performance on
many applications. We propose a new captioning-focused evaluation metric that
better predicts the impact of ASR recognition errors on the usability of
automatically generated captions for people who are Deaf or Hard of Hearing
(DHH). Through a user study with 30 DHH users, we compared our new metric with
the traditional WER metric on a caption usability evaluation task. In a
side-by-side comparison of pairs of ASR text output (with identical WER), the
texts preferred by our new metric were preferred by DHH participants. Further,
our metric had significantly higher correlation with DHH participants'
subjective scores on the usability of a caption, as compared to the correlation
between WER metric and participant subjective scores. This new metric could be
used to select ASR systems for captioning applications, and it may be a better
metric for ASR researchers to consider when optimizing ASR systems.
Comment: 10 pages, 8 figures, published in ACM SIGACCESS Conference on
Computers and Accessibility (ASSETS '17)
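The WER metric discussed above is a word-level edit distance between the reference transcript and the ASR output, normalized by reference length. A minimal sketch follows (this is standard WER only, not the captioning-focused metric the paper proposes):

```python
def word_error_rate(reference, hypothesis):
    """Word Error Rate: word-level Levenshtein distance divided by the
    number of reference words.  Substitutions, insertions, and
    deletions are all weighted equally -- one reason two ASR outputs
    with identical WER can still differ in caption usability."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / len(ref)

# "the cat sat" vs "the cat sit": 1 substitution over 3 reference words
# word_error_rate("the cat sat", "the cat sit") == 1/3
```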
Multimedia information technology and the annotation of video
The state of the art in multimedia information technology has not progressed to the point where a single solution is available to meet all reasonable needs of documentalists and users of video archives. In general, we do not have an optimistic view of the usability of new technology in this domain, but digitization and digital processing power can be expected to cause a small revolution in the area of video archiving. The volume of data leads to two views of the future: on the pessimistic side, the overload of data will outstrip annotation capacity; on the optimistic side, there will be enough data from which to learn selected concepts that can be deployed to support automatic annotation. At the threshold of this interesting era, we attempt to describe the state of the art in technology. We sample the progress in text, sound, and image processing, as well as in machine learning.