Combining data-driven MT systems for improved sign language translation
In this paper, we investigate the feasibility of combining two data-driven machine translation (MT) systems for the translation of sign languages (SLs). We take the MT systems of two prominent data-driven research groups, the MaTrEx system developed at DCU and the Statistical Machine
Translation (SMT) system developed at RWTH Aachen University, and apply their respective approaches to the task of translating Irish Sign Language and German Sign Language into English and German. In a set of experiments supported by automatic evaluation results, we show that
there is a definite value to the prospective merging of MaTrEx's Example-Based MT chunks and distortion limit increase with RWTH's constraint reordering.
Hand in hand: automatic sign language to English translation
In this paper, we describe the first data-driven automatic sign-language-to-speech translation system. While both sign language (SL) recognition and translation techniques exist, both use an intermediate notation system
that is not directly intelligible to untrained users. We combine an SL recognition framework with a state-of-the-art phrase-based machine translation (MT) system, using corpora of both American Sign Language and Irish Sign Language
data. In a set of experiments, we show the overall results and also illustrate the importance of including a
vision-based knowledge source in the development of a complete SL translation system.
Cockpit display of hazardous weather information
Information transfer and display issues associated with the dissemination of hazardous weather warnings are studied in the context of windshear alerts. Operational and developmental windshear detection systems are briefly reviewed. The July 11, 1988 microburst events observed as part of the Denver Terminal Doppler Weather Radar (TDWR) operational evaluation are analyzed in terms of information transfer and the effectiveness of the microburst alerts. Information transfer, message content, and display issues associated with microburst alerts generated from ground-based sources are evaluated by means of pilot opinion surveys and part-task simulator studies.
Label-Dependencies Aware Recurrent Neural Networks
In the last few years, Recurrent Neural Networks (RNNs) have proved effective
on several NLP tasks. Despite this success, their ability to model
sequence labeling is still limited. This has led research toward solutions
where RNNs are combined with models that have already proved effective in this
domain, such as CRFs. In this work we propose a far simpler but very
effective solution: an evolution of the simple Jordan RNN in which labels are
re-injected as input into the network and converted into embeddings, in the
same way as words. We compare this RNN variant to the other RNN models (Elman
and Jordan RNNs, LSTM, and GRU) on two well-known Spoken Language
Understanding (SLU) tasks. Thanks to label embeddings and their combination at
the hidden layer, the proposed variant, which uses more parameters than Elman
and Jordan RNNs but far fewer than LSTM and GRU, is not only more effective
than the other RNNs but also outperforms sophisticated CRF models. Comment: 22
pages, 3 figures. Accepted at the CICLing 2017 conference. Best
Verifiability, Reproducibility, and Working Description award.
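The core idea of the abstract above, re-injecting the previous label as an embedded input alongside the word, can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation: the dimensions, the greedy decoding, and the start-label index are all hypothetical, and the hidden-to-hidden recurrence of a full Jordan RNN is omitted so that the label feedback alone carries the recurrence.

```python
import numpy as np

# Hypothetical sketch of a Jordan-style RNN where the previous *label*
# is embedded and fed back as input, just like a word embedding.
rng = np.random.default_rng(0)

vocab, labels, d_w, d_l, d_h = 10, 4, 8, 8, 16
E_word = rng.normal(size=(vocab, d_w))       # word embedding table
E_label = rng.normal(size=(labels, d_l))     # label embedding table
W = rng.normal(size=(d_w + d_l, d_h)) * 0.1  # (word + label) input -> hidden
U = rng.normal(size=(d_h, labels)) * 0.1     # hidden -> label scores

def tag(sentence_ids):
    """Greedy decoding: feed the previously predicted label back as input."""
    prev_label = 0                           # assumed start-label index
    out = []
    for w in sentence_ids:
        x = np.concatenate([E_word[w], E_label[prev_label]])
        h = np.tanh(x @ W)                   # hidden state for this step
        prev_label = int(np.argmax(h @ U))   # predict, then re-inject next step
        out.append(prev_label)
    return out

print(tag([1, 2, 3]))
```

Because the label feedback goes through an embedding table, label dependencies are learned in the same continuous space as words, which is what lets this variant stay far smaller than LSTM or GRU cells.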
Hazard evaluation and operational cockpit display of ground-measured windshear data
Low-altitude windshear is the leading weather-related cause of fatal aviation accidents in the U.S. Since 1964, there have been 26 accidents attributed to windshear, resulting in over 500 fatalities. Low-altitude windshear can take several forms, ranging from macroscopic phenomena such as cold-warm gust fronts down to the small, intense downdrafts known as microbursts. Microbursts are particularly dangerous and difficult to detect due to their small size, short duration, and occurrence under both heavy-precipitation and virtually dry conditions. For these reasons, the real-time detection of windshear hazards is a very active field of research. Moreover, the advent of digital ground-to-air datalinks and electronic flight instrumentation opens up many options for implementing windshear alerts in the terminal-area environment. Study is required to determine the best content, format, timing, and cockpit presentation of windshear alerts in the modern ATC environment so as to best inform the flight crew without significantly increasing crew workload.
Effects of System Characteristics on Adopting Web-Based Advanced Traveller Information System: Evidence from Taiwan
This study proposes a behavioural intention model that integrates information quality, response time, and system accessibility into the original technology acceptance model (TAM) to investigate whether system characteristics affect the adoption of Web-based advanced traveller information systems (ATIS). This study empirically tests the proposed model using data collected from an online survey of Web-based advanced traveller information system users. Confirmatory factor analysis (CFA) was performed to examine the reliability and validity of the measurement model, and structural equation modelling (SEM) was used to evaluate the structural model. The results indicate that the three system characteristics had indirect effects on the intention to use through perceived usefulness, perceived ease of use, and attitude toward using. Information quality was the most important system characteristic factor, followed by response time and system accessibility. This study presents implications for practitioners and researchers, and suggests directions for future research.
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz) resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras, such as
low latency, high speed, and high dynamic range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available and the tasks that they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
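The event stream described above (per-pixel time, location, and sign of a brightness change) can be made concrete with a toy example. This is a hedged sketch, not from the survey: the event tuples and frame size are invented, and it shows only one common simple representation, accumulating signed events into an "event frame".

```python
import numpy as np

# Toy event stream: (timestamp_s, x, y, polarity), where polarity is
# +1 for a brightness increase and -1 for a decrease at pixel (x, y).
H, W = 4, 6
events = [
    (0.001, 2, 1, +1),
    (0.002, 2, 1, +1),
    (0.003, 5, 3, -1),
]

# Accumulate events into a signed "event frame" over the time window.
frame = np.zeros((H, W), dtype=np.int32)
for t, x, y, p in events:
    frame[y, x] += p  # increase events add, decrease events subtract

print(frame[1, 2], frame[3, 5])  # -> 2 -1
```

Unlike a conventional frame, most entries stay zero: pixels fire only where brightness actually changed, which is why event data is sparse and why the survey stresses that novel (e.g. asynchronous or spiking) processing methods are needed to exploit it.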
- …