Exploiting source similarity for SMT using context-informed features
In this paper, we introduce context-informed features in a log-linear phrase-based SMT framework; these features enable us to exploit source similarity in addition to the target similarity modelled by the language model. We present a memory-based classification framework that enables the estimation of these features while avoiding data-sparseness problems. We evaluate the performance of our approach on Italian-to-English and Chinese-to-English translation tasks using a state-of-the-art phrase-based SMT system, and report significant improvements in both BLEU and NIST scores when adding the context-informed features.
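The memory-based feature estimation described above can be sketched as a toy nearest-neighbour classifier that turns the relative frequency of target phrases among the closest stored instances into a context-informed feature. The class structure, the flat overlap metric, and the value of k below are illustrative assumptions, not the paper's actual configuration:

```python
from collections import Counter

class MemoryBasedFeature:
    """Toy memory-based classifier estimating
    P(target phrase | source phrase, context window).
    Illustrative sketch only; metric and k are assumptions."""

    def __init__(self):
        self.memory = []  # list of (feature_tuple, target) pairs

    def train(self, examples):
        # examples: iterable of ((source, left_ctx, right_ctx), target)
        self.memory.extend(examples)

    def estimate(self, source, left_ctx, right_ctx, k=3):
        query = (source, left_ctx, right_ctx)
        # rank stored instances by feature overlap with the query
        scored = sorted(
            self.memory,
            key=lambda ex: -sum(a == b for a, b in zip(ex[0], query)),
        )
        neighbours = [t for _, t in scored[:k]]
        # relative frequency among the k nearest neighbours becomes
        # the context-informed feature fed into the log-linear model
        return {t: c / k for t, c in Counter(neighbours).items()}

clf = MemoryBasedFeature()
clf.train([
    (("casa", "la", "bianca"), "house"),
    (("casa", "la", "discografica"), "label"),
    (("casa", "una", "bianca"), "house"),
])
probs = clf.estimate("casa", "la", "bianca")  # {'house': 2/3, 'label': 1/3}
```

Because estimation is a lookup over stored training instances rather than a parametric model, rare contexts still receive a usable (if coarse) probability, which is how such approaches sidestep sparseness.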
Real-Time State Estimation in a Flight Simulator Using fNIRS
Working memory is a key executive function for flying an aircraft. This function is particularly critical when pilots have to recall series of air traffic control instructions. However, working memory limitations may jeopardize flight safety. Since the functional near-infrared spectroscopy (fNIRS) method seems promising for assessing working memory load, our objective is to implement an on-line fNIRS-based inference system that integrates two complementary estimators. The first estimator is a real-time MACD-based state estimation algorithm dedicated to identifying the pilot’s instantaneous mental state (not-on-task vs. on-task); it does not require a calibration process to perform its estimation. The second estimator is an on-line SVM-based classifier able to discriminate task difficulty (low vs. high working memory load). These two estimators were tested with 19 pilots who were placed in a realistic flight simulator and asked to recall air traffic control instructions. We found that the estimated mental state matched the pilot’s real state significantly better than chance (62% global accuracy, 58% specificity, and 72% sensitivity). The second estimator, dedicated to assessing single-trial working memory load, led to 80% classification accuracy, 72% specificity, and 89% sensitivity. These two estimators establish reusable blocks for further fNIRS-based passive brain-computer interface development.
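As a rough illustration of the first estimator, a MACD-style detector compares a fast and a slow exponential moving average of the fNIRS signal and labels the state by their difference. The window lengths, the single-channel setup, and the zero threshold below are assumptions for the sketch, not the study's parameters:

```python
def ema(prev, x, n):
    """Exponential moving average update with period n."""
    alpha = 2.0 / (n + 1)
    return alpha * x + (1 - alpha) * prev

def macd_state_stream(samples, fast=5, slow=20):
    """Label each fNIRS sample on-task / not-on-task from the sign of
    (fast EMA - slow EMA). Calibration-free: only relative trends are
    used. Window lengths here are illustrative guesses."""
    fast_ema = slow_ema = samples[0]
    states = []
    for x in samples:
        fast_ema = ema(fast_ema, x, fast)
        slow_ema = ema(slow_ema, x, slow)
        states.append("on-task" if fast_ema > slow_ema else "not-on-task")
    return states

# a flat baseline followed by a rising oxygenation ramp:
# the detector starts at "not-on-task" and flips to "on-task"
sig = [0.0] * 10 + [0.1 * i for i in range(1, 21)]
labels = macd_state_stream(sig)
```

The appeal of such a detector is exactly what the abstract highlights: since it reacts to the crossing of two moving averages rather than to absolute signal levels, it needs no per-pilot calibration.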
English-Hindi transliteration using context-informed PB-SMT: the DCU system for NEWS 2009
This paper presents English-Hindi transliteration in the NEWS 2009 Machine Transliteration Shared Task, adding source context modeling to state-of-the-art log-linear phrase-based statistical machine translation (PB-SMT). Source context features enable us to exploit source similarity in addition to target similarity, as modelled by the language model. We use a memory-based classification framework that enables efficient estimation of these features while avoiding data sparseness problems. We carried out experiments both at character and transliteration unit (TU) level. Position-dependent source context features produce significant improvements in terms of all evaluation metrics.
LSTM Pose Machines
We observed that recent state-of-the-art results on single-image human pose estimation were achieved by multi-stage Convolutional Neural Networks (CNNs). Notwithstanding the superior performance on static images, the application of these models to videos is not only computationally intensive, it also suffers from performance degeneration and flickering. Such suboptimal results are mainly attributed to the inability to impose sequential geometric consistency, to handle severe image quality degradation (e.g. motion blur and occlusion), and to capture the temporal correlation among video frames. In this paper, we proposed a novel recurrent network to tackle these problems. We showed that if we were to impose a weight-sharing scheme on the multi-stage CNN, it could be rewritten as a Recurrent Neural Network (RNN). This property decouples the relationship among multiple network stages and results in significantly faster speed in invoking the network for videos. It also enables the adoption of Long Short-Term Memory (LSTM) units between video frames. We found that such a memory-augmented RNN is very effective in imposing geometric consistency among frames. It also handles input quality degradation in videos well while successfully stabilizing the sequential outputs. The experiments showed that our approach significantly outperformed current state-of-the-art methods on two large-scale video pose estimation benchmarks. We also explored the memory cells inside the LSTM and provided insights on why such a mechanism would benefit prediction for video-based pose estimation.
Comment: Poster at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018
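The central observation, that a multi-stage network with shared weights is exactly one recurrent cell applied repeatedly, can be shown with a toy numerical sketch. Dense layers stand in for the convolutional stages, the sizes are arbitrary, and the inter-frame LSTM is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# One pose-machine "stage": refine a belief vector from image features.
# With a single shared weight set, unrolling T stages is an RNN.
W_feat = rng.standard_normal((8, 16)) * 0.1    # illustrative sizes
W_belief = rng.standard_normal((8, 8)) * 0.1

def stage(features, belief):
    # shared-weight refinement step (stands in for the stage CNN)
    return np.tanh(features @ W_feat.T + belief @ W_belief.T)

def multi_stage(features, n_stages=3):
    belief = np.zeros(8)
    for _ in range(n_stages):          # same weights at every stage
        belief = stage(features, belief)
    return belief

feats = rng.standard_normal(16)
out_rnn = multi_stage(feats)           # "RNN" view: one cell, 3 steps
# identical to explicitly chaining three copies of the same stage
out_unrolled = stage(feats, stage(feats, stage(feats, np.zeros(8))))
assert np.allclose(out_rnn, out_unrolled)
```

Since only one set of stage weights exists, the per-frame cost no longer grows with the number of distinct stage parameters, which is the source of the speed-up the abstract claims for video.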
An end-to-end machine learning system for harmonic analysis of music
We present a new system for simultaneous estimation of keys, chords, and bass
notes from music audio. It makes use of a novel chromagram representation of
audio that takes perception of loudness into account. Furthermore, it is fully
based on machine learning (instead of expert knowledge), such that it is
potentially applicable to a wider range of genres as long as training data is
available. Compared to other models, the proposed system is fast and memory efficient, while achieving state-of-the-art performance.
Comment: MIREX report and preparation of journal submission
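A chroma feature of the kind this abstract builds on can be sketched as follows: fold the spectrum of an audio frame into 12 pitch classes, with a log compression as a crude stand-in for the paper's perceptual loudness weighting. All constants and the windowing choice are illustrative:

```python
import numpy as np

def chromagram(frame, sr=22050, ref_a4=440.0):
    """Fold one audio frame's spectrum into a 12-bin chroma vector.
    log1p compression is only a rough proxy for a loudness model."""
    frame = frame * np.hanning(len(frame))    # reduce spectral leakage
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    loud = np.log1p(spec)                     # dB-like compression
    chroma = np.zeros(12)
    for f, l in zip(freqs, loud):
        if f < 30:                            # skip sub-audio bins
            continue
        midi = 69 + 12 * np.log2(f / ref_a4)  # frequency -> pitch
        chroma[int(round(midi)) % 12] += l    # wrap octaves: C=0 .. B=11
    return chroma / (chroma.sum() + 1e-9)

# a 440 Hz sine concentrates energy in pitch class A (index 9)
t = np.arange(2048) / 22050.0
c = chromagram(np.sin(2 * np.pi * 440.0 * t))
```

Such a 12-dimensional, octave-folded representation is what downstream key and chord estimators consume; the paper's contribution is learning the loudness-aware mapping rather than hand-crafting it as done here.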
MAMo: Leveraging Memory and Attention for Monocular Video Depth Estimation
We propose MAMo, a novel memory and attention framework for monocular video depth estimation. MAMo can augment and improve any single-image depth estimation network into a video depth estimation model, enabling it to take advantage of temporal information to predict more accurate depth. In MAMo, we augment the model with a memory that aids depth prediction as the model streams through the video. Specifically, the memory stores learned visual and displacement tokens of the previous time instances. This allows the depth network to cross-reference relevant features from the past when predicting depth on the current frame. We introduce a novel scheme to continuously update the memory, optimizing it to keep tokens that correspond with both the past and the present visual information. We adopt an attention-based approach to process memory features, where we first learn the spatio-temporal relation among the resultant visual and displacement memory tokens using a self-attention module. Further, the output features of self-attention are aggregated with the current visual features through cross-attention. The cross-attended features are finally given to a decoder to predict depth on the current frame. Through extensive experiments on several benchmarks, including KITTI, NYU-Depth V2, and DDAD, we show that MAMo consistently improves monocular depth estimation networks and sets new state-of-the-art (SOTA) accuracy. Notably, our MAMo video depth estimation provides higher accuracy with lower latency when compared to SOTA cost-volume-based video depth models.
Comment: Accepted at ICCV 2023
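The two-step memory processing described above (self-attention over memory tokens, then cross-attention from current-frame features) can be sketched with plain scaled dot-product attention. All shapes are illustrative, and the learned query/key/value projections and the decoder are omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

def attention(q, k, v):
    """Single-head scaled dot-product attention, no projections."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)     # row-wise softmax
    return w @ v

# Illustrative shapes: 6 memory tokens, 4 current-frame tokens, dim 8.
memory = rng.standard_normal((6, 8))     # visual + displacement tokens
current = rng.standard_normal((4, 8))    # current-frame visual features

# 1) self-attention relates the memory tokens to one another
mem_ctx = attention(memory, memory, memory)
# 2) cross-attention: current features query the attended memory
fused = attention(current, mem_ctx, mem_ctx)
# 'fused' would then go to the depth decoder for the current frame
```

Because cross-attention reads from a compact token memory instead of building a dense cost volume over frames, the per-frame work stays small, which is consistent with the latency advantage the abstract reports.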