Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios
Event cameras are bio-inspired vision sensors that output pixel-level
brightness changes instead of standard intensity frames. These cameras do not
suffer from motion blur and have a very high dynamic range, which enables them
to provide reliable visual information during high-speed motion or in scenes
with a high dynamic range. However, event cameras output little information
when there is little motion, such as when the sensor is nearly still.
Conversely, standard cameras provide instant and rich information about the
environment most of the time (in low-speed, well-lit scenarios), but they fail
severely during fast motion or in difficult lighting, such as
high-dynamic-range or low-light scenes. In this paper, we
present the first state estimation pipeline that leverages the complementary
advantages of these two sensors by fusing in a tightly-coupled manner events,
standard frames, and inertial measurements. We show on the publicly available
Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement
of 130% over event-only pipelines, and 85% over standard-frames-only
visual-inertial systems, while still being computationally tractable.
Furthermore, we use our pipeline to demonstrate, to the best of our knowledge,
the first autonomous quadrotor flight using an event camera for state
estimation, unlocking flight scenarios that were not reachable with traditional
visual-inertial odometry, such as low-light environments and high-dynamic-range
scenes.
Comment: 8 pages, 9 figures, 2 tables
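To make the idea of tightly-coupled fusion more concrete, the sketch below shows one way such an objective could be assembled: reprojection residuals from event-based and frame-based feature tracks are optimised jointly with inertial terms over a sliding window. This is an illustrative Python sketch under assumed data structures (the track tuples, the `predict` closures, and `sliding_window_cost` are all hypothetical), not the authors' pipeline.

```python
# Minimal sketch (not the authors' code) of a tightly-coupled objective that
# combines event-based and frame-based feature tracks with IMU terms.
import numpy as np

def project(pose, landmark):
    """Pinhole projection of a 3-D landmark into the camera at `pose`.
    pose: (R, t) with R a 3x3 rotation, t a 3-vector; landmark: 3-vector."""
    R, t = pose
    p_cam = R.T @ (landmark - t)          # world -> camera frame
    return p_cam[:2] / p_cam[2]           # normalised image coordinates

def sliding_window_cost(poses, landmarks, event_tracks, frame_tracks, imu_factors):
    """Sum of squared residuals over one optimisation window.

    event_tracks / frame_tracks: lists of (frame_idx, landmark_idx, observed_uv)
    imu_factors: list of (i, j, predict) where predict(pose_i) -> predicted pose_j
                 from IMU preintegration between keyframes i and j.
    """
    cost = 0.0
    # Reprojection residuals: tracks from events and from standard frames are
    # treated identically, which is the sense in which the fusion is
    # "tightly coupled" rather than combined after the fact.
    for frame_idx, lm_idx, uv in event_tracks + frame_tracks:
        residual = project(poses[frame_idx], landmarks[lm_idx]) - np.asarray(uv)
        cost += residual @ residual
    # Inertial residuals linking consecutive poses via the IMU prediction.
    for i, j, predict in imu_factors:
        R_pred, t_pred = predict(poses[i])
        R_j, t_j = poses[j]
        cost += np.sum((t_pred - t_j) ** 2) + np.sum((R_pred - R_j) ** 2)
    return cost
```

In practice this cost would be minimised with a nonlinear least-squares solver over the window of poses and landmarks; the sketch only illustrates how the two visual modalities and the inertial constraints enter a single objective.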
Predicting Remaining Useful Life using Time Series Embeddings based on Recurrent Neural Networks
We consider the problem of estimating the remaining useful life (RUL) of a
system or a machine from sensor data. Many approaches for RUL estimation based
on sensor data make assumptions about how machines degrade. Additionally,
sensor data from machines is noisy and often suffers from missing values in
many practical settings. We propose Embed-RUL: a novel approach for RUL
estimation from sensor data that does not rely on any degradation-trend
assumptions, is robust to noise, and handles missing values. Embed-RUL utilizes
a sequence-to-sequence model based on Recurrent Neural Networks (RNNs) to
generate embeddings for multivariate time series subsequences. The embeddings
for normal and degraded machines tend to be different, and are therefore found
to be useful for RUL estimation. We show that the embeddings capture the
overall pattern in the time series while filtering out the noise, so that the
embeddings of two machines with similar operational behavior are close to each
other, even when their sensor readings have significant and varying levels of
noise content. We perform experiments on the publicly available turbofan
engine dataset and a proprietary real-world dataset, and demonstrate that
Embed-RUL outperforms the previously reported state of the art on several
metrics.
Comment: Presented at 2nd ML for PHM Workshop at SIGKDD 2017, Halifax, Canada
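As a rough illustration of the embedding idea, the sketch below uses a GRU-based sequence-to-sequence autoencoder to map a multivariate sensor subsequence to a fixed-size embedding and scores degradation by distance to "healthy" embeddings. The framework (PyTorch), architecture, dimensions, and health score are assumptions made for illustration, not the Embed-RUL model itself.

```python
# Hedged sketch of a seq2seq (RNN autoencoder) embedding for RUL estimation,
# loosely following the Embed-RUL idea; details are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqEmbedder(nn.Module):
    def __init__(self, n_sensors, emb_dim=64):
        super().__init__()
        self.encoder = nn.GRU(n_sensors, emb_dim, batch_first=True)
        self.decoder = nn.GRU(n_sensors, emb_dim, batch_first=True)
        self.out = nn.Linear(emb_dim, n_sensors)

    def forward(self, x):
        # x: (batch, time, n_sensors) multivariate subsequence
        _, h = self.encoder(x)                 # h: (1, batch, emb_dim)
        # Reconstruct the time-reversed input from the embedding; training
        # would minimise the MSE between the output and torch.flip(x, [1]).
        dec_in = torch.flip(x, dims=[1])
        dec_out, _ = self.decoder(dec_in, h)
        return self.out(dec_out), h.squeeze(0)  # reconstruction, embedding

def health_score(embedding, normal_embeddings):
    """Distance of a test embedding to the mean 'healthy' embedding; a larger
    distance suggests more degradation (one simple proxy for remaining life)."""
    centre = normal_embeddings.mean(dim=0)
    return torch.norm(embedding - centre, dim=-1)
```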
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(in the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz), resulting
in reduced motion blur. Hence, event cameras have a large potential for
robotics and computer vision in scenarios that are challenging for traditional
cameras, such as those demanding low latency, high speed, or a high dynamic
range. However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras starting from their working principle, then the sensors
that are currently available, and the tasks they have been used for, from
low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
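As a minimal illustration of the event data format described above, the following sketch (not taken from the survey) defines an event as a (t, x, y, polarity) tuple and accumulates events over a time window into a simple 2-D frame, one common way to interface event streams with frame-based algorithms. The names and the accumulation scheme are illustrative.

```python
# Illustrative sketch of the event data format and of one common preprocessing
# step: accumulating events into a 2-D "event frame".
import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    t: float       # timestamp in seconds (microsecond resolution in practice)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def accumulate_events(events, height, width, t_start, t_end):
    """Sum event polarities per pixel over a time window, giving a crude
    event frame that frame-based algorithms can consume."""
    frame = np.zeros((height, width), dtype=np.int32)
    for ev in events:
        if t_start <= ev.t < t_end:
            frame[ev.y, ev.x] += ev.polarity
    return frame
```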
Reliable H∞ filtering for stochastic spatial–temporal systems with sensor saturations and failures
This study is concerned with the reliable H∞ filtering problem for a class of
stochastic spatial–temporal systems with sensor saturations and failures.
Different from continuous spatial–temporal systems, the dynamic behaviour of
the system under consideration evolves in a discrete rectangular region. The
aim of this study is to estimate the system states through the measurements
received from a set of sensors located at some specified points. In order to
cater for a more realistic signal transmission process, the phenomena of
sensor saturations and sensor failures are taken into account. By using the
vector reorganisation approach, the spatial–temporal system is first
transformed into an equivalent ordinary differential dynamic system. Then, a
filter is constructed and a sufficient condition is obtained under which the
filtering error dynamics is asymptotically stable in probability and the H∞
performance requirement is met. On the basis of the analysis results, the
desired reliable H∞ filter is designed. Finally, an illustrative example is
given to show the effectiveness of the proposed filtering scheme.
This work was supported by the Deanship of Scientific Research (DSR) at King
Abdulaziz University in Saudi Arabia under Grant 16-135-35-HiCi, the National
Natural Science Foundation of China under Grants 61329301, 61134009 and
61473076, the Shanghai Rising-Star Program of China under Grant 13QA1400100,
the Shu Guang project of the Shanghai Municipal Education Commission and
Shanghai Education Development Foundation under Grant 13SG34, the Program for
Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of
Higher Learning, the Fundamental Research Funds for the Central Universities,
the DHU Distinguished Young Professor Program, and the Alexander von Humboldt
Foundation of Germany.
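For reference, a generic form of the two ingredients named in the abstract is sketched below in LaTeX: a component-wise sensor saturation and a mean-square H∞ disturbance-attenuation requirement on the filtering error. The symbols and the exact performance index used in the paper may differ; this is only a sketch of the standard formulation.

```latex
% Generic saturation model and H-infinity criterion of the kind the abstract
% refers to; the paper's exact formulation may differ.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
Component-wise sensor saturation, with $y_{i,\max}$ the saturation level of
sensor $i$:
\[
  \operatorname{sat}(y_i) = \operatorname{sign}(y_i)\,
    \min\{\,|y_i|,\; y_{i,\max}\,\}, \qquad i = 1,\dots,m .
\]
Mean-square $H_\infty$ performance: the filtering-error output $\tilde z$ is
attenuated below a prescribed level $\gamma > 0$ for every admissible
disturbance $v$:
\[
  \mathbb{E}\!\left\{ \int_0^\infty \| \tilde z(t) \|^2 \, dt \right\}
  \;\le\; \gamma^2 \int_0^\infty \| v(t) \|^2 \, dt .
\]
\end{document}
```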
- …