
    Interacting multiple-models, state augmented Particle Filtering for fault diagnostics

    Particle Filtering (PF) is a model-based filtering technique which has drawn the attention of the Prognostics and Health Management (PHM) community due to its applicability to nonlinear models with non-additive and non-Gaussian noise. When multiple physical models can describe the evolution of the degradation of a component, the PF approach can be based on Multiple Swarms (MS) of particles, each one evolving according to a different model, from which the most accurate a posteriori distribution is selected. However, MS approaches are highly computationally demanding due to the large number of particles to simulate. In this work, to tackle the problem we have developed a PF approach based on the introduction of an augmented discrete state identifying the physical model describing the component evolution, which allows detecting the occurrence of abnormal conditions and identifying the degradation mechanism causing them. A crack growth degradation problem has been considered to prove the effectiveness of the proposed method in detecting crack initiation and identifying the active degradation mechanism. The comparison of the obtained results with those of a literature MS method and of an empirical statistical test has shown that the proposed method provides both an early detection of the crack initiation and an accurate and early identification of the degradation mechanism. A reduction of the computational cost is also achieved.
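    The augmented-state idea described above can be sketched in a few lines: each particle carries a continuous degradation level together with a discrete model index, and the posterior over that index identifies the active mechanism. The three growth rates, noise levels, and switching probability below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three hypothetical degradation models, indexed by the augmented discrete
# state m: 0 = no degradation, 1 = slow crack growth, 2 = fast crack growth.
RATES = np.array([0.0, 0.05, 0.2])

def pf_step(x, m, z, meas_std=0.05, switch_prob=0.02):
    """One particle-filter cycle with the model index m carried in the state."""
    n = x.size
    # Transition of the augmented discrete state: a few particles re-draw
    # their model index, so all mechanisms stay represented in the swarm.
    flip = rng.random(n) < switch_prob
    m = np.where(flip, rng.integers(0, 3, size=n), m)
    # Continuous-state propagation, each particle under its own model.
    x = x + RATES[m] + rng.normal(0.0, 0.01, size=n)
    # Weight by the measurement likelihood of z, then resample.
    w = np.exp(-0.5 * ((z - x) / meas_std) ** 2)
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)
    return x[idx], m[idx]

# Simulated run: the component starts degrading fast at step 20.
n = 500
x = np.zeros(n)
m = rng.integers(0, 3, size=n)
truth = 0.0
for k in range(40):
    truth += RATES[2] if k >= 20 else 0.0
    z = truth + rng.normal(0.0, 0.05)
    x, m = pf_step(x, m, z)

# The posterior over m both flags the abnormal condition and identifies
# the degradation mechanism (the fast-growth model should dominate).
p_model = np.bincount(m, minlength=3) / n
```

    A single swarm suffices because the resampling step concentrates particles on whichever model index best explains the measurements, which is the source of the computational saving over multiple swarms.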

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
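    The output model described above (a stream of timestamped, signed, per-pixel events) can be made concrete with a minimal sketch. The event values below are invented for illustration; accumulating signed polarities over a time window is one of the simplest event representations discussed in such surveys, not the sensor's own output format.

```python
import numpy as np

# Each event encodes a timestamp, a pixel location, and the sign of the
# brightness change (polarity). Values are illustrative, not real data.
events = np.array([
    (10, 2, 3, +1),
    (15, 2, 3, +1),
    (40, 1, 0, -1),
    (90, 2, 3, -1),
], dtype=[("t", "u8"), ("x", "u2"), ("y", "u2"), ("p", "i1")])

def accumulate(events, shape, t0, t1):
    """Sum event polarities per pixel for timestamps in [t0, t1),
    producing a crude 'event frame' from the asynchronous stream."""
    frame = np.zeros(shape, dtype=np.int32)
    win = events[(events["t"] >= t0) & (events["t"] < t1)]
    np.add.at(frame, (win["y"], win["x"]), win["p"])  # unbuffered accumulation
    return frame

frame = accumulate(events, shape=(4, 4), t0=0, t1=50)
```

    Because pixels fire independently and only on change, a static scene produces no events at all, which is where the low power consumption and absence of redundant data come from.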

    Multi-sensor data fusion techniques for RPAS detect, track and avoid

    Accurate and robust tracking of objects is of growing interest amongst the computer vision scientific community. The ability of a multi-sensor system to detect and track objects, and accurately predict their future trajectory, is critical in the context of mission- and safety-critical applications. Remotely Piloted Aircraft Systems (RPAS) are currently not equipped to routinely access all classes of airspace, since certified Detect-and-Avoid (DAA) systems are yet to be developed. Such capabilities can be achieved by incorporating both cooperative and non-cooperative DAA functions, as well as providing enhanced communications, navigation and surveillance (CNS) services. DAA is highly dependent on the performance of CNS systems for Detection, Tracking and Avoidance (DTA) tasks and maneuvers. In order to perform an effective detection of objects, a number of high-performance, reliable and accurate avionics sensors and systems are adopted, including non-cooperative sensors (visual and thermal cameras, Laser radar (LIDAR) and acoustic sensors) and cooperative systems (Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Collision Avoidance System (TCAS)). In this paper the sensor and system information candidates are fully exploited in a Multi-Sensor Data Fusion (MSDF) architecture. An Unscented Kalman Filter (UKF) and a more advanced Particle Filter (PF) are adopted to estimate the state vector of the objects for maneuvering and non-maneuvering DTA tasks. Furthermore, an artificial neural network is conceptualised to exploit statistical learning methods, combining the information obtained from the UKF and the PF. After describing the MSDF architecture, the key mathematical models for data fusion are presented. Conceptual studies are carried out on visual and thermal image fusion architectures.
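    The paper's full MSDF architecture is beyond a short snippet, but its core operation, combining estimates of the same quantity from independent sensors, can be sketched with the standard inverse-variance (information) fusion rule. The visual/thermal bearing values and their variances below are hypothetical.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance fusion of independent, unbiased sensor estimates:
    each estimate is weighted by its information (1 / variance)."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = float(np.sum(w * np.asarray(estimates, dtype=float)) / w.sum())
    fused_var = float(1.0 / w.sum())  # fused estimate is never less certain
    return fused, fused_var

# Visual and thermal cameras report the same target bearing (degrees);
# the thermal measurement is assumed four times noisier.
bearing, var = fuse([30.0, 32.0], [1.0, 4.0])
```

    Note how the fused variance (0.8) is smaller than either sensor's own, which is the basic motivation for fusing cooperative and non-cooperative sources rather than selecting one.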

    Robust Multi-Object Tracking: A Labeled Random Finite Set Approach

    The labeled random finite set based generalized multi-Bernoulli filter is a tractable analytic solution for the multi-object tracking problem. The robustness of this filter depends on certain knowledge of the multi-object system being available to it. This dissertation presents techniques for robust tracking, constructed upon the labeled random finite set framework, where complete information regarding the system is unavailable.
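    The labeled random finite set machinery itself is too involved for a snippet, but its basic building block, a Bernoulli track whose existence is a probability r rather than a certainty, can be illustrated. Below is the standard Bernoulli-filter update of r after a missed detection (a sketch of one ingredient, not the dissertation's filter).

```python
def existence_after_miss(r, p_d):
    """Posterior existence probability of a Bernoulli track when no
    detection arrives, given detection probability p_d: the miss is
    evidence against existence, but only in proportion to p_d."""
    return r * (1.0 - p_d) / (1.0 - r * p_d)

# A track believed to exist with probability 0.9 is not detected on a
# scan where detection probability is 0.8 (illustrative values).
r = existence_after_miss(0.9, 0.8)
```

    In labeled variants each such component additionally carries a distinct label, which is what turns a multi-object density into a set of identifiable tracks.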

    Human Motion Trajectory Prediction: A Survey

    With growing numbers of intelligent autonomous systems in human environments, the ability of such systems to perceive, understand and anticipate human behavior becomes increasingly important. Specifically, predicting future positions of dynamic agents, and planning in consideration of such predictions, are key tasks for self-driving vehicles, service robots and advanced surveillance systems. This paper provides a survey of human motion trajectory prediction. We review, analyze and structure a large selection of work from different communities and propose a taxonomy that categorizes existing methods based on the motion modeling approach and level of contextual information used. We provide an overview of the existing datasets and performance metrics. We discuss limitations of the state of the art and outline directions for further research. Comment: Submitted to the International Journal of Robotics Research (IJRR), 37 pages.
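    Among the motion-modeling approaches such taxonomies cover, the simplest baseline is physics-based constant-velocity extrapolation, against which learned predictors are commonly compared. A minimal sketch (the positions, velocity, and time step are illustrative assumptions):

```python
import numpy as np

def predict_cv(pos, vel, horizon, dt=0.1):
    """Constant-velocity baseline: extrapolate an agent's 2-D position
    over `horizon` future steps of duration dt."""
    steps = np.arange(1, horizon + 1)[:, None]  # column of step indices
    return pos + steps * dt * vel               # broadcast to (horizon, 2)

# A pedestrian at the origin walking at (1.0, 0.5) m/s, predicted 0.3 s ahead.
traj = predict_cv(np.array([0.0, 0.0]), np.array([1.0, 0.5]), horizon=3)
```

    Context-aware methods improve on this baseline precisely where it fails: turns, interactions with other agents, and environmental constraints.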

    Probabilistic Framework for Behavior Characterization of Traffic Participants Enabling Long Term Prediction

    Get PDF
    This research aims at developing new methods that predict the behaviors of the human-driven traffic participants to enable safe operation of autonomous vehicles in complex traffic environments. Autonomous vehicles are expected to operate amongst human-driven conventional vehicles in traffic for at least the next few decades. For safe navigation they will need to infer the intents as well as the behaviors of the human traffic participants using extrinsically observable information, so that their trajectories can be predicted for a time horizon long enough to do a predictive risk analysis and gracefully avert any risky situation. This research approaches this challenge by recognizing that any maneuver performed by a human driver can be divided into four stages that depend on the surrounding context: intent determination, maneuver preparation, gap acceptance and maneuver execution. It builds on the hypothesis that for a given driver, the behavior not only spans across these four maneuver stages, but across multiple maneuvers. As a result, identifying the driver behavior in any of these stages can help characterize the nature of all the subsequent maneuvers that the driver is likely to perform, thus resulting in a more accurate prediction for a longer time horizon. To enable this, a novel probabilistic framework is proposed that couples the different maneuver stages of the observed traffic participant together and associates them to a driving style. To realize this framework, two candidate Multiple Model Adaptive Estimation approaches were compared: the Autonomous Multiple Model (AMM) and the Interacting Multiple Model (IMM) filtering approaches. The IMM approach proved superior to the AMM approach and was validated for efficacy using a trajectory extracted from a real-world dataset. The proposed framework was then implemented by extending the validated IMM approach with contextual information of the observed traffic participant.
    The classification of the driving style of the traffic participant (behavior characterization) was then demonstrated for two use case scenarios. The proposed contextual IMM (CIMM) framework also showed improvements in the performance of the behavior classification of the traffic participants compared to the IMM for the identified use case scenarios. This outcome warrants further exploration of this framework for different traffic scenarios. Further, it contributes towards the ongoing endeavors for safe deployment of autonomous vehicles on public roads.
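    The IMM machinery underlying this framework can be sketched through its mode-probability recursion: a Markov matrix predicts how likely the driver is to switch behavior modes, and each mode's measurement likelihood then re-weights the modes. The two modes, transition matrix, and likelihoods below are hypothetical, not values from the dissertation.

```python
import numpy as np

def imm_mode_update(mu, P_trans, likelihoods):
    """One IMM mode-probability recursion: predict the mode distribution
    through the Markov transition matrix (row i = transitions from mode i),
    then update with each mode filter's measurement likelihood."""
    predicted = P_trans.T @ mu           # c_j = sum_i p_ij * mu_i
    posterior = likelihoods * predicted  # mu_j proportional to Lambda_j * c_j
    return posterior / posterior.sum()   # renormalize

# Two hypothetical behavior modes, e.g. "lane keeping" vs. "lane changing".
mu = np.array([0.5, 0.5])
P_trans = np.array([[0.95, 0.05],
                    [0.10, 0.90]])
# The lane-change mode's filter explains the latest measurement better.
mu = imm_mode_update(mu, P_trans, likelihoods=np.array([0.2, 0.8]))
```

    The mode probabilities themselves are the behavior-characterization signal: tracked over time, they indicate which maneuver stage or driving style currently best explains the observed trajectory.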