
    Recent advances on recursive filtering and sliding mode design for networked nonlinear stochastic systems: A survey

    Copyright © 2013 Jun Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
    Some recent advances in the recursive filtering and sliding mode design problems for nonlinear stochastic systems with network-induced phenomena are surveyed. The network-induced phenomena under consideration mainly include missing measurements, fading measurements, signal quantization, probabilistic sensor delays, sensor saturations, randomly occurring nonlinearities, and randomly occurring uncertainties. With respect to these network-induced phenomena, the developments on filtering and sliding mode design problems are systematically reviewed. In particular, some recent results on recursive filtering for time-varying nonlinear stochastic systems and on sliding mode design for time-invariant nonlinear stochastic systems are given, respectively. Finally, conclusions are drawn and some potential future research directions are pointed out.
    This work was supported in part by the National Natural Science Foundation of China under Grant nos. 61134009, 61329301, 61333012, 61374127, and 11301118; the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant no. GR/S27658/01; the Royal Society of the UK; and the Alexander von Humboldt Foundation of Germany.
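The missing-measurements phenomenon this survey opens with can be illustrated with a minimal sketch: a Kalman filter whose update step runs only when a Bernoulli arrival variable indicates the measurement actually reached the estimator. The model matrices, noise levels, and arrival probability below are invented for illustration, not taken from the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D constant-velocity model with Bernoulli missing measurements:
# gamma = 1 (measurement arrives) with probability p_arrive. All values assumed.
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # measure position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance
p_arrive = 0.8                           # measurement arrival probability

x_true = np.array([0.0, 0.5])
x_est = np.zeros(2)
P = np.eye(2)

for _ in range(50):
    # simulate the true system and the random packet arrival
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    gamma = rng.random() < p_arrive
    # predict
    x_est = A @ x_est
    P = A @ P @ A.T + Q
    # update only when the measurement is not missing
    if gamma:
        z = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)
        P = (np.eye(2) - K @ H) @ P

print(abs(x_est[0] - x_true[0]))
```

When packets are missing, the filter simply propagates its prediction and its covariance grows until the next arrival, which is the basic mechanism the surveyed recursive filters generalize.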

    Fusion of IMU and Vision for Absolute Scale Estimation in Monocular SLAM

    The fusion of inertial and visual data is widely used to improve an object's pose estimation. However, this type of fusion is rarely used to estimate further unknowns in the visual framework. In this paper we present and compare two different approaches to estimating the unknown scale parameter in a monocular SLAM framework. Directly linked to the scale is the estimation of the object's absolute velocity and position in 3D. The first approach is a spline fitting task adapted from Jung and Taylor and the second is an extended Kalman filter. Both methods have been simulated offline on arbitrary camera paths to analyze their behavior and the quality of the resulting scale estimation. We then embedded an online multi-rate extended Kalman filter in the Parallel Tracking and Mapping (PTAM) algorithm of Klein and Murray together with an inertial sensor. In this inertial/monocular SLAM framework, we show a real-time, robust and fast-converging scale estimation. Our approach depends neither on known patterns in the vision part nor on complex temporal synchronization between the visual and inertial sensors.
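The core idea behind both approaches can be sketched in a few lines: the metric acceleration reported by the IMU should equal the unknown scale times the second derivative of the unscaled visual trajectory, so the scale falls out of a least-squares fit. This is a hedged toy illustration (1D trajectory, made-up scale and noise), not the authors' implementation.

```python
import numpy as np

# Toy setup: SLAM reports position only up to an unknown scale factor.
dt = 0.05
t = np.arange(0.0, 2.0, dt)
true_scale = 2.5
p_metric = np.sin(t)                 # "true" metric position (1D for brevity)
p_vision = p_metric / true_scale     # unscaled position from monocular SLAM

# metric acceleration an IMU would report, with a little noise
a_imu = -np.sin(t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)

# second finite difference of the visual trajectory
a_vision = np.gradient(np.gradient(p_vision, dt), dt)

# a_imu ≈ scale * a_vision  →  closed-form least-squares estimate
scale_est = float(a_vision @ a_imu / (a_vision @ a_vision))
print(scale_est)
```

The EKF variant estimates the same quantity recursively, which is what makes real-time convergence inside PTAM possible.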

    Sensor Augmented Virtual Reality Based Teleoperation Using Mixed Autonomy

    A multimodal teleoperation interface is introduced, featuring an integrated virtual reality (VR) based simulation augmented by sensors and image processing capabilities onboard the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with live video feed and prediction states, thereby creating a multimodal control interface. VR addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an onboard computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle's teleoperated state. The system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The VR-based multimodal teleoperation interface is expected to be more adaptable and intuitive when compared with other interfaces.
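The mixed-autonomy hand-off described above amounts to a small mode machine: teleoperation is suspended when an obstacle is detected and automatically re-established once local avoidance completes. The names and transition conditions below are hypothetical, a minimal sketch of that behavior rather than the paper's actual controller.

```python
from enum import Enum, auto

class Mode(Enum):
    TELEOPERATED = auto()
    AUTONOMOUS_AVOIDANCE = auto()

def next_mode(mode, obstacle_ahead, avoidance_complete):
    """One step of the assumed mixed-autonomy switch."""
    if mode is Mode.TELEOPERATED and obstacle_ahead:
        return Mode.AUTONOMOUS_AVOIDANCE       # take over around the obstacle
    if mode is Mode.AUTONOMOUS_AVOIDANCE and avoidance_complete:
        return Mode.TELEOPERATED               # automatic re-establishment
    return mode

mode = Mode.TELEOPERATED
mode = next_mode(mode, obstacle_ahead=True, avoidance_complete=False)
print(mode.name)
```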

    Communication Scheduling by Deep Reinforcement Learning for Remote Traffic State Estimation with Bayesian Inference

    Traffic awareness is the prerequisite of autonomous driving. Given the limitations of on-board sensors (e.g., precision and price), remote measurement from either infrastructure or other vehicles can improve traffic safety. However, the wireless communication carrying the measurement result undergoes fading, noise and interference and has a certain probability of outage. When the communication fails, the vehicle state can only be predicted by Bayesian filtering, with low precision. Higher communication resource utilization (e.g., transmission power) reduces the outage probability and hence results in improved estimation precision. Power control subject to an estimate variance constraint is a difficult problem due to the complicated mapping from transmit power to vehicle-state estimate variance. In this paper, we develop an estimator consisting of several Kalman filters (KFs) or extended Kalman filters (EKFs) and an interacting multiple model (IMM) to estimate and predict the vehicle state. We propose to apply deep reinforcement learning (DRL) for the transmit power optimization. In particular, we consider an intersection and a lane-changing scenario and apply proximal policy optimization (PPO) and soft actor-critic (SAC) to train the DRL model. Testing results show satisfactory power control strategies that confine the estimate variances below a given threshold, with SAC achieving higher performance than PPO.
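The coupling between transmit power and estimate variance described above can be made concrete with a small simulation: higher power lowers the outage probability of the measurement link, and during an outage the estimator can only run the Kalman prediction step, so its variance grows. All constants and the outage model below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# scalar linear system, assumed values
A, H, Q, R = 1.0, 1.0, 0.1, 0.5

def outage_probability(power, noise=1.0):
    # toy Rayleigh-fading-style outage model: more power → fewer outages
    return 1.0 - np.exp(-noise / power)

def mean_variance(power, steps=200):
    x, x_est, P = 0.0, 0.0, 1.0
    variances = []
    for _ in range(steps):
        x = A * x + rng.normal(0.0, np.sqrt(Q))          # true state
        x_est, P = A * x_est, A * P * A + Q              # predict
        if rng.random() > outage_probability(power):     # packet received?
            z = H * x + rng.normal(0.0, np.sqrt(R))
            K = P * H / (H * P * H + R)
            x_est, P = x_est + K * (z - H * x_est), (1.0 - K * H) * P
        variances.append(P)
    return float(np.mean(variances))

var_lowpower = mean_variance(power=0.5)
var_highpower = mean_variance(power=5.0)
print(var_lowpower, var_highpower)
```

The DRL power controller in the paper effectively learns where on this power-variance trade-off curve to operate while respecting the variance constraint.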

    Intelligent GNSS Positioning using 3D Mapping and Context Detection for Better Accuracy in Dense Urban Environments

    Conventional GNSS positioning in dense urban areas can exhibit errors of tens of meters due to blockage and reflection of signals by the surrounding buildings. Here, we present a full implementation of the intelligent urban positioning (IUP) 3D-mapping-aided (3DMA) GNSS concept. This combines conventional ranging-based GNSS positioning enhanced by 3D mapping with the GNSS shadow-matching technique. Shadow matching determines position by comparing the measured signal availability with that predicted over a grid of candidate positions using 3D mapping. Thus, IUP uses both pseudo-range and signal-to-noise measurements to determine position. All algorithms incorporate terrain-height aiding and use measurements from a single epoch in time. Two different 3DMA ranging algorithms are presented, one based on least-squares estimation and the other based on computing the likelihoods of a grid of candidate position hypotheses. The likelihood-based ranging algorithm uses the same candidate position hypotheses as shadow matching and makes different assumptions about which signals are direct line-of-sight (LOS) and non-line-of-sight (NLOS) at each candidate position. Two different methods for integrating likelihood-based 3DMA ranging with shadow matching are also compared. In the position-domain approach, separate ranging and shadow-matching position solutions are computed, then averaged using direction-dependent weighting. In the hypothesis-domain approach, the candidate position scores from the ranging and shadow matching algorithms are combined prior to extracting a joint position solution. Test data was recorded using a u-blox EVK M8T consumer-grade GNSS receiver and an HTC Nexus 9 tablet at 28 locations across two districts of London. The City of London is a traditional dense urban environment, while Canary Wharf is a modern environment. The Nexus 9 tablet data was recorded using the Android Nougat GNSS receiver interface and is representative of future smartphones.
Best results were obtained using the likelihood-based 3DMA ranging algorithm and hypothesis-based integration with shadow matching. With the u-blox receiver, the single-epoch RMS horizontal (i.e., 2D) error across all sites was 4.0 m, compared to 28.2 m for conventional positioning, a factor of 7.1 improvement. Using the Nexus tablet, the intelligent urban positioning RMS error was 7.0 m, compared to 32.7 m for conventional GNSS positioning, a factor of 4.7 improvement. An analysis of processing and data requirements shows that intelligent urban positioning is practical to implement in real time on a mobile device or a server. Navigation and positioning are inherently dependent on the context, which comprises both the operating environment and the behaviour of the host vehicle or user. No single technique is capable of providing reliable and accurate positioning in all contexts. In order to operate reliably across different contexts, a multi-sensor navigation system is required to detect its operating context and reconfigure the techniques accordingly. Specifically, 3DMA GNSS should be selected when the user is in a dense urban environment, not indoors or in an open environment. Algorithms for detecting indoor and outdoor context using GNSS measurements and a hidden Markov model are described and demonstrated.
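The hypothesis-domain integration described above can be sketched simply: both 3DMA ranging and shadow matching score the same grid of candidate positions, the log-scores are summed, and a single joint position is extracted from the combined surface. The grid, score shapes, and centers below are synthetic stand-ins, not real GNSS likelihoods.

```python
import numpy as np

# shared 5x5 grid of candidate positions (units arbitrary)
grid = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)

def gaussian_log_scores(center, sigma):
    """Synthetic stand-in for a candidate-position log-likelihood surface."""
    d2 = np.sum((grid - center) ** 2, axis=1)
    return -d2 / (2.0 * sigma**2)

# pretend outputs of the two techniques (centers/sigmas invented)
ranging_scores = gaussian_log_scores(np.array([1.0, 3.0]), sigma=1.5)
shadow_scores = gaussian_log_scores(np.array([2.0, 2.0]), sigma=1.0)

# hypothesis-domain: combine scores first, then extract one joint solution
joint = ranging_scores + shadow_scores
w = np.exp(joint - joint.max())
w /= w.sum()
position = w @ grid   # score-weighted mean over the candidate grid
print(position)
```

The joint solution lands between the two techniques' individual optima, pulled toward the tighter (lower-sigma) surface, which is the benefit over averaging two separately extracted positions.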

    From data acquisition to data fusion : a comprehensive review and a roadmap for the identification of activities of daily living using mobile devices

    This paper reviews the state of the art in sensor fusion techniques applied to the sensors embedded in mobile devices, as a means to help identify the mobile device user's daily activities. Sensor data fusion techniques are used to consolidate the data collected from several sensors, increasing the reliability of the algorithms for the identification of the different activities. However, mobile devices have several constraints, e.g., low memory, low battery life and low processing power, and some data fusion techniques are not suited to this scenario. The main purpose of this paper is to present an overview of the state of the art and to identify examples of sensor data fusion techniques that can be applied to the sensors available in mobile devices, aiming to identify activities of daily living (ADLs).
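One family of techniques suited to the constraints listed above is lightweight feature-level fusion: cheap statistics from each sensor's window are combined with a fixed weighting before a simple rule is applied. The weights, thresholds, and activity labels below are invented for illustration only.

```python
import math

def magnitude(samples):
    """Per-sample magnitude of (x, y, z) readings."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

def classify(accel_window, gyro_window):
    accel_var = variance(magnitude(accel_window))
    gyro_var = variance(magnitude(gyro_window))
    score = 0.7 * accel_var + 0.3 * gyro_var   # weighted feature-level fusion
    return "walking" if score > 0.5 else "resting"

# synthetic windows: gravity only vs. an oscillating vertical component
still_accel = [(0.0, 0.0, 9.8)] * 20
moving_accel = [(0.0, 0.0, 9.8 + (i % 4)) for i in range(20)]
still_gyro = [(0.0, 0.0, 0.0)] * 20

print(classify(still_accel, still_gyro), classify(moving_accel, still_gyro))
```

Pure-Python arithmetic over short windows keeps memory and CPU cost negligible, which is the point of choosing fusion techniques that fit mobile-device constraints.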

    Fusion of Video and Multi-Waveform FMCW Radar for Traffic Surveillance

    Modern frequency modulated continuous wave (FMCW) radar technology provides the ability to modify the system transmission frequency as a function of time, which in turn provides the ability to generate multiple output waveforms from a single radar unit. Current low-power multi-waveform FMCW radar techniques lack the ability to reliably associate measurements from the various waveform sections in the presence of multiple targets and multiple false detections within the field-of-view. Two approaches are developed here to address this problem. The first approach takes advantage of the relationships between the waveform segments to generate a weighting function for candidate combinations of measurements from the waveform sections. This weighting function is then used to choose the best candidate combinations to form polar-coordinate measurements. Simulations show that this approach provides a ten to twenty percent increase in the probability of correct association over the current approach while reducing the number of false alarms generated in the process, but still fails to form a measurement if a detection from a waveform section is missing. The second approach models the multi-waveform FMCW radar as a set of independent sensors and uses distributed data fusion to fuse estimates from those individual sensors within a tracking structure. Tracking in this approach is performed directly with the raw frequency and angle measurements from the waveform segments. This removes the need for data association between the measurements from the individual waveform segments. A distributed data fusion model is used again to modify the radar tracking systems to include a video sensor to provide additional angular and identification information into the system. The combination of the radar and vision sensors, as an end result, provides an enhanced roadside tracking system.
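The first approach can be sketched with the classic up/down-chirp relationships: each pairing of an up-chirp and a down-chirp detection implies a range and a velocity, and a weighting function scores each candidate combination by its consistency with a separate Doppler-only segment. The scale factors, beat frequencies, and weighting form below are illustrative assumptions, not the paper's actual waveform design.

```python
import itertools

def implied_range_velocity(f_up, f_down, k_range=1.0, k_doppler=1.0):
    # up-chirp beat:   f_up   = k_range * R - k_doppler * v
    # down-chirp beat: f_down = k_range * R + k_doppler * v
    R = (f_up + f_down) / (2.0 * k_range)
    v = (f_down - f_up) / (2.0 * k_doppler)
    return R, v

def weight(f_up, f_down, cw_dopplers):
    """High when some CW detection confirms the pairing's implied Doppler."""
    _, v = implied_range_velocity(f_up, f_down)
    return max(1.0 / (1.0 + abs(v - d)) for d in cw_dopplers)

up = [9.0, 19.0]     # up-chirp beat frequencies (two targets)
down = [11.0, 21.0]  # down-chirp beat frequencies
cw = [1.0, 1.0]      # Doppler-only detections (both targets at v ≈ 1)

# score every candidate pairing and keep the best-weighted combinations
pairs = sorted(itertools.product(up, down),
               key=lambda p: weight(p[0], p[1], cw), reverse=True)
best = pairs[:2]
print(best)
```

Cross-target "ghost" pairings imply velocities no CW detection supports, so they receive low weights; this is the mechanism that raises the probability of correct association while suppressing false alarms.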