
    Information-Aware Guidance for Magnetic Anomaly based Navigation

    In the absence of an absolute positioning system, such as GPS, autonomous vehicles are subject to an accumulation of positional error that can interfere with reliable performance. Improved navigational accuracy without GPS enables vehicles to achieve a higher degree of autonomy and reliability, both in terms of decision making and safety. This paper details two navigation systems for autonomous agents that use magnetic field anomalies to localize themselves within a map; both techniques exploit the information content of the environment in distinct ways and aim to reduce localization uncertainty. The first method is based on a nonlinear observability metric of the vehicle model, while the second is an information-theoretic technique that minimizes the expected entropy of the system. These conditions are used to design guidance laws that minimize the localization uncertainty; both are verified in simulation, and hardware experiments are presented for the observability approach. Comment: 2022 International Conference on Intelligent Robots and Systems, October 23 to 27, 2022, Kyoto, Japan.
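The expected-entropy idea can be sketched on a 1-D anomaly map: hypothesize where the vehicle would end up after each candidate move, simulate the anomaly reading there, Bayes-update the position belief, and average the resulting entropies. This is a toy illustration under assumptions of my own (a discrete grid, a noise-free wrap-around motion model, Gaussian measurement noise), not the paper's guidance law:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def expected_entropy_after_move(belief, field, move, noise_std=0.1):
    """Expected posterior entropy of a grid position belief after moving
    `move` cells and then measuring the local magnetic anomaly."""
    shifted = np.roll(belief, move)              # crude motion model (assumed)
    h = 0.0
    for j, w in enumerate(shifted):
        if w == 0.0:
            continue
        z = field[j]                             # hypothesized noise-free reading
        lik = np.exp(-0.5 * ((field - z) / noise_std) ** 2)
        post = shifted * lik
        post /= post.sum()
        h += w * entropy(post)                   # average over hypotheses
    return h

def best_move(belief, field, moves=(-1, 0, 1), noise_std=0.1):
    """Guidance rule: steer toward the most informative cell."""
    return min(moves,
               key=lambda m: expected_entropy_after_move(belief, field, m, noise_std))
```

On a map with distinctive anomaly values, any measurement collapses a uniform belief sharply, so the expected posterior entropy is far below the prior entropy; on a flat map all moves would look equally uninformative.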

    An advanced Bayesian model for the visual tracking of multiple interacting objects

    Visual tracking of multiple objects is a key component of many vision-based systems. While there are reliable algorithms for tracking a single object in constrained scenarios, object tracking is still a challenge in uncontrolled situations involving multiple interacting objects with complex dynamics. In this article, a novel Bayesian model for tracking multiple interacting objects in unrestricted situations is proposed. This is accomplished by means of an advanced object dynamic model that predicts possible interactive behaviors, which in turn depend on the inference of potential object-occlusion events. The proposed tracking model can also handle false and missing detections, which are typical of visual object detectors operating in uncontrolled scenarios. In addition, a Rao-Blackwellization technique has been used to improve the accuracy of the estimated object trajectories, a fundamental aspect in multiple-object tracking due to the high dimensionality of the problem. Excellent results have been obtained on a publicly available database, demonstrating the efficiency of the proposed approach.
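The Rao-Blackwellization idea — sample only the discrete part of the state (here, an occlusion mode) and solve the conditionally linear-Gaussian part exactly with a Kalman filter per particle — can be sketched for a scalar track. All the specifics (random-walk dynamics, occlusion probability, flat clutter likelihood) are illustrative assumptions, not the article's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbpf_step(particles, z, q=0.1, r=0.5, p_occl=0.2, clutter=0.05):
    """One Rao-Blackwellized step for a scalar track: sample each
    particle's discrete 'occluded' mode, then update its Gaussian
    (mean, var) exactly with a 1-D Kalman filter. Occluded particles
    skip the update and get an assumed flat clutter likelihood."""
    new, weights = [], []
    for mean, var, _ in particles:
        occluded = rng.random() < p_occl        # sampled discrete mode
        mean_p, var_p = mean, var + q           # Kalman predict (random walk)
        if occluded:
            new.append((mean_p, var_p, True))
            weights.append(clutter)             # measurement treated as clutter
        else:
            s = var_p + r                       # innovation variance
            k = var_p / s                       # Kalman gain
            new.append((mean_p + k * (z - mean_p), (1 - k) * var_p, False))
            weights.append(np.exp(-0.5 * (z - mean_p) ** 2 / s)
                           / np.sqrt(2 * np.pi * s))
    w = np.array(weights)
    w /= w.sum()
    idx = rng.choice(len(new), size=len(new), p=w)   # multinomial resampling
    return [new[i] for i in idx]
```

Because the continuous state is marginalized analytically, the particles only have to cover the low-dimensional discrete mode, which is exactly why Rao-Blackwellization helps in high-dimensional tracking problems.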

    Particle-Filter-Based Intelligent Video Surveillance System

    In this study, an intelligent video surveillance (IVS) system is designed based on the particle filter. The designed IVS system can gather information on the number of persons in an area and on the area's hot spots. First, a Gaussian mixture background model is used to detect moving objects by background subtraction. A moving object appearing at the margin of the video frame is considered a new person, and a new particle filter is assigned to track that person once detected. A particle filter is canceled when the corresponding tracked person leaves the video frame. Moreover, a Kalman filter is used to estimate a person's position while the person is occluded. Information on the number of persons in the area and on hot spots is gathered by tracking persons in the video frame. Finally, a user interface is designed to feed the gathered information back to users of the IVS system. The proposed IVS system can reduce the workload of security guards; moreover, through hot-spot analysis, a business operator can understand customer habits in order to plan traffic flow and adjust product placement, improving the customer experience.
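The per-person tracking loop described above can be sketched as a bootstrap particle filter: predict the particles with a motion model, weight them against the detector's (x, y) output, and resample. The random-walk motion, Gaussian likelihood, and noise levels here are assumptions for illustration, not the paper's tuned system:

```python
import numpy as np

rng = np.random.default_rng(1)

def pf_step(particles, detection, motion_std=2.0, meas_std=3.0):
    """One bootstrap particle-filter step for a single tracked person:
    random-walk predict, Gaussian weighting against the detected (x, y)
    position, then multinomial resampling."""
    particles = particles + rng.normal(0.0, motion_std, particles.shape)  # predict
    d2 = np.sum((particles - detection) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / meas_std ** 2)                                 # weight
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)            # resample
    return particles[idx]
```

In the full system one such filter would be spawned per person entering the frame and canceled on exit, with a Kalman filter bridging the frames in which the detector reports nothing due to occlusion.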

    Multiple-hypothesis vision-based landing autonomy

    Unmanned aerial vehicles (UAVs) need humans in the mission loop for many tasks, and landing is one task that typically involves a human pilot. This is because of the complexity of the maneuver itself and of flight-critical factors such as recognition of a landing zone, collision avoidance, assessment of landing sites, and the decision to abort the maneuver. Another critical aspect is the reliance of UAVs on GPS (global positioning system). GPS is not a reliable solution for landing in some scenarios (e.g., delivering a package in an urban area, or a surveillance UAV returning to its home ship under signal jamming), and a landing based solely on GPS severely restricts the UAV operation envelope. Vision is promising for fully autonomous landing because it is a rich-sensing, light, affordable device that functions without any external resource. Although vision is a powerful tool for autonomous landing, using vision for state estimation requires extensive consideration. First, vision-based landing faces the problem of occlusion: a target detected at high altitude leaves the field of view at lower altitudes as the vehicle descends, while a small visual target cannot be recognized from high altitude. Second, standard filtering methods such as the extended Kalman filter (EKF) face difficulty due to the complex dynamics of the measurement error, which arises from the discrete pixel space, the conversion from pixel to physical units, the complex camera model, and the complexity of detection algorithms. The vision sensor produces a varying number of measurements with each image, and the measurements may include false positives. Moreover, the estimation system is heavily tasked in realistic conditions: the landing site may be moving, tilted, or close to an obstacle, and there may be more than one available landing site.
In addition to assessing these conditions, quantifying the confidence of the estimates is also a task of the vision system, and the decisions to initiate, continue, or abort the mission are made based on the estimated states and their confidence. A system that handles these issues and consistently produces a navigation solution while the vehicle lands removes one of the limitations on autonomous UAV operation. This thesis presents a novel state estimation system for UAV landing. In this system, vision data is used both to estimate the state of the vehicle and to map the state of the landing target (position, velocity, and attitude) within the framework of simultaneous localization and mapping (SLAM). Using the SLAM framework, the system becomes resilient to a loss of GPS and other sensor failures. A novel vision algorithm that detects a portion of the marker is developed, and the stochastic properties of the algorithm are studied. This algorithm extends the detectable range of the vision system for any known marker; however, it produces a highly nonlinear, non-Gaussian, and multi-modal error distribution, and a naive implementation of filters would not estimate the states accurately. A vision-aided navigation algorithm is therefore derived within extended Kalman particle filter (PF-EKF) and Rao-Blackwellized particle filter (RBPF) frameworks, in addition to a standard EKF framework. These multiple-hypothesis approaches not only deal well with the highly nonlinear and non-Gaussian distribution of the vision measurement errors but also result in numerically stable filters. The computational cost is reduced compared to a naive particle filter implementation, and these algorithms run in real time. The system is validated through numerical simulation, image-in-the-loop simulation, and flight tests. Ph.D. thesis.
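The difficulty of linearizing a pixel measurement that the abstract alludes to can be made concrete with a minimal EKF update against a pinhole projection. Everything here (the relative-position state, focal length, noise values, and the central-difference Jacobian) is an assumed sketch, not the thesis's filter:

```python
import numpy as np

def ekf_vision_update(x, P, pix, f_px=500.0, R_px=4.0):
    """EKF update of a relative-position state x = [X, Y, Z] (target in the
    camera frame) from one pixel detection, linearizing the pinhole
    projection h(x) = f * [X/Z, Y/Z] with a central-difference Jacobian."""
    def h(s):
        return f_px * np.array([s[0] / s[2], s[1] / s[2]])
    H = np.zeros((2, 3))
    eps = 1e-6
    for i in range(3):                      # numerical Jacobian of h at x
        d = np.zeros(3)
        d[i] = eps
        H[:, i] = (h(x + d) - h(x - d)) / (2 * eps)
    S = H @ P @ H.T + R_px * np.eye(2)      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ (pix - h(x))
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

A single pixel constrains only the bearing, not the depth Z, which is one reason the thesis turns to multiple-hypothesis filters once the error distribution becomes multi-modal.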

    Autonomous Flight in Unknown Indoor Environments

    http://multi-science.metapress.com/content/80586kml376k2711/
    This paper presents our solution for enabling a quadrotor helicopter, equipped with a laser rangefinder sensor, to autonomously explore and map unstructured and unknown indoor environments. While these capabilities are already commodities on ground vehicles, air vehicles seeking the same performance face unique challenges. In this paper, we describe the difficulties in achieving fully autonomous helicopter flight, highlighting the differences between ground robots and helicopters that make it difficult to reuse algorithms developed for ground robots. We then provide an overview of our solution to the key problems, including a multilevel sensing and control hierarchy, a high-speed laser scan-matching algorithm, an EKF for data fusion, a high-level SLAM implementation, and an exploration planner. Finally, we show experimental results demonstrating the helicopter's ability to navigate accurately and autonomously in unknown environments. Funding: National Science Foundation (U.S.) (NSF Division of Information and Intelligent Systems, grant #0546467); United States Army Research Office (ARO MAST CTA); Singapore Armed Forces.
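The core of a scan-matching step is an alignment search: find the rigid transform that best overlays the current laser scan on a reference. A drastically simplified 1-D version (exhaustive search over translations only, squared point-to-point cost) shows the shape of the computation; the real algorithm searches over rotation as well and uses far faster lookup structures:

```python
import numpy as np

def match_scan(ref, scan, shifts=np.linspace(-1.0, 1.0, 81)):
    """Toy 1-D scan matching: exhaustively search candidate translations
    and return the one minimizing the summed squared distance between
    the shifted scan points and the reference scan points."""
    def cost(s):
        return np.sum((scan + s - ref) ** 2)
    return min(shifts, key=cost)
```

Chaining such relative corrections at high rate is what lets the vehicle hold position between the slower SLAM updates.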

    An Information Theoretic Approach to Interacting Multiple Model Estimation for Autonomous Underwater Vehicles

    Accurate and robust navigation for autonomous underwater vehicles (AUVs) requires the fundamental task of position estimation in a variety of conditions. Additionally, the U.S. Navy prefers systems that do not depend on external beacon systems such as the global positioning system (GPS), since these are subject to jamming and spoofing and can reduce operational effectiveness. Current methodologies such as Terrain-Aided Navigation (TAN) use exteroceptive imaging sensors to build a local reference position estimate and are not useful when those sensors are out of range. What is needed is a set of navigation filters, each of which can be more effective depending on the mission conditions. This thesis investigates how to combine multiple navigation filters to provide a more robust AUV position estimate. The solution presented blends two different filtering methodologies using an interacting multiple model (IMM) estimation approach based on an information-theoretic framework. The first filter is a model-based extended Kalman filter (EKF) that is effective under dead reckoning (DR) conditions. The second is a particle filter approach to Active Terrain-Aided Navigation (ATAN) that is appropriate when in sensor range. Using data collected at Lake Crescent, Washington, each navigation filter is developed and evaluated, and we then demonstrate how an IMM information-theoretic approach can be used to blend the approaches to improve position and orientation estimation. Lieutenant, United States Navy. Approved for public release; distribution is unlimited.
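The standard IMM cycle that such a blend builds on — mix the model-conditioned estimates, run each filter, reweight the models by their measurement likelihoods, and combine — can be sketched for a scalar state. The two models below (a smooth "dead-reckoning-like" one and an agile "terrain-aided-like" one), their noise values, and the transition matrix are all illustrative assumptions:

```python
import numpy as np

def imm_step(means, vars_, mu, z, Pi, q=(0.01, 0.5), r=(2.0, 0.1)):
    """One scalar IMM cycle over two models: mix, filter, reweight, combine.
    Pi[i, j] is the probability of switching from model i to model j."""
    means, vars_, mu = map(np.asarray, (means, vars_, mu))
    c = Pi.T @ mu                                   # predicted mode probabilities
    mix = Pi * mu[:, None] / c[None, :]             # mix[i, j] = P(was i | now j)
    m0 = mix.T @ means                              # mixed initial means
    P0 = np.array([np.sum(mix[:, j] * (vars_ + (means - m0[j]) ** 2))
                   for j in range(2)])              # mixed initial variances
    new_m, new_P, lik = np.empty(2), np.empty(2), np.empty(2)
    for j in range(2):                              # per-model Kalman filter
        mp, Pp = m0[j], P0[j] + q[j]                # predict (random-walk state)
        s = Pp + r[j]                               # innovation variance
        k = Pp / s                                  # Kalman gain
        new_m[j] = mp + k * (z - mp)
        new_P[j] = (1 - k) * Pp
        lik[j] = np.exp(-0.5 * (z - mp) ** 2 / s) / np.sqrt(2 * np.pi * s)
    mu_new = c * lik                                # mode-probability update
    mu_new /= mu_new.sum()
    return new_m, new_P, mu_new, float(mu_new @ new_m)
```

When the incoming measurements match one model's assumptions better, its likelihood drives the mode probability up, which is precisely the mechanism used to hand over between the DR filter and the terrain-aided filter.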

    Modeling and Control for Vision Based Rear Wheel Drive Robot and Solving Indoor SLAM Problem Using LIDAR

    To achieve the ambitious long-term goal of a fleet of cooperating Flexible Autonomous Machines operating in an uncertain Environment (FAME), this thesis addresses several critical modeling, design, and control objectives for rear-wheel-drive ground vehicles. One central objective was to show how to build a low-cost, multi-capability robot platform that can be used for conducting FAME research. A TFC-KIT car chassis was augmented to provide a suite of substantive capabilities. The augmented vehicle (FreeSLAM Robot) costs less than $500 but offers the capability of commercially available vehicles costing over $2,000. All demonstrations presented involve the rear-wheel-drive FreeSLAM robot. The following summarizes the key hardware demonstrations presented and analyzed: (1) cruise (v, ) control along a line, (2) cruise (v, ) control along a curve, (3) planar (x, y) Cartesian stabilization for a rear-wheel-drive vehicle, (4) finishing the track with a camera pan-tilt structure in minimum time, (5) finishing the track without a camera pan-tilt structure in minimum time, (6) vision-based tracking performance at different cruise speeds vx, (7) vision-based tracking performance with different fixed camera look-ahead distances L, (8) vision-based tracking performance with different delays Td from the vision subsystem, (9) a manually remote-controlled robot performing indoor SLAM, and (10) an autonomously line-guided robot performing indoor SLAM. For most cases, hardware data is compared with, and corroborated by, model-based simulation data. In short, the thesis uses a low-cost, self-designed rear-wheel-drive robot to demonstrate many capabilities that are critical to reaching the longer-term FAME goal. Masters Thesis, Electrical Engineering, 201

    Recursive Bayesian inference on stochastic differential equations

    This thesis is concerned with recursive Bayesian estimation of non-linear dynamical systems, which can be modeled as discretely observed stochastic differential equations. The recursive real-time estimation algorithms for these continuous-discrete filtering problems are traditionally called optimal filters, and the algorithms for recursively computing the estimates based on batches of observations are called optimal smoothers. In this thesis, new practical algorithms for approximate and asymptotically optimal continuous-discrete filtering and smoothing are presented. The mathematical approach of this thesis is probabilistic, and the estimation algorithms are formulated in terms of Bayesian inference. This means that the unknown parameters, the unknown functions, and the physical noise processes are treated as random processes in the same joint probability space. The Bayesian approach provides a consistent way of computing the optimal filtering and smoothing estimates, which are optimal given the model assumptions, and a consistent way of analyzing their uncertainties. The formal equations of the optimal Bayesian continuous-discrete filtering and smoothing solutions are well known, but exact analytical solutions are available only for linear Gaussian models and a few other restricted special cases. The main contributions of this thesis are to show how the recently developed discrete-time unscented Kalman filter, the particle filter, and the corresponding smoothers can be applied in the continuous-discrete setting. The equations for the continuous-time unscented Kalman-Bucy filter are also derived. The estimation performance of the new filters and smoothers is tested using simulated data. Continuous-discrete filtering based solutions are also presented to the problems of tracking an unknown number of targets, estimating the spread of an infectious disease, and predicting an unknown time series.

    Stereo Visual SLAM for Mobile Robots Navigation

    This thesis focuses on the combination of mobile robotics and computer vision, with the goal of developing methods that allow a mobile robot to localize itself within its environment while building a map of it, using only a set of images as input. This problem is known as visual SLAM (Simultaneous Localization And Mapping) and remains open despite the great research effort devoted to it in recent years. Specifically, in this thesis we use stereo cameras to capture two images simultaneously from slightly different positions, thereby providing 3D information directly. Among robot localization problems, this thesis addresses two: robot tracking and simultaneous localization and mapping (SLAM). The former does not take the map of the environment into account; instead, it computes the robot's trajectory by incrementally composing the estimates of its motion between consecutive time instants. When images are used to compute this trajectory, the problem is called "visual odometry", and it is easier to solve than visual SLAM; in fact, it is often integrated as part of a complete SLAM system. This thesis contributes two visual odometry systems: one based on an efficient closed-form solution, and another based on a non-linear optimization process that implements a new method for fast outlier detection and rejection. SLAM methods, in turn, also build a map of the environment in order to substantially improve the robot's localization, thereby avoiding the error accumulation incurred by visual odometry.
In addition, the resulting map can be used to cope with demanding situations such as recovering the robot's localization after it gets lost, or performing global localization. This thesis presents two complete visual SLAM systems. One is implemented within the framework of non-parametric probabilistic filters, while the other is based on a new relative bundle adjustment method that has been integrated with recent computer vision techniques. Another contribution of this thesis is the publication of two datasets containing stereo images captured in unmodified urban environments, together with a GPS-based estimate of the robot's true path (the "ground truth"). These datasets serve as a benchmark for validating visual odometry and SLAM methods.
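The incremental composition at the heart of visual odometry can be shown in the plane: each frame-to-frame estimate is a relative motion in the robot's current frame, and chaining them yields the global pose (along with the accumulated drift the abstract warns about). A minimal SE(2) sketch, with the (x, y, theta) convention assumed:

```python
import numpy as np

def compose(pose, delta):
    """Compose a global SE(2) pose (x, y, theta) with a relative motion
    (dx, dy, dtheta) expressed in the robot's current frame, as visual
    odometry does when chaining frame-to-frame estimates."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * np.cos(th) - dy * np.sin(th),
            y + dx * np.sin(th) + dy * np.cos(th),
            (th + dth + np.pi) % (2 * np.pi) - np.pi)   # wrap to (-pi, pi]
```

Driving a closed square returns the composed pose to the start in the noise-free case; with noisy per-step estimates the small errors compound instead of canceling, which is exactly the drift that loop closure in full SLAM corrects.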

    Globally-Coordinated Locally-Linear Modeling of Multi-Dimensional Data

    This thesis considers the problem of modeling and analyzing continuous, locally-linear, multi-dimensional spatio-temporal data. Our work extends previously reported theoretical work on the global coordination model to the temporal analysis of continuous, multi-dimensional data. We have developed algorithms for time-varying data analysis and used them in full-scale, real-world applications. The applications demonstrated in this thesis include tracking, synthesis, recognition, and retrieval of dynamic objects based on their shape, appearance, and motion. The proposed approach has advantages over existing approaches to analyzing complex spatio-temporal data. Experiments show that the new modeling features of our approach improve the performance of existing approaches in many applications. In object tracking, our approach is the first to track nonlinear appearance variations by using a low-dimensional representation of the appearance change in globally-coordinated linear subspaces. In dynamic texture synthesis, we are able to model non-stationary dynamic textures, which cannot be handled by any existing approach. In human motion synthesis, we show that realistic synthesis can be performed without using specific transition points or key frames.