10 research outputs found

    Depth-Camera-Aided Inertial Navigation Utilizing Directional Constraints.

    Full text link
    This paper presents a practical yet effective solution for integrating an RGB-D camera and an inertial sensor to handle the depth dropouts that frequently occur in outdoor environments due to the camera's short detection range and sunlight interference. In depth-dropout conditions, only partial 5-degree-of-freedom pose information (attitude and position with an unknown scale) is available from the RGB-D sensor. To enable continuous fusion with the inertial solutions, the scale-ambiguous position is cast into a directional constraint on the vehicle motion, which is, in essence, an epipolar constraint in multi-view geometry. Unlike other visual navigation approaches, this can effectively reduce the drift in the inertial solutions without delay and even under small-parallax motion. When a depth image is available, a window-based feature map is maintained to compute the RGB-D odometry, which is then fused with the inertial outputs in an extended Kalman filter framework. Flight results from indoor and outdoor environments, as well as public datasets, demonstrate the improved navigation performance of the proposed approach.
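    As a rough illustration of the core idea (a sketch under assumptions, not the paper's implementation; all names and values are illustrative), a scale-ambiguous translation measurement can be reduced to a directional constraint by penalizing the component of the INS-predicted displacement orthogonal to the measured direction:

```python
import numpy as np

def directional_constraint(p_prev, p_curr, d_meas):
    """Residual and Jacobian for a directional (epipolar-style) constraint:
    the INS displacement should be parallel to the unit direction measured
    by the RGB-D camera when depth (and hence scale) is unavailable.
    Illustrative sketch only."""
    dp = p_curr - p_prev                    # INS-predicted displacement
    d = d_meas / np.linalg.norm(d_meas)     # scale-free direction from vision
    P_orth = np.eye(3) - np.outer(d, d)     # projector orthogonal to d
    r = P_orth @ dp                         # residual: off-direction motion
    H = P_orth                              # dr/dp_curr (and -P_orth w.r.t. p_prev)
    return r, H
```

    The residual r and Jacobian H can then drive a standard EKF update, correcting drift in the directions orthogonal to the observed motion without needing the unknown scale.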

    Improving Scan Registration Methods Using Secondary Point Data Channels

    Get PDF
    Autonomous vehicle technology has advanced significantly in recent years, and these vehicles are poised to make major strides into everyday use. Autonomous vehicles have already entered military and commercial use, performing the dirty, dull, and dangerous tasks that humans do not want to, or cannot, perform. Any complex autonomy task for a mobile robot requires a method to map the environment and to localize within that environment. In unknown environments, when the mapping and localization stages are performed simultaneously, this is known as Simultaneous Localization and Mapping (SLAM). One key technology used to solve the SLAM problem involves matching sensor data in the form of point clouds. Scan registration attempts to find the transformation between two point clouds, or scans, which results in the optimal overlap of the scan information. One of the major drawbacks of existing approaches is an over-reliance on geometric features and a well-structured environment in order to perform the registration. When insufficient geometric features are present to constrain the optimization, this is known as geometric degeneracy, and it can be a common problem in typical environments. The reliability of these methods is of vital importance for improving the robustness of autonomous vehicles operating in uncontrolled environments.

    This thesis presents methods to improve upon existing scan registration methods by incorporating secondary information into the registration process. Three methods are presented: Ground Segmented Iterative Closest Point (GSICP), Color Clustered Normal Distribution Transform (CCNDT), and Multi Channel Generalized Iterative Closest Point (MCGICP). Each method provides a unique addition to the scan registration literature and has its own set of benefits, limitations, and uses.

    GSICP segments the ground plane from a 3D scan and then compresses the scan into a 2D plane. The points are then classified as either ground-adjacent or non-ground-adjacent. Using this classification, a class-constrained ICP registration is performed in which only points of the same class can be corresponded; in essence, the method creates simulated edges for the registration to align. GSICP improves accuracy and robustness in sparse unstructured environments such as forests or rolling hills. When compared to existing methods on the Ford Vision and Lidar Dataset, GSICP shows a tighter variance in error values as well as a significant improvement in overall error. The method is also highly computationally efficient, running registrations on a low-power system twice as fast as GICP, the next most accurate method. However, it requires the input scans to have specific characteristics, such as a defined ground plane and spatially separated objects in the environment. GSICP is ideally suited to sparse outdoor environments and was used with great success by the University of Waterloo's entry in the NASA Sample Return Robot Challenge.

    CCNDT is a more adaptable method that is widely applicable to many common environments. It uses point cloud data that has been colorized either from an RGB-D camera or a joint LIDAR and camera system. The method begins by clustering the points in the scan based on color and then uses the clusters to generate colored Gaussian distributions. These distributions are used to calculate a color-weighted distribution-to-distribution cost between all pairs of distributions. Exhaustively matching all pairs of distributions creates a smooth, continuous cost function that can be optimized efficiently. Experimental validation of CCNDT on the Ford and Freiburg datasets shows that the method can perform 3D scan registrations more efficiently, three times faster on average than existing methods, and is capable of accurately registering any scans that have sufficient color variation to enable color clustering.

    MCGICP is a generalized approach that is capable of performing robustly in almost any situation. It uses secondary point information, such as color or intensity, to augment the GICP method. MCGICP calculates a spatial covariance at each point such that the covariance normal to the local surface is set to a small value, indicating high confidence in matching surfaces, while the covariance tangent to the surface is determined from the secondary information distribution. Representing the covariance in both the tangential and normal directions causes non-trivial cost terms to be present in all directions. Additionally, the correspondence of points between scans is modified to use a higher-dimensional search space, which incorporates the secondary descriptor channels as well as the covariance information at each point and allows more robust point correspondences to be determined. The registration process can therefore converge more quickly due to the incorporation of additional information. MCGICP is capable of performing highly accurate scan registrations in almost any environmental situation. The method is validated using a diverse set of data, including the Ford and Freiburg datasets as well as a challenging degenerate dataset, and is shown to improve accuracy and reliability on all three. MCGICP is robust to most common degeneracies because it incorporates multiple channels of information in an integrated approach that is reliable even in the most challenging cases.

    The results presented in this work demonstrate clear improvements over existing scan registration methods. By incorporating secondary information into the scan registration problem, more robust and accurate solutions can be obtained. Each method presented has its own unique benefits, which are valuable for a specific set of applications and environments.
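    The class-constrained matching idea behind GSICP can be illustrated with a small sketch (hypothetical code, not from the thesis): nearest-neighbour correspondences are restricted to points that share the same ground-adjacency label, so ground points can never be matched to object points.

```python
import numpy as np
from scipy.spatial import cKDTree

def class_constrained_correspondences(src_pts, src_cls, tgt_pts, tgt_cls):
    """For each source point, find the nearest target point with the SAME
    class label (e.g. ground-adjacent vs. non-ground-adjacent), mimicking
    the class-constrained matching step described for GSICP. Sketch only."""
    pairs = []
    for c in np.unique(src_cls):
        tgt_idx = np.flatnonzero(tgt_cls == c)     # targets of this class
        if tgt_idx.size == 0:
            continue                               # no valid matches for class c
        tree = cKDTree(tgt_pts[tgt_idx])           # KD-tree over one class only
        src_idx = np.flatnonzero(src_cls == c)
        _, nn = tree.query(src_pts[src_idx])       # nearest neighbour per source point
        pairs.extend(zip(src_idx, tgt_idx[nn]))    # map back to global indices
    return pairs  # feed into a standard ICP least-squares alignment step
```

    The per-class KD-trees keep the lookup cost comparable to plain ICP while enforcing the constraint that gives the method its simulated edges.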

    Robust localization with wearable sensors

    Get PDF
    Measuring the physical movements of humans and understanding human behaviour is useful in a variety of areas and disciplines. Human inertial tracking is a method that can be leveraged for monitoring complex actions that emerge from interactions between human actors and their environment. An accurate estimation of motion trajectories can support new approaches to pedestrian navigation, emergency rescue, athlete management, and medicine. However, tracking with wearable inertial sensors has several problems that need to be overcome, such as the low accuracy of consumer-grade inertial measurement units (IMUs), the error accumulation problem in long-term tracking, and the artefacts generated by less common movements. This thesis focusses on measuring human movements with wearable head-mounted sensors to accurately estimate the physical location of a person over time. The research consisted of (i) providing an overview of the current state of research for inertial tracking with wearable sensors, (ii) investigating the performance of new tracking algorithms that combine sensor fusion and data-driven machine learning, (iii) eliminating the effect of random head motion during tracking, (iv) creating robust long-term tracking systems with a Bayesian neural network and a sequential Monte Carlo method, and (v) verifying that the system can be applied with changing modes of behaviour, defined as natural transitions from walking to running and vice versa. This research introduces a new system for inertial tracking with head-mounted sensors (which can be placed in, e.g., helmets, caps, or glasses). This technology can be used for long-term positional tracking to explore complex behaviours.
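    The sequential Monte Carlo component of such a tracker can be illustrated with a minimal particle filter over (x, y, heading); the step/heading motion model, the optional position fix, and all noise values below are illustrative assumptions, not the thesis's actual system:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, step_len, d_heading, pos_fix=None):
    """One sequential Monte Carlo update for pedestrian tracking: propagate
    (x, y, heading) particles with a noisy step/heading model, then
    optionally reweight against an absolute position fix. Sketch only."""
    n = len(particles)
    particles[:, 2] += d_heading + rng.normal(0, 0.05, n)   # heading change + noise
    steps = step_len + rng.normal(0, 0.1, n)                # per-particle step noise
    particles[:, 0] += steps * np.cos(particles[:, 2])
    particles[:, 1] += steps * np.sin(particles[:, 2])
    if pos_fix is not None:                                 # e.g. a known landmark passed
        d2 = np.sum((particles[:, :2] - pos_fix) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / 1.0**2)      # Gaussian likelihood, sigma = 1 m
        weights /= weights.sum()
        if 1.0 / np.sum(weights**2) < n / 2:                # effective sample size low?
            idx = rng.choice(n, n, p=weights)               # multinomial resampling
            particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

    In the thesis's setting, the likelihood step would instead come from learned models (e.g. the Bayesian neural network's motion predictions), but the propagate/weight/resample loop is the same.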

    Robust dense visual SLAM using sensor fusion and motion segmentation

    Get PDF
    Visual simultaneous localisation and mapping (SLAM) is an important technique for enabling mobile robots to navigate autonomously within their environments. Using cameras, robots reconstruct a representation of their environment and simultaneously localise themselves within it. A dense visual SLAM system produces a high-resolution, detailed reconstruction of the environment which can be used for obstacle avoidance or semantic reasoning. State-of-the-art dense visual SLAM systems demonstrate robust performance and impressive accuracy in ideal conditions. However, these techniques rely on requirements which limit the extent to which they can be deployed in real applications: fundamentally, they require constant scene illumination, smooth camera motion, and no moving objects in the scene. Overcoming these requirements is not trivial, and significant effort is needed to make dense visual SLAM approaches more robust to real-world conditions. The objective of this thesis is to develop dense visual SLAM systems which are more robust to real-world visually challenging conditions. For this, we leverage sensor fusion and motion segmentation for situations where camera data is unsuitable. The first contribution is a visual SLAM system for the NASA Valkyrie humanoid robot which is robust to the conditions of the robot's operation. It is based on a sensor fusion approach which combines visual SLAM and leg odometry to demonstrate increased robustness to illumination changes and fast camera motion. Second, we research methods for robust visual odometry in the presence of moving objects. We propose a formulation for joint visual odometry and motion segmentation that demonstrates increased robustness in scenes with moving objects compared to state-of-the-art approaches. We then extend this method using inertial information from a gyroscope to compare the contributions of motion segmentation and motion-prior integration for robustness to scene dynamics. As part of this study we provide a dataset recorded in scenes with different numbers of moving objects. In conclusion, we find that both motion segmentation and motion-prior integration are necessary for achieving significantly better results in real-world conditions. While motion priors increase robustness, motion segmentation increases the accuracy of the reconstruction results through filtering of moving objects.
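    A much-simplified stand-in for the motion segmentation component (illustrative only; the thesis formulates segmentation jointly with the odometry rather than as a post-hoc threshold) is to flag pixels whose photometric residual, after warping with the estimated camera motion, is an outlier under a robust noise estimate:

```python
import numpy as np

def segment_dynamic_pixels(residuals, k=2.5):
    """Label pixels as 'moving' when their photometric residual (after
    warping with the estimated camera motion) exceeds a robust,
    MAD-based noise threshold. Simplified sketch, not the thesis method."""
    r = np.abs(residuals)
    mad = np.median(np.abs(r - np.median(r)))   # robust spread estimate
    sigma = 1.4826 * mad                        # MAD -> std under Gaussian noise
    return r > k * sigma                        # boolean mask of dynamic pixels
```

    Masking such pixels out of the reconstruction is what lets motion segmentation improve accuracy, while a gyroscope motion prior mainly keeps the odometry itself from being pulled off by the moving regions.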

    Pose estimation and data fusion algorithms for an autonomous mobile robot based on vision and IMU in an indoor environment

    Get PDF
    Thesis (PhD (Computer Engineering))--University of Pretoria, 2021.

    Autonomous mobile robots have become an active research direction during the past few years, and they are emerging in different sectors, such as companies, industries, hospitals, institutions, agriculture, and homes, to improve services and daily activities. Owing to technological advancement, demand for mobile robots has increased because of the tasks they perform and the services they render, such as carrying heavy objects, monitoring, delivering goods, search and rescue missions, and performing dangerous tasks in places like underground mines. Instead of workers being exposed to hazardous chemicals or environments that could affect their health and put lives at risk, humans are being replaced by mobile robot services. For these reasons, the operation of mobile robots must be enhanced, a process that is assisted through sensors. Sensors are instruments used to collect the data or information that help the robot navigate and localise in its environment. Each sensor type has inherent strengths and weaknesses, so an inappropriate combination of sensors can result in a high cost of sensor deployment with low performance. Despite the potential and prospects of autonomous mobile robots, they have yet to attain optimal performance because of the integral challenges they face, most especially localisation.

    Localisation is one of the fundamental issues encountered in mobile robotics, and its challenging part is estimating the robot's position and orientation, information that can be acquired from sensors and other relevant systems. To tackle the issue of localisation, a good technique is needed to deal with errors, degrading factors, and improper measurements and estimations. Different approaches have been recommended for estimating the position of a mobile robot. Some studies estimated the trajectory of the mobile robot and reconstructed the indoor scene using monocular visual odometry; this approach is not feasible for large zones and complex environments. Radio frequency identification (RFID) technology, on the other hand, provides accuracy and robustness, but the method depends on the distance between the tags and the distance between the tags and the reader. To increase localisation accuracy, the number of RFID tags per unit area has to be increased, so this technique may not yield an economical and easily scalable solution because of the growing number of required tags and the associated cost of their deployment. The Global Positioning System (GPS) is another approach that offers proven results in most scenarios; however, indoor localisation is one of the settings in which GPS cannot be used because the signal strength is not reliable inside a building. Most approaches are unable to precisely localise an autonomous mobile robot even with costly equipment and complex implementations, and most of the devices and sensors either require additional infrastructure or are unsuitable for use in an indoor environment.

    This study therefore proposes using data from vision and inertial sensors, the latter comprising a 3-axis accelerometer and a 3-axis gyroscope, also known as 6 degrees of freedom (6-DOF), to estimate the pose of a mobile robot. Inertial measurement unit (IMU) based tracking provides a fast response, so it can assist vision whenever vision fails due to a loss of visual features. The vision sensor, in turn, helps to overcome the characteristic limitation of acoustic sensors for simultaneous multiple-object tracking; with this merit, vision is capable of estimating pose with respect to the object of interest. A single sensor or system is not reliable enough to estimate the pose of a mobile robot, so data acquired from the sensors and other sources are combined using a data fusion algorithm to estimate position and orientation within a specific environment. The resulting model is more accurate because it balances the strengths of the different sensors, and the information provided through sensor fusion can be used to support more intelligent actions. The proposed algorithms combine data from each of the sensor types to provide the most comprehensive and accurate environmental model possible, using a set of mathematical equations that provide an efficient computational means to estimate the state of a process. This study investigates state estimation methods for determining the state of a desired system that is continuously changing, given some observations or measurements. The performance evaluation of the system shows that integrating multiple sources of information and sensors is necessary. This thesis provides viable solutions to the challenging problem of localisation in autonomous mobile robots through its adaptability, accuracy, robustness, and effectiveness.
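    A minimal sketch of the kind of vision/IMU fusion described, assuming a per-axis constant-velocity state driven by IMU acceleration and corrected by vision position fixes (state layout, noise values, and function names are illustrative, not the thesis's algorithm):

```python
import numpy as np

def ekf_predict(x, P, accel, dt, q=0.1):
    """Propagate a per-axis state [position, velocity] with IMU acceleration
    as the control input. Illustrative small-signal sketch."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])            # how acceleration enters the state
    x = F @ x + B * accel
    P = F @ P @ F.T + q * np.outer(B, B)       # process noise from accel noise
    return x, P

def ekf_update_vision(x, P, z_pos, r=0.05):
    """Correct the state with a position measurement from the vision system."""
    H = np.array([[1.0, 0.0]])                 # vision observes position only
    S = H @ P @ H.T + r                        # innovation covariance
    K = (P @ H.T) / S                          # Kalman gain
    x = x + (K * (z_pos - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

    Between vision fixes, only the predict step runs at the IMU rate; each vision measurement then pulls the drifting inertial estimate back, which is exactly the complementary behaviour the abstract argues for.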

    Indoor Positioning and Navigation

    Get PDF
    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or into a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums, and other public institutions; such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB capabilities), which enable the implementation of various localization algorithms, namely visual localization, inertial navigation systems, and radio localization. For the mapping of indoor environments and the localization of autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized in order to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.

    Feature Papers of Drones - Volume II

    Get PDF
    The present book is divided into two volumes (Volume I: articles 1–23, and Volume II: articles 24–54), which compile the articles and communications submitted to the Topical Collection "Feature Papers of Drones" during the years 2020 to 2022, describing novel or cutting-edge designs, developments, and/or applications of unmanned vehicles (drones). Articles 24–41 are focused on drone applications and emphasize two types. Firstly, articles 24–35 cover applications related to agriculture and forestry, where drone applications dominate all others; these articles review the latest research and future directions for precision agriculture, vegetation monitoring, change monitoring, forestry management, and forest fires. Secondly, articles 36–41 address water and marine applications of drones for ecological and conservation-related purposes, with emphasis on the monitoring of water resources and habitat monitoring. Finally, articles 42–54 look at just a few of the huge variety of potential applications of civil drones from different points of view, including the following: the social acceptance of drone operations in urban areas and its influential factors; 3D reconstruction applications; sensor technologies to either improve the performance of existing applications or open up new working areas; and machine and deep learning developments.

    Detection and Compensation of Degeneracy Cases for IMU-Kinect Integrated Continuous SLAM with Plane Features

    No full text
    Among general geometric primitives, plane-based features are widely used for indoor localization because of their robustness against noise. However, a lack of linearly independent planes can leave the estimation problem ill-constrained; this in turn causes a degenerate case in which not all states can be estimated. To solve this problem, this paper first proposed a degeneracy detection method. A compensation method that fixes the degenerate orientations by projecting in the inertial measurement unit's (IMU) information was then explained. Experiments were conducted using an IMU-Kinect v2 integrated sensor system, which is prone to falling into degenerate cases owing to its narrow field of view. The results showed that the proposed framework could enhance map accuracy through successful detection and compensation of degenerate orientations.
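    The detection idea can be illustrated with a simple observability check (a simplification; the paper's exact criterion may differ): plane features constrain translation only along their normals, so near-zero eigenvalues of the Gram matrix of the observed normals expose degenerate directions that must be filled in from the IMU.

```python
import numpy as np

def detect_translation_degeneracy(normals, eps=1e-3):
    """Flag translation directions that the observed plane normals fail to
    constrain. Illustrative sketch of the eigenvalue-based detection idea."""
    N = np.asarray(normals)                 # k x 3 stack of unit plane normals
    A = N.T @ N                             # 3x3 information-like Gram matrix
    w, V = np.linalg.eigh(A)                # eigenvalues in ascending order
    return [V[:, i] for i in range(3) if w[i] < eps]   # degenerate directions
```

    For example, a scene showing only a floor and one wall yields two well-constrained directions and one near-zero eigenvalue along the corridor axis, which is exactly where a narrow field-of-view sensor such as the Kinect v2 loses observability.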

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    Get PDF
    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions from researchers working in different fields of application and in mathematics, and is available in open access. The collected contributions of this volume have either been published or presented in international conferences, seminars, workshops, and journals after the dissemination of the fourth volume in 2015, or they are new. The contributions in each part of this volume are ordered chronologically. The first part presents some theoretical advances in DSmT, dealing mainly with modified Proportional Conflict Redistribution (PCR) rules of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignments in the fusion of sources of evidence, with their Matlab codes. Because more applications of DSmT have emerged since the appearance of the fourth book in 2015, the second part of this volume covers selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion (including the PCR rules), and networks for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes' theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, the negator of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
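    For readers unfamiliar with the PCR rules mentioned above, a minimal PCR5 sketch for two sources follows (focal elements as frozensets; the example masses are made up). The conjunctive part assigns the product of masses to each non-empty intersection, and each partial conflict is redistributed back to the two conflicting focal elements in proportion to the masses that produced it:

```python
from itertools import product

def pcr5(m1, m2):
    """PCR5 combination of two belief mass assignments, given as dicts
    mapping frozenset focal elements to masses. Illustrative sketch."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:                                   # conjunctive consensus part
            out[inter] = out.get(inter, 0.0) + wa * wb
        else:                                       # partial conflict: redistribute
            s = wa + wb
            if s > 0:
                out[a] = out.get(a, 0.0) + wa**2 * wb / s
                out[b] = out.get(b, 0.0) + wb**2 * wa / s
    return out

# Tiny usage example on the frame {A, B} with hypothetical masses:
A, B = frozenset('A'), frozenset('B')
print(pcr5({A: 0.6, B: 0.4}, {A: 0.7, B: 0.3}))   # masses still sum to 1
```

    Unlike Dempster's rule, no mass is lost to normalization: the conflicting products are returned to the elements that generated them, which is what preserves the behaviour described for the improved PCR5/PCR6 variants.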