
    A multisensor SLAM for dense maps of large scale environments under poor lighting conditions

    This thesis describes the development and implementation of a multisensor large scale autonomous mapping system for surveying tasks in underground mines. The hazardous nature of the underground mining industry has resulted in a push towards autonomous solutions to the most dangerous operations, including surveying tasks. Many existing autonomous mapping techniques rely on approaches to the Simultaneous Localization and Mapping (SLAM) problem which are not suited to the extreme characteristics of active underground mining environments. Our proposed multisensor system has been designed from the outset to address the unique challenges associated with underground SLAM. The robustness, self-containment and portability of the system maximize the potential applications. The multisensor mapping solution proposed as a result of this work is based on a fusion of omnidirectional bearing-only vision-based localization and 3D laser point cloud registration. By combining these two SLAM techniques it is possible to achieve some of the advantages of both approaches – the real-time attributes of vision-based SLAM and the dense, high precision maps obtained through 3D lasers. The result is a viable autonomous mapping solution suitable for application in challenging underground mining environments. A further improvement to the robustness of the proposed multisensor SLAM system is a consequence of incorporating colour information into vision-based localization. Underground mining environments are often dominated by dynamic sources of illumination which can cause inconsistent feature motion during localization. Colour information is utilized to identify and remove features resulting from illumination artefacts and to improve the monochrome-based feature matching between frames. Finally, the proposed multisensor mapping system is implemented and evaluated in both above-ground and underground scenarios. The resulting large scale maps contained a maximum offset error of ±30 mm for mapping tasks with lengths over 100 m.
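The colour-based rejection of illumination artefacts described above can be sketched as follows. This is not the thesis implementation: the HSV thresholds, the patch representation, and the majority rule are all assumptions for illustration. The idea is that a blown-out highlight (e.g. a headlamp) is very bright but nearly colourless, so features centred on such patches are dropped before matching.

```python
# Hypothetical sketch: flag feature patches dominated by a light source.
# A blown-out highlight has high brightness and low colour saturation.

def is_illumination_artefact(patch_rgb, value_thresh=0.9, sat_thresh=0.15):
    """patch_rgb: list of (r, g, b) tuples in [0, 1]."""
    flagged = 0
    for r, g, b in patch_rgb:
        v = max(r, g, b)                               # HSV value (brightness)
        s = 0.0 if v == 0 else (v - min(r, g, b)) / v  # HSV saturation
        if v > value_thresh and s < sat_thresh:
            flagged += 1
    return flagged / len(patch_rgb) > 0.5              # majority of pixels blown out

headlight = [(0.98, 0.97, 0.95)] * 10   # bright, desaturated patch
rock_face = [(0.35, 0.25, 0.18)] * 10   # dim, coloured patch
print(is_illumination_artefact(headlight))  # True
print(is_illumination_artefact(rock_face))  # False
```

Features surviving this filter would then proceed to the usual monochrome descriptor matching between frames.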

    Multisensor navigation systems: a remedy for GNSS vulnerabilities?

    Space-based positioning, navigation, and timing (PNT) technologies, such as the global navigation satellite systems (GNSS), provide position, velocity, and timing information to an unlimited number of users around the world. In recent years, PNT information has become increasingly critical to the security, safety, and prosperity of the world's population, and is now widely recognized as an essential element of the global information infrastructure. Due to its vulnerabilities and line-of-sight requirements, GNSS alone is unable to provide PNT with the required levels of integrity, accuracy, continuity, and reliability. A multisensor navigation approach offers an effective augmentation in GNSS-challenged environments and holds the promise of delivering robust and resilient PNT. Traditionally, sensors such as inertial measurement units (IMUs), barometers, magnetometers, odometers, and digital compasses have been used. However, recent trends have largely focused on image-based, terrain-based, and collaborative navigation to recover the user location. This paper offers a review of the technological advances that have taken place in PNT over the last two decades and discusses various hybridizations of multisensor systems, building upon the fundamental GNSS/IMU integration. The most important conclusion of this study is that, in order to meet the challenging goals of delivering continuous, accurate, and robust PNT to the ever-growing number of users, the hybridization of a suite of different PNT solutions is required.

    A Review of Hybrid Indoor Positioning Systems Employing WLAN Fingerprinting and Image Processing

    Location-based services (LBS) are an important enabling technology. One of the main components of indoor LBS is the indoor positioning system (IPS). An IPS utilizes many existing technologies such as radio frequency, images, and acoustic signals, as well as magnetic, thermal, optical, and other sensors usually installed in a mobile device. The radio-frequency technologies used in IPS are WLAN, Bluetooth, ZigBee, RFID, frequency modulation, and ultra-wideband. This paper explores studies that have combined WLAN fingerprinting and image processing to build an IPS. The studies on combined WLAN fingerprinting and image processing techniques are divided based on the methods used. The first part explains studies that have used WLAN fingerprinting to support image-based positioning. The second part examines works that have used image processing to support WLAN fingerprinting positioning. In the third part, image processing and WLAN fingerprinting are used in combination to build an IPS. Finally, a new concept is proposed for the future development of indoor positioning models based on WLAN fingerprinting, supported by image processing, to address the effects of people's presence around the user and the user-orientation problem.
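As background to the WLAN-fingerprinting side of these hybrid systems, the classic online step is a k-nearest-neighbour match: the measured RSSI vector is compared against an offline radio map and the positions of the k closest fingerprints are averaged. The sketch below is illustrative only; the radio map, access-point count, and k are assumed toy values, not from any surveyed paper.

```python
# Minimal k-NN WLAN fingerprinting sketch (toy radio map of 3 access points).

def knn_position(radio_map, rssi, k=2):
    """radio_map: list of ((x, y), [RSSI per AP]); rssi: measured vector."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nearest = sorted(radio_map, key=lambda fp: dist(fp[1], rssi))[:k]
    xs = [p[0][0] for p in nearest]
    ys = [p[0][1] for p in nearest]
    return (sum(xs) / k, sum(ys) / k)   # centroid of the k nearest fingerprints

radio_map = [
    ((0.0, 0.0), [-40, -70, -80]),
    ((5.0, 0.0), [-55, -55, -75]),
    ((5.0, 5.0), [-70, -45, -60]),
]
print(knn_position(radio_map, [-50, -58, -76], k=2))  # (2.5, 0.0)
```

The people-presence and orientation problems discussed in the paper matter precisely because they perturb the measured RSSI vector and thus this distance computation.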

    Automatic Crack Detection in Built Infrastructure Using Unmanned Aerial Vehicles

    This paper addresses the problem of crack detection, which is essential for the health monitoring of built infrastructure. Our approach includes two stages: data collection using unmanned aerial vehicles (UAVs), and crack detection using histogram analysis. For the data collection, a 3D model of the structure is first created by using laser scanners. Based on the model, geometric properties are extracted to generate the waypoints necessary for navigating the UAV to take images of the structure. The images obtained from overlapping fields of view are then stitched together. The resulting image is clustered by histogram analysis and peak detection, and potential cracks are finally identified using locally adaptive thresholds. The whole process is carried out automatically, so that inspection time is significantly improved while safety hazards are minimised. A prototypical system has been developed for evaluation, and experimental results are included.
    Comment: In proceedings of the 34th International Symposium on Automation and Robotics in Construction (ISARC), pp. 823-829, Taipei, Taiwan, 201
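The histogram-and-threshold stage described above might be sketched as follows. This is not the paper's code: the bin count, window size, and offset are assumed values, and the example operates on a single grey-level row rather than a full stitched image. Cracks appear as pixels markedly darker than their local surroundings.

```python
# Hedged sketch: grey-level histogram (for clustering/peak detection) plus a
# locally adaptive threshold that marks pixels darker than their local mean.

def histogram(pixels, bins=8):
    h = [0] * bins
    for p in pixels:                      # p assumed in [0, 1]
        h[min(int(p * bins), bins - 1)] += 1
    return h

def local_threshold(row, window=3, offset=0.2):
    """Mark pixel i as a crack if darker than its local mean by `offset`."""
    marks = []
    for i in range(len(row)):
        lo, hi = max(0, i - window), min(len(row), i + window + 1)
        mean = sum(row[lo:hi]) / (hi - lo)
        marks.append(row[i] < mean - offset)
    return marks

row = [0.8, 0.8, 0.8, 0.1, 0.8, 0.8, 0.8]   # one dark crack pixel
print(histogram(row))        # [1, 0, 0, 0, 0, 0, 6, 0]
print(local_threshold(row))  # only index 3 is marked
```

In the paper's pipeline the histogram peaks drive the clustering step, and the adaptive threshold is applied per local region of the stitched image rather than per row.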

    Seamless Positioning and Navigation in Urban Environment

    The abstract is in the attachment.

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Multiple Integrated Navigation Sensors for Improving Occupancy Grid FastSLAM

    An autonomous vehicle must accurately observe its location within the environment to interact with objects and accomplish its mission. When its environment is unknown, the vehicle must construct a map detailing its surroundings while using that map to maintain an accurate location. Such a vehicle faces the circularly defined Simultaneous Localization and Mapping (SLAM) problem. However difficult, SLAM is a critical component of autonomous vehicle exploration, with applications to search and rescue. To current knowledge, this research presents the first SLAM solution to integrate stereo cameras, inertial measurements, and vehicle odometry into a Multiple Integrated Navigation Sensor (MINS) path. The implementation combines the MINS path with LIDAR to observe and map the environment using the FastSLAM algorithm. In real-world tests, a mobile ground vehicle equipped with these sensors completed a 140-meter loop around indoor hallways. This SLAM solution produces a path that closes the loop and remains within 1 meter of truth, reducing the error by 92% compared to an image-inertial navigation system and by 79% compared to odometry FastSLAM.
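Occupancy-grid maps of the kind FastSLAM maintains are commonly implemented with per-cell log-odds updates: each LIDAR hit or miss adds a fixed increment, and the cell's occupancy probability is recovered with a logistic function. The sketch below is illustrative only; the sensor-model increments are assumed, not taken from the thesis.

```python
# Minimal log-odds occupancy-cell update, as used in occupancy grid mapping.

import math

L_HIT, L_MISS = 0.85, -0.4   # assumed sensor-model log-odds increments

def update_cell(log_odds, hit):
    """Accumulate one observation into a cell's log-odds."""
    return log_odds + (L_HIT if hit else L_MISS)

def probability(log_odds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

cell = 0.0                                      # prior: unknown, p = 0.5
for observation in [True, True, True, False]:   # three hits, one miss
    cell = update_cell(cell, observation)
print(round(probability(cell), 3))              # 0.896
```

In FastSLAM, each particle carries its own such grid, and the particle weights are computed from how well new scans agree with each particle's map.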

    Rail Track Detection and Projection-Based 3D Modeling from UAV Point Cloud

    The expansion of the railway industry has increased the demand for three-dimensional modeling of railway tracks. Given the increasing development of UAV technology and its practical advantages, this research investigates the detection and 3D modeling of rail tracks using dense point clouds obtained from UAV images. A projection-based approach built on the overall direction of the rail track is proposed in order to generate a 3D model of the railway. To extract the railway lines, the height jump of points within a neighborhood is evaluated to select candidate rail-track points. Then, using the RANSAC algorithm, lines are fitted to these candidate points and the final points belonging to the rail are identified. In the next step, a pre-specified rail-piece model is fitted to the rail points through a projection-based process, and the orientation parameters of the model are determined. These parameters are later refined by fitting a Fourier curve, and finally a continuous 3D model of all the rail tracks is created. The geometric distance of the final model from the rail points is calculated in order to evaluate the modeling accuracy, and the performance of the proposed method is compared with another approach. A median distance of about 3 cm between the produced model and the corresponding point cloud demonstrates the high quality of the proposed 3D modeling algorithm.
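The RANSAC line-fitting step over the candidate rail points could be sketched as follows. The iteration count, inlier tolerance, and test data are assumptions for illustration, and the example works in 2D where the paper's pipeline operates on projected point-cloud coordinates.

```python
# Illustrative RANSAC line fit: repeatedly sample two points, form the line
# a*x + b*y + c = 0 through them, and keep the largest consensus set.

import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    rng = random.Random(seed)               # fixed seed for reproducibility
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        if norm == 0:
            continue
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

rail = [(x * 0.5, 1.0) for x in range(10)]   # collinear rail candidates
noise = [(1.0, 3.0), (2.0, -2.0)]            # spurious height-jump points
print(len(ransac_line(rail + noise)))        # 10 inliers recovered
```

The surviving inliers would then feed the projection-based fitting of the rail-piece model described above.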

    Self-supervised Vector-Quantization in Visual SLAM using Deep Convolutional Autoencoders

    In this paper, we introduce AE-FABMAP, a new self-supervised bag-of-words-based SLAM method. We also present AE-ORB-SLAM, a modified version of the current state-of-the-art BoW-based path planning algorithm. That is, we have used a deep convolutional autoencoder to find loop closures. In bag-of-words visual SLAM, vector quantization (VQ) is considered the most time-consuming part of the SLAM procedure; it is usually performed in the offline phase of the SLAM algorithm using unsupervised algorithms such as k-means++. We have addressed the loop-closure detection part of BoW-based SLAM methods in a self-supervised manner by integrating an autoencoder to perform vector quantization. This approach can increase the accuracy of large-scale SLAM, where plenty of unlabeled data is available. The main advantage of a self-supervised approach is that it can help reduce the amount of labeling. Furthermore, experiments show that autoencoders are far more efficient than semi-supervised methods such as graph convolutional neural networks in terms of speed and memory consumption. We integrated this method into the state-of-the-art long-range appearance-based visual bag-of-words SLAM method FABMAP2, as well as into ORB-SLAM. Experiments demonstrate the superiority of this approach over regular FABMAP2 on indoor and outdoor datasets in all cases, achieving higher accuracy in loop-closure detection and trajectory generation.
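The vector-quantization step at the heart of bag-of-words SLAM can be sketched as follows. Here the trained autoencoder's encoder is stubbed by a fixed linear projection, and the codebook and descriptors are toy values; the point is only the shape of the pipeline: encode each descriptor to a low-dimensional code, assign it to the nearest visual word, and count words into a BoW vector.

```python
# Sketch of the VQ step only; the encoder stands in for a trained autoencoder.

def encode(descriptor, weights):
    """Stub encoder: one linear layer in place of the deep autoencoder."""
    return [sum(w * d for w, d in zip(row, descriptor)) for row in weights]

def quantize(code, codebook):
    """Nearest visual word by squared Euclidean distance."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(range(len(codebook)), key=lambda i: d2(code, codebook[i]))

weights = [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]   # 3-D descriptor -> 2-D code
codebook = [[0.0, 0.0], [1.0, 2.0]]            # two visual words
bow = [0, 0]
for desc in [[0.9, 1.1, 0.8], [0.1, 0.0, 0.1]]:
    bow[quantize(encode(desc, weights), codebook)] += 1
print(bow)  # [1, 1]
```

In the paper's setting, replacing k-means++ codebook construction with autoencoder codes is what moves this step from a slow offline phase toward a self-supervised one.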

    Terrain Referenced Navigation Using SIFT Features in LiDAR Range-Based Data

    The use of GNSS to aid navigation has become widespread in aircraft. The long-term accuracy of an INS is enhanced by frequent updates from the highly precise position estimates that GNSS provides. Unfortunately, operational environments exist where a constant signal or the requisite number of satellites is unavailable, significantly degraded, or intentionally denied. This thesis describes a novel algorithm that uses scanning LiDAR range data, computer vision features, and a reference database to generate aircraft position estimates that update drifting INS estimates. The algorithm uses a single calibrated scanning LiDAR to sample the range and angle to the ground as the aircraft flies, forming a point cloud. The point cloud is orthorectified into a coordinate system common to a previously recorded reference of the flyover region, and then interpolated into a Digital Elevation Model (DEM) of the ground. Range-based SIFT features are extracted from both the airborne and reference DEMs. Features common to both the collected and reference range images are selected using a SIFT descriptor search. Geometrically inconsistent features are filtered out using RANSAC outlier removal, and surviving features are projected back to their source coordinates in the original point cloud. The point-cloud features are used to calculate a least-squares correspondence transform that aligns the collected features to the reference features. The correspondence that best aligns the ground features is then applied to the nominal aircraft position, creating a new position estimate. The algorithm was tested on legacy flight data and typically produces position estimates within 10 meters of truth using threshold conditions.
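The final least-squares correspondence transform can be illustrated in 2D with the closed-form Procrustes fit below: given matched feature coordinates, recover the rotation and translation that best align the collected points to the reference. The coordinates are toy values, and the thesis's actual solution operates on 3D point-cloud features.

```python
# Closed-form 2-D least-squares rigid alignment of matched point pairs.

import math

def fit_transform(src, dst):
    """Return (theta, tx, ty) minimizing sum ||R(theta)*s + t - d||^2."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy       # centred source point
        bx, by = dx - cdx, dy - cdy       # centred destination point
        num += ax * by - ay * bx          # sum of cross products
        den += ax * bx + ay * by          # sum of dot products
    theta = math.atan2(num, den)          # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)        # translation after rotation
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]   # src rotated 90° then shifted
theta, tx, ty = fit_transform(src, dst)
print(round(math.degrees(theta)), round(tx, 2), round(ty, 2))  # 90 2.0 3.0
```

Applying the recovered transform to the nominal aircraft position is what produces the updated position estimate described above.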