
    Indoor Geo-location And Tracking Of Mobile Autonomous Robot

    The field of robotics has fascinated the public ever since films such as The Terminator. Even though we still do not have robots that can truly replicate human action and intelligence, progress is being made in the right direction. Robotic applications range from defense to civilian uses such as public safety and firefighting. With the increase in urban warfare, tracking robots inside buildings and cities has become a very important application; uses range from munitions tracking to replacing soldiers in reconnaissance, and firefighters use robots to survey affected areas. Robot tracking has traditionally been limited to the local area under consideration; decision making is inhibited by this limited local knowledge, and approximations have to be made. Effective decision making requires tracking the robot in Earth coordinates such as latitude and longitude. GPS signals provide sufficient and reliable data for such decision making, but GPS is unavailable indoors and suffers signal attenuation even outdoors. Indoor geolocation therefore forms the basis of tracking robots inside buildings and in other places where GPS signals are unavailable; it has traditionally been the domain of wireless networking, using techniques such as low-frequency RF signals and ultra-wideband antennas. In this thesis we propose a novel method for achieving geolocation and enabling tracking. Both are achieved with a combination of a gyroscope and wheel encoders, together referred to as the Inertial Navigation System (INS). Gyroscopes have been widely used in aerospace applications for stabilizing aircraft; in our case the gyroscope determines the heading of the robot, and commands can be sent to the robot when it is off balance or off track. Sensors are inherently error-prone, so geolocation is complicated and limited by the imperfect mathematical modeling of input noise. We use a Kalman Filter to process the noisy sensor data, as it provides a robust and stable algorithm: the error characteristics of the sensors are input to the filter, and filtered data is obtained. We have performed a large set of experiments, both indoors and outdoors, to test the reliability of the system. Outdoors we use the GPS signal to aid the INS measurements; indoors we extrapolate from the last known position to obtain GPS coordinates.
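
    To make the filtering step concrete, here is a minimal sketch of a dead-reckoning Kalman-style filter of the kind the abstract describes: encoder distance and a gyro heading increment drive the prediction, and GPS fixes, when available, correct the estimate. The 2D state layout [east, north, heading] and all noise values are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

class DeadReckoningEKF:
    def __init__(self):
        self.x = np.zeros(3)                   # state: [east, north, heading]
        self.P = np.eye(3)                     # state covariance
        self.Q = np.diag([0.05, 0.05, 0.01])   # assumed process noise

    def predict(self, d, dtheta):
        """Propagate with encoder distance d and gyro heading change dtheta."""
        th = self.x[2]
        self.x += np.array([d * np.cos(th), d * np.sin(th), dtheta])
        F = np.array([[1, 0, -d * np.sin(th)],   # Jacobian of the motion model
                      [0, 1,  d * np.cos(th)],
                      [0, 0,  1]])
        self.P = F @ self.P @ F.T + self.Q

    def update_gps(self, z, r=2.0):
        """Correct east/north with a GPS fix z (r: assumed GPS std dev, meters)."""
        H = np.array([[1, 0, 0], [0, 1, 0]])
        S = H @ self.P @ H.T + np.eye(2) * r**2
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x += K @ (z - H @ self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
```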

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques because it supports more reliable and robust localization, planning, and control, meeting key criteria for autonomous driving. In this study the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test demonstrating a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with 4–5 cm accuracy can then be achieved from this pre-generated map and online Lidar scan matching with a tightly fused inertial system.
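
    As a point of reference for the scan-matching step mentioned above, the following is a minimal 2D point-to-point ICP sketch that aligns an online Lidar scan to a pre-built map. Real autonomous-driving pipelines work on 3D point clouds with robust kernels and tightly coupled inertial data; the 2D setting, iteration count, and tolerance here are simplifying assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(scan, map_pts, iters=30, tol=1e-6):
    """Align scan (N,2) to map_pts (M,2); return rotation R (2,2), translation t (2,)."""
    tree = cKDTree(map_pts)
    R, t = np.eye(2), np.zeros(2)
    prev_err = np.inf
    for _ in range(iters):
        moved = scan @ R.T + t                   # apply current pose estimate
        dist, idx = tree.query(moved)            # nearest map point per scan point
        matched = map_pts[idx]
        # Closed-form rigid alignment (Kabsch) of the matched pairs.
        mu_s, mu_m = moved.mean(0), matched.mean(0)
        H = (moved - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        dR = Vt.T @ D @ U.T
        R, t = dR @ R, dR @ (t - mu_s) + mu_m    # compose the incremental fix
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```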

    Survey of computer vision algorithms and applications for unmanned aerial vehicles

    This paper presents a complete review of computer vision algorithms and vision-based intelligent applications developed in the field of Unmanned Aerial Vehicles (UAVs) over the last decade. During this time, the evolution of relevant technologies for UAVs, such as component miniaturization, increased computational capability, and advances in computer vision techniques, has enabled important progress in UAV technologies and applications. In particular, computer vision technologies integrated into UAVs make it possible to cope with aerial perception difficulties through visual navigation algorithms, obstacle detection and avoidance, and aerial decision making. These technologies have opened up a wide spectrum of UAV applications beyond the classic military and defense purposes. Unmanned Aerial Vehicles and computer vision are common topics in expert systems, and thanks to recent advances in perception technologies, modern intelligent applications have been developed to enhance autonomous UAV positioning, avoid aerial collisions automatically, and more. The presented survey therefore focuses on artificial perception applications that represent important advances of the last few years in the expert systems field related to UAVs. The paper presents the most significant advances in this field, which address fundamental technical limitations such as visual odometry, obstacle detection, and mapping and localization, and analyzes them based on their capabilities and potential utility. The applications and UAVs are also divided and categorized according to different criteria. This research is supported by the Spanish Government through the CICYT projects (TRA2015-63708-R and TRA2013-48314-C3-1-R).
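
    One of the perception building blocks the survey covers is visual odometry. The sketch below shows a common monocular formulation using OpenCV: match ORB features between consecutive frames and recover the relative camera pose from the essential matrix. The intrinsics matrix K and the input frames are placeholders, and monocular VO leaves translation scale ambiguous, which real systems resolve with other sensors.

```python
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate camera rotation R and unit-scale translation t between two grayscale frames."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier matches while estimating the essential matrix.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t          # t has unit norm: monocular VO needs external scale
```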

    A Comprehensive Review on Autonomous Navigation

    The field of autonomous mobile robots has undergone dramatic advances over the past decades. Despite important milestones, several challenges remain to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and of the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for this survey is twofold. First, the autonomous navigation field evolves quickly, so regular surveys are crucial to keep the research community aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation, so this paper also gives an appropriate treatment of the role of deep learning in autonomous navigation. Future work and research gaps are also discussed.
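
    As a concrete illustration of one topic in the review, path planning, here is a minimal A* planner on a 2D occupancy grid. The 4-connected grid and Manhattan heuristic are illustrative choices, not methods taken from the paper.

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = occupied; start, goal: (row, col) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start)]          # heap entries: (f = g + h, g, cell)
    came_from = {start: None}
    g_best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                        # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        if g > g_best[cur]:                    # stale heap entry, skip it
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < g_best.get(nxt, float("inf"))):
                g_best[nxt] = g + 1
                came_from[nxt] = cur
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt))
    return None                                # goal unreachable
```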

    Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition

    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics, both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, the competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed, consisting of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. In total, 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that high-accuracy indoor positioning solutions are achievable in large, realistic environments using lightweight wearable sensors, without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
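
    The scoring rule described above can be sketched in a few lines: each landmark's error is its horizontal positioning error plus a fixed penalty per floor of misdetection, and the accuracy score is the third quartile of those errors. The 15 m-per-floor penalty follows the EvAAL convention used at IPIN competitions; treat it here as an assumed constant.

```python
import numpy as np

def accuracy_score(horiz_err_m, est_floor, true_floor, floor_penalty_m=15.0):
    """Per-landmark error = horizontal error + penalty per missed floor; score = Q3."""
    err = (np.asarray(horiz_err_m, dtype=float)
           + floor_penalty_m * np.abs(np.asarray(est_floor) - np.asarray(true_floor)))
    return np.percentile(err, 75)              # third quartile over all landmarks

# e.g. accuracy_score([1.2, 0.8, 3.5], [0, 1, 1], [0, 1, 2])
# scores the Q3 of the combined errors [1.2, 0.8, 18.5].
```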

    State estimation for aggressive flight in GPS-denied environments using onboard sensing

    In this paper we present a state estimation method based on an inertial measurement unit (IMU) and a planar laser range finder, suitable for real-time use on a fixed-wing micro air vehicle (MAV). The algorithm is capable of maintaining accurate state estimates during aggressive flight in unstructured 3D environments without the use of an external positioning system. Our localization algorithm is based on an extension of the Gaussian Particle Filter. We partition the state according to measurement independence relationships and then calculate a pseudo-linear update, which allows us to use 20x fewer particles than a naive implementation while achieving similar accuracy in the state estimate. We also propose a multi-step forward fitting method to identify the noise parameters of the IMU, and we compare results with and without accurate position measurements. Our process and measurement models integrate naturally with an exponential-coordinates representation of the attitude uncertainty. We demonstrate our algorithms experimentally on a fixed-wing vehicle flying in a challenging indoor environment.
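
    The building block the paper extends is the Gaussian Particle Filter measurement update, which can be sketched as follows: sample particles from the current Gaussian belief, weight them by the measurement likelihood, and refit a Gaussian to the weighted set. The particle count and the user-supplied likelihood function are illustrative; the paper's partitioned, pseudo-linear variant is what makes this tractable with 20x fewer particles.

```python
import numpy as np

def gpf_update(mu, P, z, likelihood, n_particles=500, rng=None):
    """One GPF update: prior N(mu, P), measurement z, likelihood(z, x) = p(z | x)."""
    if rng is None:
        rng = np.random.default_rng()
    particles = rng.multivariate_normal(mu, P, size=n_particles)
    w = np.array([likelihood(z, x) for x in particles])
    w /= w.sum()                               # normalized importance weights
    mu_new = w @ particles                     # weighted posterior mean
    d = particles - mu_new
    P_new = (w[:, None] * d).T @ d             # weighted posterior covariance
    return mu_new, P_new
```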

    Monocular Vision Localization Using a Gimbaled Laser Range Sensor

    There have been great advances in recent years in the area of indoor navigation. Many of these new navigation systems rely on digital images to aid inertial navigation estimates. The Air Force Institute of Technology (AFIT) has been conducting research in this area for a number of years, with image-aiding techniques centered on tracking stationary features in order to improve inertial navigation estimates. Previous research has used stereo vision systems, or terrain constraints with monocular systems, to estimate feature locations. While these methods have shown good results, they have drawbacks. First, as unmanned exploration vehicles become smaller, the baseline available between two cameras shrinks, reducing ranging accuracy. Second, with a monocular system, terrain data might not be known in an unexplored environment. This research explores the use of a small gimbaled laser range sensor and a monocular camera to estimate feature locations. The gimbaled system consists of a commercial off-the-shelf range sensor, a pair of hobby-style servos, and a microcontroller that accepts azimuth and elevation commands. The system is approximately 15x8x12 cm and weighs less than 120 grams. This novel approach, called laser-aided image inertial navigation, provides precise depth measurements to key features. The locations of these key features are then calculated based on the current state estimates of an Extended Kalman Filter. This method of estimating feature locations is tested both in simulation and on real-world imagery. Navigation experiments are presented that compare this method with previous image-aided filters. While only a limited number of tests were conducted, simulated and real-world flight tests show that the monocular laser-aided filter can estimate the trajectory of a vehicle to within a few tenths of a meter, without terrain constraints or any prior knowledge of the operational area.
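
    The core geometric step, locating a feature from the gimbal angles and the laser range, can be sketched as follows. The spherical-to-Cartesian convention (x forward, y right, z down, elevation positive up) and the identity sensor-to-body alignment are simplifying assumptions; the actual filter folds this measurement through an Extended Kalman Filter with full uncertainty bookkeeping.

```python
import numpy as np

def feature_position(range_m, az, el, p_veh, R_nav_body):
    """az, el in radians; p_veh: vehicle position (3,); R_nav_body: body-to-nav rotation (3,3)."""
    # Spherical-to-Cartesian unit ray in the sensor frame (x fwd, y right, z down).
    ray = np.array([np.cos(el) * np.cos(az),
                    np.cos(el) * np.sin(az),
                    -np.sin(el)])
    # Scale by the laser range and map into the navigation frame.
    return p_veh + R_nav_body @ (range_m * ray)
```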

    Collaborative navigation as a solution for PNT applications in GNSS challenged environments: report on field trials of a joint FIG / IAG working group

    PNT stands for Positioning, Navigation, and Timing. Space-based PNT refers to the capabilities enabled by GNSS, and enhanced by Ground- and Space-Based Augmentation Systems (GBAS and SBAS), which provide position, velocity, and timing information to an unlimited number of users around the world, allowing every user to operate in the same reference system and timing standard. Such information has become increasingly critical to the security, safety, prosperity, and overall quality of life of many citizens. As a result, space-based PNT is now widely recognized as an essential element of the global information infrastructure. This paper discusses the importance of the availability and continuity of PNT information, whose application, scope and significance have exploded in the past 10–15 years. A paradigm shift in the navigation solution has been observed in recent years, manifested by an evolution from traditional single-sensor solutions to multi-sensor solutions, and ultimately to collaborative navigation and layered sensing using non-traditional sensors and techniques, the so-called signals of opportunity. A joint working group under the auspices of the International Federation of Surveyors (FIG) and the International Association of Geodesy (IAG), entitled 'Ubiquitous Positioning Systems', investigated the use of Collaborative Positioning (CP) through several field trials over the past four years. In this paper, the concept of CP is discussed in detail and selected results of these experiments are presented. It is demonstrated that CP is a viable solution when a 'network' or 'neighbourhood' of users is to be positioned and navigated together, as it increases the accuracy, integrity, availability, and continuity of the PNT information for all users.
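
    A minimal sketch of the CP idea: a user without a usable GNSS fix estimates its 2D position by Gauss-Newton least squares from range measurements to peers whose positions are known. Peer position uncertainty, measurement weighting, and the full multi-user adjustment are omitted; at least three well-distributed peers are needed for an unambiguous 2D fix.

```python
import numpy as np

def collaborative_fix(peer_pos, ranges, x0, iters=10):
    """peer_pos: (N,2) known peer positions; ranges: (N,) measured distances; x0: initial guess."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - peer_pos                    # vectors from each peer to the user
        d = np.linalg.norm(diff, axis=1)       # predicted ranges
        J = diff / d[:, None]                  # Jacobian of predicted range wrt x
        r = np.asarray(ranges) - d             # range residuals
        dx, *_ = np.linalg.lstsq(J, r, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-6:          # converged
            break
    return x
```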

    Adaptive covariance estimation method for LiDAR-Aided multi-sensor integrated navigation systems

    Accurate estimation of measurement covariance is a fundamental problem in sensor fusion and is crucial for the proper operation of filtering algorithms. This paper provides an innovative solution to this problem and realizes it in a 2D indoor navigation system for unmanned ground vehicles (UGVs) that fuses measurements from a MEMS-grade gyroscope, a speed sensor, and a light detection and ranging (LiDAR) sensor. A computationally efficient weighted line-extraction method is introduced in which LiDAR intensity measurements are used, so that both the random range errors and the systematic errors due to surface reflectivity in LiDAR measurements are taken into account. The vehicle pose change is obtained from LiDAR line-feature matching, and the corresponding pose-change covariance is estimated by a weighted least-squares technique. The estimated LiDAR-based pose changes are applied as periodic updates to the Inertial Navigation System (INS) in an innovative extended Kalman filter (EKF) design. The influences of the environment's geometric layout and of line-estimation error are also discussed. Real experiments in an indoor environment were performed to evaluate the proposed algorithm. The results showed great consistency between the LiDAR-estimated pose changes…
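
    The weighted line-fitting step can be sketched as follows: fit LiDAR points to a line with per-point weights (which the paper derives from intensity and range-error models) and read the parameter covariance off the weighted normal equations. The simple y = a*x + b parameterization is an illustrative choice; LiDAR pipelines often prefer a polar (rho, alpha) form to handle near-vertical lines.

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Fit y = a*x + b with per-point weights w; return (params, covariance)."""
    A = np.column_stack([x, np.ones_like(x)])
    W = np.diag(w)
    N = A.T @ W @ A                            # weighted normal matrix
    params = np.linalg.solve(N, A.T @ W @ y)   # [a, b]
    resid = y - A @ params
    sigma2 = (resid @ W @ resid) / (len(x) - 2)   # a-posteriori variance factor
    cov = sigma2 * np.linalg.inv(N)            # parameter covariance estimate
    return params, cov
```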