
    Survey on Recent Advances in Integrated GNSSs Towards Seamless Navigation Using Multi-Sensor Fusion Technology

    During the past few decades, the presence of global navigation satellite systems (GNSSs) such as GPS, GLONASS, Beidou and Galileo has facilitated positioning, navigation and timing (PNT) for various outdoor applications. With the rapid increase in the number of orbiting satellites per GNSS, enhancements in satellite-based augmentation systems (SBASs) such as EGNOS and WAAS, as well as the commissioning of new GNSS constellations, PNT capabilities are being pushed to new frontiers. Additionally, recent developments in precise point positioning (PPP) and real-time kinematic (RTK) algorithms have made carrier-phase precision positioning solutions, up to full three-dimensional localization, more feasible. With the rapid growth of internet of things (IoT) applications, seamless navigation has become crucial for numerous PNT-dependent applications, especially in sensitive fields such as safety and industrial applications. Over the years, GNSSs have maintained acceptable performance in PNT, RTK and PPP applications; however, GNSS faces major challenges in some complicated signal environments. In many scenarios, the GNSS signal deteriorates due to multipath fading and attenuation in densely obscured environments with substantial obstructions. Recently, there has been growing demand, e.g. in the autonomous-things domain, for reliable systems that accurately estimate position, velocity and time (PVT) observables. In many applications, this demand also extends to retrieving the six-degrees-of-freedom (6-DOF: x, y, z, roll, pitch and heading) movements of the target anchors. Numerous modern applications benefit from precise PNT solutions, such as unmanned aerial vehicles (UAVs), automatic guided vehicles (AGVs) and intelligent transportation systems (ITS). Hence, multi-sensor fusion technology has become vital in seamless navigation systems owing to its capabilities complementary to GNSSs. Fusion-based positioning combines measurements from multiple sensors with the primary GNSS solution for further refinement, resulting in higher-precision, less erroneous localization. Inertial navigation systems (INSs) and their inertial measurement units (IMUs) are the most commonly used technologies for augmenting GNSS in multi-sensor integrated systems. In this article, we survey the most recent literature on multi-sensor GNSS technology for seamless navigation. We provide an overall perspective on the advantages, challenges and recent developments of the fusion-based GNSS navigation realm, and analyze the gap between scientific advances and commercial offerings. INS/GNSS and IMU/GNSS systems have proven to be very reliable in GNSS-denied environments where satellite signal degradation is at its peak, which is why both integrated systems are abundant in the relevant literature. In addition, light detection and ranging (LiDAR) systems are widely adopted in the literature for their capability to provide 6-DOF estimates for mobile vehicles and autonomous robots. LiDARs are very accurate; however, they are not suitable for low-cost positioning due to their high initial cost. Moreover, several other techniques from the radio frequency (RF) spectrum are utilized in multi-sensor systems, such as cellular networks, WiFi, ultra-wideband (UWB) and Bluetooth. Cellular-based systems are well suited to outdoor navigation applications, while WiFi-based, UWB-based and Bluetooth-based systems are efficient in indoor positioning systems (IPS). However, to achieve reliable PVT estimation in multi-sensor GNSS navigation, optimal algorithms must be developed to mitigate the estimation errors arising from non-line-of-sight (NLOS) GNSS situations. The most commonly used algorithms for trilateration-based positioning include Kalman filters, weighted least squares (WLS), particle filters (PF) and hybrid algorithms that combine two or more of these approaches. In this paper, the reviewed articles are presented by highlighting their motivation, implementation methodology, modelling and experiments. They are then assessed with respect to the published results, focusing on achieved accuracy, robustness and overall implementation cost-benefit as performance metrics. Our survey assesses the most promising, highly ranked and recent articles that offer insights into the future of GNSS technology with multi-sensor fusion. ©2021 The Authors. Published by ION.
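    As a minimal illustration of the trilateration-based positioning that these algorithms refine, the Python sketch below solves for a receiver position from ranges to known anchors via iterative weighted least squares (WLS). It is not taken from any of the surveyed papers; the anchor layout, weights and Gauss-Newton iteration count are illustrative assumptions.

```python
import numpy as np

def wls_trilateration(anchors, ranges, weights, x0, iters=10):
    """Iterative Gauss-Newton weighted least squares position fix.

    anchors : (N, 3) known anchor/satellite positions
    ranges  : (N,)   measured distances to each anchor
    weights : (N,)   measurement weights (e.g. inverse variances)
    x0      : (3,)   initial position guess
    """
    x = np.asarray(x0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        diffs = x - anchors                       # (N, 3)
        predicted = np.linalg.norm(diffs, axis=1)
        H = diffs / predicted[:, None]            # Jacobian of range w.r.t. position
        residual = ranges - predicted
        # Weighted normal equations: (H^T W H) dx = H^T W r
        dx = np.linalg.solve(H.T @ W @ H, H.T @ W @ residual)
        x += dx
        if np.linalg.norm(dx) < 1e-6:
            break
    return x

# Toy example with four anchors and slightly noisy ranges (all values assumed)
anchors = np.array([[0., 0., 10.], [100., 0., 12.], [0., 100., 9.], [100., 100., 11.]])
truth = np.array([40., 60., 1.5])
ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.normal(0, 0.3, 4)
print(wls_trilateration(anchors, ranges, np.ones(4), x0=np.array([50., 50., 0.])))
```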

    A simulation framework for UAV sensor fusion

    Proceedings of: 5th International Conference, HAIS 2010, San Sebastián, Spain, June 23-25, 2010. Behavior recognition has been one of the most prolific lines of research in computer vision in recent decades. Within this field, the majority of research has focused on recognizing the activities carried out by a single individual; this paper, however, addresses the problem of recognizing the behavior of a group of individuals, in which the relations between the component elements are of great importance. For this purpose, a new representation is proposed that concentrates all the necessary information concerning the peer-to-peer relations present in the group, the semantics of the different groups formed by the individuals, and the formation (or structure) of each of them. The work is demonstrated on the dataset created in CVBASE06, which deals with European handball. This work was supported in part by Projects ATLANTIDA, CICYT TIN2008-06742-C02-02/TSI, CICYT TEC2008-06732-C02-02/TEC, SINPROB, CAM CONTEXTS S2009/TIC-1485 and DPS2008-07029-C02-02.
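    As a hedged sketch of what a peer-to-peer relation representation for a group might look like, the snippet below computes pairwise distances and bearings between player positions in a single frame; the function name and the choice of features are illustrative assumptions, not the representation proposed in the paper.

```python
import numpy as np

def pairwise_relation_descriptor(positions):
    """Build a simple peer-to-peer relation descriptor for a group of players.

    positions : (N, 2) array of player coordinates in a single frame.
    Returns the pairwise distance matrix and pairwise bearing matrix, which
    together summarize the geometric relations within the group.
    """
    diffs = positions[:, None, :] - positions[None, :, :]   # (N, N, 2)
    distances = np.linalg.norm(diffs, axis=-1)               # (N, N)
    bearings = np.arctan2(diffs[..., 1], diffs[..., 0])      # (N, N)
    return distances, bearings

# Toy frame with five players (coordinates assumed)
frame = np.array([[1.0, 2.0], [3.5, 1.0], [4.0, 4.5], [0.5, 5.0], [2.0, 3.0]])
D, B = pairwise_relation_descriptor(frame)
print(D.round(2))
```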

    Low-cost RPAS navigation and guidance system using Square Root Unscented Kalman Filter

    Multi-Sensor Data Fusion (MSDF) techniques involving satellite and inertial-based sensors are widely adopted to improve the navigation solution of a number of mission- and safety-critical tasks. Such integrated Navigation and Guidance Systems (NGS) currently do not meet the required level of performance in all flight phases of small Remotely Piloted Aircraft Systems (RPAS). In this paper an innovative Square Root-Unscented Kalman Filter (SR-UKF) based NGS is presented and compared with a conventional UKF-governed design. The presented system architectures adopt a state-of-the-art information fusion approach based on a number of low-cost sensors including: Global Navigation Satellite Systems (GNSS), a Micro-Electro-Mechanical System (MEMS) based Inertial Measurement Unit (IMU) and Vision Based Navigation (VBN) sensors. Additionally, an Aircraft Dynamics Model (ADM), which is essentially a knowledge-based module, is employed to compensate for the MEMS-IMU sensor shortcomings in high-dynamics attitude determination tasks. The ADM acts as a virtual sensor and its measurements are processed with non-linear estimation in order to increase its operational validity time. An improvement in the ADM navigation state vector (i.e., position, velocity and attitude) measurements is obtained thanks to the accurate modeling of aircraft dynamics and advanced processing techniques. An innovative SR-UKF based VBN-IMU-GNSS-ADM (SR-U-VIGA) architecture design was implemented and compared with a typical UKF design (U-VIGA) in a small RPAS (AEROSONDE) integration arrangement, exploring a representative cross-section of the operational flight envelope. The comparison of position and attitude data shows that the SR-U-VIGA and U-VIGA NGS fulfill the relevant RNP criteria, including precision approach tasks.
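    To illustrate the core idea behind the square-root formulation, the sketch below generates scaled sigma points directly from a Cholesky factor of the covariance and propagates them through a nonlinear function. It is a simplified, illustrative fragment (a full SR-UKF also updates the factor via QR decompositions and rank-one Cholesky updates rather than re-factorizing); the scaling parameters and toy state are assumptions, not the paper's implementation.

```python
import numpy as np

def sigma_points_from_sqrt(x, S, alpha=1.0, beta=2.0, kappa=0.0):
    """Generate scaled sigma points directly from a Cholesky factor S (P = S S^T).

    Working with S rather than the full covariance is the key idea behind the
    square-root UKF: it guarantees a positive-semidefinite covariance and
    improves numerical conditioning.
    """
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    c = np.sqrt(n + lam)
    pts = np.column_stack([x, x[:, None] + c * S, x[:, None] - c * S])  # (n, 2n+1)
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return pts, wm, wc

def unscented_transform(pts, wm, wc, f):
    """Propagate sigma points through a nonlinear function f and recover the
    transformed mean and covariance (the full SR-UKF would instead update the
    Cholesky factor directly, never forming the covariance matrix)."""
    Y = np.column_stack([f(pts[:, i]) for i in range(pts.shape[1])])
    mean = Y @ wm
    dev = Y - mean[:, None]
    cov = dev @ np.diag(wc) @ dev.T
    return mean, cov

# Toy usage: propagate a 2-D state through a mild nonlinearity (values assumed)
x = np.array([1.0, 0.5])
S = np.linalg.cholesky(np.diag([0.04, 0.01]))
pts, wm, wc = sigma_points_from_sqrt(x, S)
mean, cov = unscented_transform(pts, wm, wc, lambda s: np.array([np.sin(s[0]), s[1]**2 + s[0]]))
print(mean, np.linalg.cholesky(cov))
```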

    Unmanned Aircraft System Navigation in the Urban Environment: A Systems Analysis

    Peer Reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/140665/1/1.I010280.pd

    Low-cost sensors based multi-sensor data fusion techniques for RPAS navigation and guidance

    In order for Remotely Piloted Aircraft Systems (RPAS) to coexist seamlessly with manned aircraft in non-segregated airspace, enhanced navigational capabilities are essential to meet the Required Navigation Performance (RNP) levels in all flight phases. A Multi-Sensor Data Fusion (MSDF) framework is adopted to improve the navigation capabilities of an integrated Navigation and Guidance System (NGS) designed for small-sized RPAS. The MSDF architecture includes low-cost and low weight/volume navigation sensors suitable for various classes of RPAS. The selected sensors include Global Navigation Satellite Systems (GNSS), a Micro-Electro-Mechanical System (MEMS) based Inertial Measurement Unit (IMU) and Vision Based Sensors (VBS). A loosely integrated navigation architecture is presented in which an Unscented Kalman Filter (UKF) is used to combine the navigation sensor measurements. The presented UKF based VBS-INS-GNSS-ADM (U-VIGA) architecture is an evolution of previous research performed on Extended Kalman Filter (EKF) based VBS-INS-GNSS (E-VIGA) systems. An Aircraft Dynamics Model (ADM) is adopted as a virtual sensor and acts as a knowledge-based module providing additional position and attitude information, which is pre-processed by an additional/local UKF. The E-VIGA and U-VIGA performances are evaluated in a small RPAS integration scheme (i.e., the AEROSONDE RPAS platform) by exploring a representative cross-section of this RPAS's operational flight envelope. The position and attitude accuracy comparison shows that the E-VIGA and U-VIGA systems fulfill the relevant RNP criteria, including precision approach in CAT-II. A novel Human Machine Interface (HMI) architecture is also presented, whose design takes into consideration the coordination tasks of multiple human operators. In addition, the interface scheme incorporates the human operator as an integral part of the control loop, providing a higher level of situational awareness.
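    A minimal sketch of the loosely coupled idea described above is given below: the GNSS receiver's own position/velocity solution is treated as a direct measurement of the INS-predicted state and fused through a single Kalman measurement update. The state layout, covariances and numbers are illustrative assumptions, not the U-VIGA implementation.

```python
import numpy as np

def loosely_coupled_update(x, P, z_gnss, R_gnss):
    """One Kalman measurement update for a loosely coupled INS/GNSS architecture.

    x : (6,) INS-predicted state [position (3), velocity (3)]
    P : (6, 6) predicted state covariance
    z_gnss : (6,) GNSS receiver output [position fix (3), velocity fix (3)]
    R_gnss : (6, 6) GNSS measurement noise covariance

    In a loosely coupled scheme the GNSS receiver's own PVT solution is used
    as the measurement, so the observation model is simply H = I.
    """
    H = np.eye(6)
    y = z_gnss - H @ x                      # innovation
    S = H @ P @ H.T + R_gnss                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

# Toy usage: correct a drifted INS prediction with a GNSS fix (values assumed)
x_ins = np.array([100.2, 50.1, 10.5, 1.0, 0.0, -0.1])
P = np.diag([4.0, 4.0, 9.0, 0.25, 0.25, 0.25])
z = np.array([99.0, 49.5, 9.8, 0.9, 0.05, -0.05])
R = np.diag([1.0, 1.0, 2.25, 0.01, 0.01, 0.01])
x_fused, P_fused = loosely_coupled_update(x_ins, P, z, R)
print(x_fused)
```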

    Validation and Experimental Testing of Observers for Robust GNSS-Aided Inertial Navigation

    This chapter presents a study of state estimators for robust navigation. Navigation of vehicles is a vast field with multiple decades of research. The main aim is to estimate position, linear velocity, and attitude (PVA) under all dynamics, motions, and conditions via data fusion. The state estimation problem will be considered from two different perspectives using the same kinematic model. First, the extended Kalman filter (EKF) will be reviewed, as an example of a stochastic approach; second, a recent nonlinear observer will be considered as a deterministic case. A comparative study of strapdown inertial navigation methods for estimating the PVA of aerial vehicles by fusing inertial sensors with global navigation satellite system (GNSS)-based positioning will be presented. The focus will be on loosely coupled integration methods and on performance analysis comparing these methods in terms of their stability, robustness to vibrations, and disturbances in measurements.
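    For the deterministic side of this comparison, the following is a minimal sketch of a nonlinear attitude observer of the complementary-filter type: gyro integration on SO(3) is corrected by the gravity direction sensed by the accelerometer. The gain, frame conventions and toy inputs are assumptions for illustration, not the observer evaluated in the chapter.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def attitude_observer_step(R, gyro, accel, dt, k_p=1.0):
    """One step of a simple deterministic nonlinear attitude observer.

    R     : current rotation-matrix estimate (body to navigation frame)
    gyro  : measured angular rate in the body frame (rad/s)
    accel : measured specific force in the body frame (gravity reference)
    At low acceleration the normalized accelerometer reading points "up" in
    the body frame; its mismatch with the predicted "up" direction drives a
    correction term added to the gyro rate with injection gain k_p.
    """
    g_up_nav = np.array([0.0, 0.0, 1.0])            # "up" in an ENU navigation frame
    v_meas = accel / np.linalg.norm(accel)          # measured "up" direction (body frame)
    v_pred = R.T @ g_up_nav                         # predicted "up" direction (body frame)
    err = np.cross(v_meas, v_pred)                  # correction term on SO(3)
    omega = gyro + k_p * err                        # corrected angular rate
    R_next = R @ (np.eye(3) + skew(omega) * dt)     # first-order integration on SO(3)
    # Re-orthonormalize to stay on the rotation manifold
    U, _, Vt = np.linalg.svd(R_next)
    return U @ Vt

# Toy usage: stationary vehicle, gyro with a small bias-like error (values assumed)
R = np.eye(3)
for _ in range(100):
    R = attitude_observer_step(R, gyro=np.array([0.0, 0.0, 0.01]),
                               accel=np.array([0.0, 0.0, 9.81]), dt=0.01)
print(R.round(3))
```

    Note that a gravity reference alone cannot observe heading; in a full GNSS-aided system the heading drift is constrained by additional measurements.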

    Multi-sensor data fusion techniques for RPAS detect, track and avoid

    Accurate and robust tracking of objects is of growing interest amongst the computer vision scientific community. The ability of a multi-sensor system to detect and track objects, and accurately predict their future trajectory, is critical in the context of mission- and safety-critical applications. Remotely Piloted Aircraft Systems (RPAS) are currently not equipped to routinely access all classes of airspace since certified Detect-and-Avoid (DAA) systems are yet to be developed. Such capabilities can be achieved by incorporating both cooperative and non-cooperative DAA functions, as well as providing enhanced communications, navigation and surveillance (CNS) services. DAA is highly dependent on the performance of CNS systems for detection, tracking and avoidance (DTA) tasks and maneuvers. In order to perform effective detection of objects, a number of high-performance, reliable and accurate avionics sensors and systems are adopted, including non-cooperative sensors (visual and thermal cameras, laser radar (LIDAR) and acoustic sensors) and cooperative systems (Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Collision Avoidance System (TCAS)). In this paper the sensor and system information candidates are fully exploited in a Multi-Sensor Data Fusion (MSDF) architecture. An Unscented Kalman Filter (UKF) and a more advanced Particle Filter (PF) are adopted to estimate the state vector of the objects for both maneuvering and non-maneuvering DTA tasks. Furthermore, an artificial neural network is conceptualised/adopted to exploit the use of statistical learning methods, acting to combine the information obtained from the UKF and PF. After describing the MSDF architecture, the key mathematical models for data fusion are presented. Conceptual studies are carried out on visual and thermal image fusion architectures.
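    As an illustration of the particle filter mentioned above, the sketch below implements a bootstrap particle filter that tracks a single object in 2-D under a constant-velocity model with noisy position observations. The particle count, noise levels and resampling scheme are illustrative assumptions rather than the paper's DTA tracker.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_track(z_positions, n_particles=2000, dt=1.0,
                          q_accel=0.5, r_meas=5.0):
    """Bootstrap particle filter tracking a single target in 2-D.

    State per particle: [x, y, vx, vy] under a noisy constant-velocity model.
    z_positions : (T, 2) noisy position observations of the target.
    Returns the (T, 4) sequence of weighted-mean state estimates.
    """
    particles = np.zeros((n_particles, 4))
    particles[:, :2] = z_positions[0] + rng.normal(0, r_meas, (n_particles, 2))
    particles[:, 2:] = rng.normal(0, 2.0, (n_particles, 2))
    estimates = []
    for z in z_positions:
        # Predict: constant-velocity motion with random acceleration noise
        accel = rng.normal(0, q_accel, (n_particles, 2))
        particles[:, :2] += particles[:, 2:] * dt + 0.5 * accel * dt**2
        particles[:, 2:] += accel * dt
        # Update: weight particles by the likelihood of the position measurement
        d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
        weights = np.exp(-0.5 * d2 / r_meas**2)
        weights /= weights.sum()
        estimates.append(weights @ particles)
        # Resample (systematic resampling keeps the particle count constant)
        idx = np.searchsorted(np.cumsum(weights),
                              (rng.random() + np.arange(n_particles)) / n_particles)
        particles = particles[idx]
    return np.asarray(estimates)

# Toy usage: a target moving diagonally with noisy position reports (values assumed)
truth = np.column_stack([np.arange(50) * 2.0, np.arange(50) * 1.0])
z = truth + rng.normal(0, 5.0, truth.shape)
print(particle_filter_track(z)[-1])
```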

    Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification

    In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles are dependent on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied using jamming technology. This thesis investigates the development of vision-aided navigation algorithms that utilize processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored in this thesis entails defining a set of inertial landmarks, the locations of which are known within the environment, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied to the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that employ sensor fusion of accelerometer and rate gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
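    A minimal sketch of the CAMSHIFT-based landmark tracking step, using the standard OpenCV hue-histogram back-projection pattern, is shown below. The video file name, initial bounding box and histogram thresholds are assumptions for illustration; this is not the thesis code, only generic cv2.CamShift usage whose tracked-window centroid would feed the navigation filter.

```python
import cv2
import numpy as np

def init_landmark_model(frame_bgr, bbox):
    """Build a hue histogram for the landmark inside the initial bounding box."""
    x, y, w, h = bbox
    roi = frame_bgr[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Ignore dark / desaturated pixels when building the color model
    mask = cv2.inRange(hsv, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
    hist = cv2.calcHist([hsv], [0], mask, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_landmark(frame_bgr, hist, window):
    """One CAMSHIFT iteration: returns the rotated box, updated window and centroid.

    The centroid of the returned window serves as the pixel measurement that a
    navigation filter can fuse with the IMU data.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    rotated_box, window = cv2.CamShift(backproj, window, criteria)
    centroid = (window[0] + window[2] / 2.0, window[1] + window[3] / 2.0)
    return rotated_box, window, centroid

# Hypothetical usage on a video stream (file name and initial box are assumptions)
cap = cv2.VideoCapture("landmark_sequence.mp4")
ok, first = cap.read()
window = (300, 200, 60, 60)                 # assumed initial landmark bounding box
hist = init_landmark_model(first, window)
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    _, window, centroid = track_landmark(frame, hist, window)
    print("landmark centroid (px):", centroid)
```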

    Aerial Simultaneous Localization and Mapping Using Earth's Magnetic Anomaly Field

    Aerial magnetic navigation has been shown to be a viable GPS alternative, but it requires a prior-surveyed magnetic map. The miniaturization of atomic magnetometers extends their application to small aircraft at low altitudes, where magnetic maps are especially inaccurate or unavailable. This research presents a simultaneous localization and mapping (SLAM) approach to constrain the drift of an inertial navigation system (INS) without the need for a magnetic map. The filter was demonstrated using real measurements from a professional survey flight and from an AFIT unmanned aerial vehicle.
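    A hedged sketch of the scalar magnetometer measurement model at the heart of magnetic navigation and magnetic SLAM is shown below: the predicted reading is the core field plus a bilinearly interpolated anomaly value at the estimated position, and the filter innovation is the measured reading minus this prediction. The grid, cell size and field values are illustrative assumptions; in a SLAM formulation the anomaly grid itself would be part of the estimated state.

```python
import numpy as np

def predicted_scalar_magnetometer(pos_xy, anomaly_grid, grid_origin, cell_size,
                                  core_field_nT, sensor_bias_nT=0.0):
    """Predict the scalar magnetometer reading at a horizontal position.

    pos_xy       : (2,) estimated position in the local map frame (metres)
    anomaly_grid : (M, N) estimated magnetic anomaly values (nT); in a SLAM
                   formulation these grid values are estimated jointly with
                   the vehicle trajectory.
    The prediction is core field + bilinearly interpolated anomaly + bias;
    the filter's innovation is the measured reading minus this value.
    """
    u = (pos_xy[0] - grid_origin[0]) / cell_size
    v = (pos_xy[1] - grid_origin[1]) / cell_size
    i, j = int(np.floor(u)), int(np.floor(v))
    du, dv = u - i, v - j
    # Bilinear interpolation over the four surrounding grid nodes
    anomaly = ((1 - du) * (1 - dv) * anomaly_grid[i, j]
               + du * (1 - dv) * anomaly_grid[i + 1, j]
               + (1 - du) * dv * anomaly_grid[i, j + 1]
               + du * dv * anomaly_grid[i + 1, j + 1])
    return core_field_nT + anomaly + sensor_bias_nT

# Toy usage with an assumed 4x4 anomaly grid (values in nT)
grid = np.array([[10., 12., 15., 11.],
                 [ 9., 14., 18., 13.],
                 [ 8., 11., 16., 12.],
                 [ 7., 10., 13., 11.]])
z_pred = predicted_scalar_magnetometer(np.array([125.0, 260.0]), grid,
                                        grid_origin=(0.0, 0.0), cell_size=100.0,
                                        core_field_nT=52000.0)
print(z_pred)
```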