Information Aided Navigation: A Review
The performance of inertial navigation systems depends largely on a stable flow of external measurements and information to guarantee continuous filter updates and bound the inertial solution drift. Platforms in different operational environments may at some point be prevented from receiving external measurements, exposing their navigation solution to drift. Over the years, a wide variety of works have been proposed to overcome this shortcoming by exploiting knowledge of the system's current conditions and turning it into an applicable source of information for updating the navigation filter. This paper provides an extensive survey of information-aided navigation, broadly classified into direct, indirect, and model aiding. Each approach is described through the notable works that implemented its concept, its use cases, the relevant state updates, and the corresponding measurement models. By matching the appropriate constraint to a given scenario, one can improve the navigation solution accuracy, compensate for the lost information, and uncover certain internal states that would otherwise remain unobservable. Comment: 8 figures, 3 tables
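The core mechanism the survey describes, turning a known constraint into a filter update, can be sketched in a few lines. The following is an illustrative example only (not from the paper): a zero-velocity pseudo-measurement applied as a standard Kalman update to a position/velocity state; all variable names and noise values are assumptions.

```python
import numpy as np

# Illustrative sketch: a known condition (the platform is stationary) becomes
# a pseudo-measurement z = 0 on the velocity states, fed to the filter like
# any other measurement. State x = [position (3), velocity (3)].

def zupt_update(x, P, sigma_zupt=0.01):
    """Apply a zero-velocity pseudo-measurement to the velocity states."""
    H = np.hstack([np.zeros((3, 3)), np.eye(3)])  # measurement model: velocity only
    R = (sigma_zupt ** 2) * np.eye(3)             # pseudo-measurement noise
    z = np.zeros(3)                               # "observed" velocity: zero
    y = z - H @ x                                 # innovation
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

x = np.array([10.0, 5.0, 0.0, 0.3, -0.2, 0.1])   # drifted velocity estimate
P = np.eye(6)
x_upd, P_upd = zupt_update(x, P)                  # velocities pulled toward zero
```

The same update structure carries over to the other constraints surveyed (non-holonomic, altitude, heading): only `H`, `z`, and `R` change.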
Evaluating indoor positioning systems in a shopping mall: the lessons learned from the IPIN 2018 competition
The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from different research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics both for real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed. They consisted of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed with a 10 cm accuracy, to serve as ground truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) and 3D scanner systems. 34 teams effectively competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results showed that it is possible to obtain high accuracy indoor positioning solutions in large, realistic environments using wearable light-weight sensors without deploying any beacon. This paper describes the organization work of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition and discusses its future
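The scoring rule described above (third quartile of an error metric combining horizontal error and floor detection) can be sketched as follows. This is an illustrative reconstruction, not the official scoring code; in particular, the 15 m per-floor penalty is an assumption for illustration, as the abstract does not state the penalty value.

```python
import numpy as np

# Illustrative sketch of the competition metric: 75th percentile (third
# quartile) of horizontal error plus a fixed penalty per mis-detected floor.
# The 15 m penalty value is an assumption, not taken from the paper.

def accuracy_score(horiz_err_m, true_floor, est_floor, floor_penalty_m=15.0):
    horiz_err_m = np.asarray(horiz_err_m, dtype=float)
    floor_miss = np.abs(np.asarray(true_floor) - np.asarray(est_floor))
    combined = horiz_err_m + floor_penalty_m * floor_miss
    return np.percentile(combined, 75)  # third quartile of the combined error

# Example: five surveyed ground-truth points, one floor mis-detection.
score = accuracy_score([1.2, 0.8, 2.5, 1.0, 3.1],
                       [0, 0, 1, 1, 2],
                       [0, 0, 1, 1, 1])
```

A quartile-based score rewards consistent accuracy over most of the route rather than a few lucky fixes, which is why a single large floor-detection failure inflates the score sharply.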
Data-Driven Meets Navigation: Concepts, Models, and Experimental Validation
The purpose of navigation is to determine the position, velocity, and orientation of manned and autonomous platforms, humans, and animals. Obtaining accurate navigation commonly requires fusion between several sensors, such as inertial sensors and global navigation satellite systems, in a model-based, nonlinear estimation framework. Recently, data-driven approaches applied in various fields have shown state-of-the-art performance compared to model-based methods. In this paper we review multidisciplinary, data-driven navigation algorithms developed and experimentally proven at the Autonomous Navigation and Sensor Fusion Lab (ANSFL), including algorithms suitable for human and animal applications, varied autonomous platforms, and multi-purpose navigation and fusion approaches. Comment: 22 pages, 13 figures
Collaborative navigation as a solution for PNT applications in GNSS challenged environments: report on field trials of a joint FIG / IAG working group
PNT stands for Positioning, Navigation, and Timing. Space-based PNT refers to the capabilities enabled by GNSS, and enhanced by Ground- and Space-Based Augmentation Systems (GBAS and SBAS), which provide position, velocity, and timing information to an unlimited number of users around the world, allowing every user to operate in the same reference system and timing standard. Such information has become increasingly critical to the security, safety, prosperity, and overall quality of life of many citizens. As a result, space-based PNT is now widely recognized as an essential element of the global information infrastructure. This paper discusses the importance of the availability and continuity of PNT information, whose application, scope and significance have exploded in the past 10–15 years. A paradigm shift in the navigation solution has been observed in recent years. It has been manifested by an evolution from traditional single-sensor-based solutions to multiple-sensor-based solutions, and ultimately to collaborative navigation and layered sensing using non-traditional sensors and techniques – so-called signals of opportunity. A joint working group under the auspices of the International Federation of Surveyors (FIG) and the International Association of Geodesy (IAG), entitled ‘Ubiquitous Positioning Systems’, investigated the use of Collaborative Positioning (CP) through several field trials over the past four years. In this paper, the concept of CP is discussed in detail and selected results of these experiments are presented. It is demonstrated here that CP is a viable solution if a ‘network’ or ‘neighbourhood’ of users is to be positioned / navigated together, as it increases the accuracy, integrity, availability, and continuity of the PNT information for all users
Multisensor navigation systems: a remedy for GNSS vulnerabilities?
Space-based positioning, navigation, and timing (PNT) technologies, such as the global navigation satellite systems (GNSS), provide position, velocity, and timing information to an unlimited number of users around the world. In recent years, PNT information has become increasingly critical to the security, safety, and prosperity of the world's population, and is now widely recognized as an essential element of the global information infrastructure. Due to its vulnerabilities and line-of-sight requirements, GNSS alone is unable to provide PNT with the required levels of integrity, accuracy, continuity, and reliability. A multisensor navigation approach offers an effective augmentation in GNSS-challenged environments that holds the promise of delivering robust and resilient PNT. Traditionally, sensors such as inertial measurement units (IMUs), barometers, magnetometers, odometers, and digital compasses have been used. However, recent trends have largely focused on image-based, terrain-based and collaborative navigation to recover the user location. This paper offers a review of the technological advances that have taken place in PNT over the last two decades, and discusses various hybridizations of multisensor systems, building upon the fundamental GNSS/IMU integration. The most important conclusion of this study is that in order to meet the challenging goals of delivering continuous, accurate and robust PNT to the ever-growing numbers of users, the hybridization of a suite of different PNT solutions is required
Planetary Rover Inertial Navigation Applications: Pseudo Measurements and Wheel Terrain Interactions
Accurate localization is a critical component of any robotic system. During planetary missions, these systems are often limited by energy sources and slow spacecraft computers. Using proprioceptive localization (e.g., an inertial measurement unit and wheel encoders) without external aiding is insufficient for accurate localization. This is mainly due to the integrated and unbounded errors of the inertial navigation solution and the drifting position information from wheel encoders caused by wheel slippage. For this reason, planetary rovers often utilize exteroceptive (e.g., vision-based) sensors. On the one hand, localization with proprioceptive sensors is straightforward, computationally efficient, and continuous. On the other hand, using exteroceptive sensors for localization slows the rover's driving speed, reduces the rover traversal rate, and is sensitive to terrain features. Given the advantages and disadvantages of both methods, this thesis focuses on two objectives. First, improving the proprioceptive localization performance without significant changes to the rover operations. Second, enabling an adaptive traversal rate based on the wheel-terrain interactions while keeping the localization reliable.
To achieve the first objective, we utilized zero-velocity updates, zero-angular-rate updates, and the non-holonomicity of a rover to improve localization performance, even with limited available sensors, in a computationally efficient way. Pseudo-measurements generated from proprioceptive sensors while the rover is stationary, together with non-holonomic constraints while traversing, can be utilized to improve the localization performance without any significant changes to the rover operations. Through this work, a substantial improvement in localization performance is observed without the aid of additional exteroceptive sensor information.
To achieve the second objective, the relationship between the estimation of localization uncertainty and wheel-terrain interactions, expressed through the slip ratio, was investigated. This relationship was modeled with a time-series Gaussian process using slippage estimates gathered while the rover is moving. The model then predicts when to switch from moving to stationary conditions by mapping the predicted slippage into a localization uncertainty prediction. Instead of a periodic stopping framework, the method introduced in this work is a slip-aware localization method that lets the rover stop more frequently in high-slip terrains and less frequently in low-slip terrains, while keeping the proprioceptive localization reliable
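The slip-aware stopping idea above can be sketched minimally. This is a hypothetical illustration, not the thesis implementation: the Gaussian-process predictor is replaced here by a simple moving average of recent slip estimates, and the wheel radius, rates, and 0.3 threshold are all assumed values.

```python
import numpy as np

# Illustrative sketch: slip ratio from wheel odometry vs. an independent
# ground-speed estimate, and a stop decision when recent slip is high.
# Threshold and all numbers are assumptions; the thesis uses a Gaussian
# process to predict slip, which is simplified to a moving average here.

def slip_ratio(wheel_radius_m, wheel_rate_rad_s, ground_speed_m_s):
    """s = (r*w - v) / (r*w); 0 = no slip, 1 = wheel spinning in place."""
    v_wheel = wheel_radius_m * wheel_rate_rad_s
    return (v_wheel - ground_speed_m_s) / v_wheel

def should_stop(slip_history, threshold=0.3):
    """Request a stationary (zero-velocity) update when mean slip is high."""
    return float(np.mean(slip_history)) > threshold

# Wheel speed 1.5 m/s but actual ground speed only 0.9 m/s: 40% slip.
s = slip_ratio(0.15, 10.0, 0.9)
```

In high-slip terrain the mean slip crosses the threshold quickly, triggering frequent stops for zero-velocity updates; in low-slip terrain the rover keeps driving, which is the adaptive traversal-rate behaviour the thesis targets.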
Vision-Aided Pedestrian Navigation for Challenging GNSS Environments
There is a strong need for an accurate pedestrian navigation system, functional also in GNSS-challenged environments, namely urban areas and indoors, for improved safety and to enhance everyday life. Pedestrian navigation is mainly needed in these environments, which are challenging not only for GNSS but also for other RF positioning systems and for some non-RF systems, such as the magnetometry used for heading, due to the presence of ferrous material. Indoor and urban navigation has been an active research area for years. No individual system at this time can address all the needs set for pedestrian navigation in these environments, but a fused solution of different sensors can provide better accuracy, availability and continuity. Self-contained sensors, namely digital compasses for measuring heading, gyroscopes for heading changes and accelerometers for the user speed, constitute a good option for pedestrian navigation. However, their performance suffers from noise and biases that result in large position errors increasing with time. Such errors can, however, be mitigated using information about the user's motion obtained from consecutive images taken by a camera carried by the user, provided that its position and orientation with respect to the user's body are known. The motion of the features in the images may then be transformed into information about the user's motion. Due to its distinctive characteristics, this vision-aiding complements other positioning technologies to provide better pedestrian navigation accuracy and reliability. This thesis discusses the concepts of a visual gyroscope that provides the relative user heading and a visual odometer that provides the translation of the user between consecutive images. Both methods use a monocular camera carried by the user.
The visual gyroscope monitors the motion of virtual features, called vanishing points, which arise from parallel straight lines in the scene; the change of their location resolves heading, roll and pitch. The method is applicable to human environments, as the straight lines in built structures enable vanishing point perception. For the visual odometer, the ambiguous scale arising when the homography between consecutive images is used to observe the translation is resolved using two different methods. First, the scale is computed using a special configuration intended for indoors. Second, the scale is resolved using differenced GNSS carrier phase measurements of the camera in a method aimed at urban environments, where GNSS cannot perform alone because tall buildings block the required line-of-sight to four satellites. The use of visual perception, however, provides position information by exploiting a minimum of two satellites, and therefore the availability of the navigation solution is substantially increased. Both methods are sufficiently tolerant of the challenges of visual perception in indoor and urban environments, namely low lighting and dynamic objects hindering the view. The heading and translation are further integrated with other positioning systems to obtain a navigation solution. The performance of the proposed vision-aided navigation was tested in various environments, indoors and in urban canyons, to demonstrate its effectiveness. These experiments, although of limited duration, show that visual processing efficiently complements other positioning technologies to provide better pedestrian navigation accuracy and reliability
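The geometric core of the visual gyroscope, intersecting the images of parallel scene lines to find a vanishing point and reading a heading off it, can be sketched in homogeneous coordinates. This is an illustrative sketch, not the thesis code; the camera intrinsics and pixel coordinates below are made-up values.

```python
import numpy as np

# Illustrative sketch: two image line segments that are parallel in the scene
# intersect, in the image, at a vanishing point. Back-projecting that point
# with the camera intrinsics K gives the 3D direction of the lines, from
# which a relative heading (yaw) can be read. All numbers are assumptions.

K = np.array([[800.0, 0.0, 320.0],   # assumed pinhole intrinsics (pixels)
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def line_through(p, q):
    """Homogeneous image line through two pixel points."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line1, line2):
    """Homogeneous intersection of two image lines."""
    vp = np.cross(line1, line2)
    return vp / vp[2]

def heading_from_vp(vp, K):
    """Yaw of the scene direction (camera frame: x right, z forward)."""
    d = np.linalg.inv(K) @ vp        # back-project pixel to a 3D direction
    return np.arctan2(d[0], d[2])    # radians; 0 = straight ahead

# Two segments of a corridor edge pair, converging to the right of centre.
l1 = line_through((100, 300), (320, 260))
l2 = line_through((100, 180), (320, 220))
vp = vanishing_point(l1, l2)
yaw = heading_from_vp(vp, K)
```

Tracking how `yaw` changes between consecutive frames gives a relative heading rate, which is why the construction acts as a "visual gyroscope" rather than an absolute compass.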
Multi-Antenna Vision-and-Inertial-Aided CDGNSS for Micro Aerial Vehicle Pose Estimation
A system is presented for multi-antenna carrier phase differential GNSS (CDGNSS)-based pose (position and orientation) estimation aided by monocular visual measurements and a smartphone-grade inertial sensor. The system is designed for micro aerial vehicles, but can be applied generally for low-cost, lightweight, high-accuracy, geo-referenced pose estimation. Visual and inertial measurements enable robust operation despite GNSS degradation by constraining uncertainty in the dynamics propagation, which improves fixed-integer CDGNSS availability and reliability in areas with limited sky visibility. No prior work has demonstrated an increased CDGNSS integer fixing rate when incorporating visual measurements with smartphone-grade inertial sensing. A central pose estimation filter receives measurements from separate CDGNSS position and attitude estimators, visual feature measurements based on the ROVIO measurement model, and inertial measurements. The filter's pose estimates are fed back as a prior for CDGNSS integer fixing. A performance analysis under both simulated and real-world GNSS degradation shows that visual measurements greatly increase the availability and accuracy of low-cost inertial-aided CDGNSS pose estimation.
The four key challenges of advanced multisensor navigation and positioning
The next generation of navigation and positioning systems must provide greater accuracy and reliability in a range of challenging environments to meet the needs of a variety of mission-critical applications. No single navigation technology is robust enough to meet these requirements on its own, so a multisensor solution is required. Although many new navigation and positioning methods have been developed in recent years, little has been done to bring them together into a robust, reliable, and cost-effective integrated system. To achieve this, four key challenges must be met: complexity, context, ambiguity, and environmental data handling. This paper addresses each of these challenges. It describes the problems, discusses possible approaches, and proposes a program of research and standardization activities to solve them. The discussion is illustrated with results from research into urban GNSS positioning, GNSS shadow matching, environmental feature matching, and context detection
Robust Positioning in the Presence of Multipath and NLOS GNSS Signals
GNSS signals can be blocked and reflected by nearby objects, such as buildings, walls, and vehicles. They can also be reflected by the ground and by water. These effects are the dominant source of GNSS positioning errors in dense urban environments, though they can have an impact almost anywhere. Non-line-of-sight (NLOS) reception occurs when the direct path from the transmitter to the receiver is blocked and signals are received only via a reflected path. Multipath interference occurs, as the name suggests, when a signal is received via multiple paths. This can be via the direct path and one or more reflected paths, or it can be via multiple reflected paths. As their error characteristics are different, NLOS and multipath interference typically require different mitigation techniques, though some techniques are applicable to both. Antenna design and advanced receiver signal processing techniques can substantially reduce multipath errors. Unless an antenna array is used, NLOS reception has to be detected using the receiver's ranging and carrier-power-to-noise-density ratio (C/N0) measurements and mitigated within the positioning algorithm. Some NLOS mitigation techniques can also be used to combat severe multipath interference. Multipath interference, but not NLOS reception, can also be mitigated by comparing or combining code and carrier measurements, comparing ranging and C/N0 measurements from signals on different frequencies, and analyzing the time evolution of the ranging and C/N0 measurements
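One of the code/carrier comparisons mentioned above, the code-minus-carrier combination, can be sketched on synthetic data. This is an illustrative example, not from the paper: subtracting the carrier-phase range from the pseudorange cancels the geometry and clock terms, leaving twice the ionospheric delay, a constant carrier ambiguity, and the code multipath error; detrending then exposes multipath excursions. All numbers below are made up.

```python
import numpy as np

# Illustrative sketch of a code-minus-carrier (CMC) multipath indicator.
# Pseudorange and carrier range share geometry/clock terms, so their
# difference isolates ambiguity + ionosphere + code multipath; removing
# the mean (a crude detrend) exposes the multipath. Synthetic data only.

def cmc_multipath(pseudorange_m, carrier_range_m):
    """Return a detrended code-minus-carrier multipath indicator (metres)."""
    cmc = np.asarray(pseudorange_m) - np.asarray(carrier_range_m)
    return cmc - np.mean(cmc)  # removes the constant ambiguity offset

# Synthetic epochs: smooth geometry, a constant ambiguity bias on the
# carrier, and a 3 m multipath excursion on the code after epoch 40.
rng_m = np.linspace(2.0e7, 2.0e7 + 300.0, 60)          # receiver-satellite range
multipath = np.where(np.arange(60) >= 40, 3.0, 0.0)    # step-like code error
pseudo = rng_m + 5.0 + multipath                       # code: clock + multipath
carrier = rng_m + 5.0 - 12.3                           # carrier: ambiguity bias
indicator = cmc_multipath(pseudo, carrier)
```

In practice a polynomial or high-pass detrend would replace the simple mean removal, since the ionospheric term drifts slowly, but the 3 m step remains clearly visible in the indicator either way.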