
    Improvement Schemes for Indoor Mobile Location Estimation: A Survey

    Location estimation is significant in mobile and ubiquitous computing systems. The complexity and smaller scale of the indoor environment have a great impact on location estimation. The key to location estimation lies in the representation and fusion of uncertain information from multiple sources. Improving location estimation is a complicated and comprehensive issue, and a great deal of research has been done to address it. However, existing research typically focuses on certain aspects of the problem and on specific methods. This paper reviews mainstream schemes for improving indoor location estimation from multiple levels and perspectives, combining existing work with our own working experience. Initially, we analyze the error sources of common indoor localization techniques and provide a multilayered conceptual framework of improvement schemes for location estimation. This is followed by a discussion of probabilistic methods for location estimation, including Bayes filters, Kalman filters, extended Kalman filters, sigma-point Kalman filters, particle filters, and hidden Markov models. Then, we investigate hybrid localization methods, including multimodal fingerprinting, triangulation fusing multiple measurements, combination of wireless positioning with pedestrian dead reckoning (PDR), and cooperative localization. Next, we focus on location determination approaches that fuse spatial contexts, namely map matching, landmark fusion, and spatial model-aided methods. Finally, we present directions for future research.
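    The survey's probabilistic toolbox starts with the Kalman filter; the minimal sketch below applies one to noisy 2D indoor position fixes under a constant-velocity motion model. The matrices and noise levels are illustrative assumptions for the sketch, not values taken from the paper.

```python
# Minimal Kalman filter over 2D position fixes; constant-velocity model.
# All parameters are assumed for illustration.
import numpy as np

dt = 1.0                                 # time step in seconds (assumed)
F = np.array([[1, 0, dt, 0],             # state transition for [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],              # only position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)                     # process noise (assumed)
R = 1.0 * np.eye(2)                      # measurement noise (assumed)

x = np.zeros(4)                          # initial state
P = 10.0 * np.eye(4)                     # initial covariance

def kf_step(x, P, z):
    """One predict/update cycle on a raw position fix z = [x, y]."""
    x, P = F @ x, F @ P @ F.T + Q        # predict
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # update state
    P = (np.eye(4) - K @ H) @ P          # update covariance
    return x, P

for z in ([1.1, 0.9], [2.0, 2.2], [2.9, 3.1]):
    x, P = kf_step(x, P, np.array(z))
print(x[:2])                             # smoothed position estimate
```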

    Frequency Modulated Continuous Wave Radar and Video Fusion for Simultaneous Localization and Mapping

    There has recently been a push to develop technology that enables the use of UAVs in GPS-denied environments. As UAVs become smaller, there is a need to reduce the number and size of the sensor systems on board. A video camera on a UAV can serve multiple purposes. It can return imagery for processing by human users, and the highly accurate bearing information provided by video makes it a useful tool to incorporate into a navigation and tracking system. Radars can provide information about the types of objects in a scene and can operate in adverse weather conditions. The range and velocity measurements provided by the radar make it a good tool for navigation. FMCW radar and color video were fused to perform SLAM in an outdoor environment. A radar SLAM solution provided the basis for the fusion. Correlations between radar returns were used to estimate dead-reckoning parameters to obtain an estimate of the platform location. A new constraint was added in the radar detection process to prevent detecting poorly observable reflectors while maintaining a large number of measurements on highly observable reflectors. The radar measurements were mapped as landmarks, further improving the platform location estimates. As images were received from the video camera, changes in platform orientation were estimated, further improving the platform orientation estimates. The expected locations of radar measurements, whose uncertainty was modeled as Gaussian, were projected onto the images and used to estimate the location of the radar reflector in the image. The colors of the most likely reflector were saved and used to detect the reflector in subsequent images. The azimuth angles obtained from the image detections were used to improve the estimates of the landmarks in the SLAM map over previous estimates where only the radar was used.
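    The projection step described above lends itself to a short sketch: a radar landmark with Gaussian uncertainty is projected through a pinhole camera model, and the propagated 2x2 pixel covariance bounds the image region searched for the reflector. The intrinsics, landmark position, and covariances below are illustrative assumptions, not the thesis's calibration.

```python
# Project a 3D landmark with Gaussian uncertainty into the image plane
# and derive a 2-sigma search gate. All values are assumed.
import numpy as np

K = np.array([[800, 0, 320],              # assumed camera intrinsics
              [0, 800, 240],
              [0, 0, 1]], dtype=float)

def project_landmark(p_cam, P_cam):
    """Project a landmark mean/covariance (camera frame) into pixel space."""
    u = K @ p_cam
    u = u[:2] / u[2]                      # pinhole projection
    x, y, z = p_cam
    J = np.array([[K[0, 0] / z, 0, -K[0, 0] * x / z**2],
                  [0, K[1, 1] / z, -K[1, 1] * y / z**2]])
    S = J @ P_cam @ J.T                   # first-order pixel covariance
    return u, S

p = np.array([2.0, -1.0, 15.0])           # landmark in camera frame, metres (assumed)
P = np.diag([0.5, 0.5, 2.0])              # radar landmark uncertainty (assumed)
u, S = project_landmark(p, P)
vals, _ = np.linalg.eigh(S)               # axes of the 2-sigma colour-search gate
print("expected pixel:", u, "2-sigma axes:", 2 * np.sqrt(vals))
```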

    Contributions to automated realtime underwater navigation

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2012. This dissertation presents three separate but related contributions to the art of underwater navigation. These methods may be used in postprocessing with a human in the loop, but the overarching goal is to enhance vehicle autonomy, so the emphasis is on automated approaches that can be used in realtime. The three research threads are: i) in situ navigation sensor alignment, ii) dead reckoning through the water column, and iii) model-driven delayed measurement fusion. Contributions to each of these areas have been demonstrated in simulation, with laboratory data, or in the field; some have been demonstrated in all three arenas. The solution to the in situ navigation sensor alignment problem is an asymptotically stable adaptive identifier formulated using rotors in Geometric Algebra. This identifier is applied to precisely estimate the unknown alignment between a gyrocompass and a Doppler velocity log, with the goal of improving realtime dead reckoning navigation. Laboratory and field results show the identifier performs comparably to previously reported methods using rotation matrices, providing an alignment estimate that reduces the position residuals between dead reckoning and an external acoustic positioning system. The Geometric Algebra formulation also encourages a straightforward interpretation of the identifier as a proportional feedback regulator on the observable output error. Future applications of the identifier may include alignment between inertial, visual, and acoustic sensors. The ability to link the Global Positioning System at the surface to precision dead reckoning near the seafloor might enable new kinds of missions for autonomous underwater vehicles. This research introduces a method for dead reckoning through the water column using water current profile data collected by an onboard acoustic Doppler current profiler. Overlapping relative current profiles provide information to simultaneously estimate the vehicle velocity and the local ocean current; the vehicle velocity is then integrated to estimate position. The method is applied to field data using online bin-average, weighted least squares, and recursive least squares implementations. This demonstrates an autonomous navigation link between the surface and the seafloor without any dependence on a ship or external acoustic tracking systems. Finally, in many state estimation applications, delayed measurements present an interesting challenge. Underwater navigation is a particularly compelling case because of the relatively long delays inherent in all available position measurements. This research develops a flexible, model-driven approach to delayed measurement fusion in realtime Kalman filters. Using a priori estimates of delayed measurements as augmented states minimizes the computational cost of the delay treatment. Managing the augmented states with time-varying conditional process and measurement models ensures the approach works within the proven Kalman filter framework, without altering the filter structure or requiring any ad hoc adjustments. The end result is a mathematically principled treatment of the delay that leads to more consistent estimates with lower error and uncertainty.
    Field results from dead reckoning aided by acoustic positioning systems demonstrate the applicability of this approach to real-world problems in underwater navigation. I have been financially supported by the National Defense Science and Engineering Graduate (NDSEG) Fellowship administered by the American Society for Engineering Education, the Edwin A. Link Foundation Ocean Engineering and Instrumentation Fellowship, and the WHOI Academic Programs office.
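    The augmented-state idea behind research thread (iii) can be sketched in a few lines: when a position measurement is taken but will arrive late, the filter clones the current position into an augmented state, propagates normally, and later updates against the clone. The 1D toy model and noise values below are assumptions; the dissertation's formulation is more general.

```python
# Delayed measurement fusion via a frozen augmented state. Toy 1D model;
# all noise values are assumed.
import numpy as np

x = np.array([0.0, 1.0])                  # [position, velocity]
P = np.diag([1.0, 0.1])
dt, q, r = 1.0, 0.01, 0.25                # step and noise levels (assumed)

# Augment: clone the current position as a frozen extra state so the
# measurement taken now can be applied when it finally arrives.
x = np.append(x, x[0])
P = np.pad(P, ((0, 1), (0, 1)))
P[2, :] = P[0, :]                         # clone cross-covariances
P[:, 2] = P[:, 0]

F = np.array([[1, dt, 0],                 # the clone does not evolve
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
Q = q * np.diag([1.0, 1.0, 0.0])          # no process noise on the clone
for _ in range(3):                        # dead reckon during the delay
    x, P = F @ x, F @ P @ F.T + Q

# The delayed measurement of the *old* position arrives and updates the
# clone, which corrects the current state through the cross-covariances.
z = 0.4                                   # assumed acoustic position fix
H = np.array([[0.0, 0.0, 1.0]])
S = H @ P @ H.T + r
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (np.atleast_1d(z) - H @ x)
P = (np.eye(3) - K @ H) @ P
print(x[:2])                              # corrected current state
```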

    Robust localization with wearable sensors

    Measuring physical movements of humans and understanding human behaviour is useful in a variety of areas and disciplines. Human inertial tracking is a method that can be leveraged for monitoring complex actions that emerge from interactions between human actors and their environment. An accurate estimation of motion trajectories can support new approaches to pedestrian navigation, emergency rescue, athlete management, and medicine. However, tracking with wearable inertial sensors has several problems that need to be overcome, such as the low accuracy of consumer-grade inertial measurement units (IMUs), the error accumulation problem in long-term tracking, and the artefacts generated by less common movements. This thesis focusses on measuring human movements with wearable head-mounted sensors to accurately estimate the physical location of a person over time. The research consisted of (i) providing an overview of the current state of research for inertial tracking with wearable sensors, (ii) investigating the performance of new tracking algorithms that combine sensor fusion and data-driven machine learning, (iii) eliminating the effect of random head motion during tracking, (iv) creating robust long-term tracking systems with a Bayesian neural network and sequential Monte Carlo method, and (v) verifying that the system can be applied with changing modes of behaviour, defined as natural transitions from walking to running and vice versa. This research introduces a new system for inertial tracking with head-mounted sensors (which can be placed in, e.g., helmets, caps, or glasses). This technology can be used for long-term positional tracking to explore complex behaviours.
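    The sequential Monte Carlo component mentioned in (iv) can be illustrated with a basic particle filter over 2D pedestrian position, driven by step-length and heading estimates (which, in the thesis, come from learned models rather than the fixed values assumed here). All noise parameters below are illustrative assumptions.

```python
# Basic particle filter (sequential Monte Carlo) for 2D pedestrian
# tracking from step length and heading. Values are assumed.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
particles = np.zeros((N, 2))             # 2D positions
weights = np.full(N, 1.0 / N)

def propagate(particles, step_len, heading):
    """Move each particle by a noisy step (assumed noise levels)."""
    L = step_len + rng.normal(0, 0.05, N)
    th = heading + rng.normal(0, 0.1, N)
    particles[:, 0] += L * np.cos(th)
    particles[:, 1] += L * np.sin(th)
    return particles

def update(particles, weights, z, sigma=0.5):
    """Reweight by a position observation z (e.g., a map or RF fix)."""
    d2 = np.sum((particles - z) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / sigma**2)
    weights /= weights.sum()
    # Systematic-style resampling when the effective sample size is low
    if 1.0 / np.sum(weights**2) < N / 2:
        idx = rng.choice(N, N, p=weights)
        particles[:] = particles[idx]
        weights[:] = 1.0 / N
    return particles, weights

particles = propagate(particles, step_len=0.7, heading=0.0)
particles, weights = update(particles, weights, z=np.array([0.7, 0.0]))
print(weights @ particles)               # weighted mean position estimate
```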

    Dense real-time 3D reconstruction from multiple images

    The rapid development of computer graphics and acquisition technologies has led to the widespread use of 3D models. Techniques for 3D reconstruction from multiple views aim to recover the structure of a scene and the position and orientation (motion) of the camera using only the geometrical constraints in 2D images. This problem, known as Structure from Motion (SfM), has been the focus of a great deal of research effort in recent years; however, the automatic, dense, real-time and accurate reconstruction of a scene is still a major research challenge. This thesis presents work that targets the development of efficient algorithms to produce high quality and accurate reconstructions, introducing new computer vision techniques for camera motion calibration, dense SfM reconstruction and dense real-time 3D reconstruction. In SfM, a second challenge is to build an effective reconstruction framework that provides dense and high quality surface modelling. This thesis develops a complete, automatic and flexible system with a simple user interface of `raw images to 3D surface representation'. As part of the proposed image reconstruction approach, this thesis introduces an accurate and reliable region-growing algorithm to propagate the dense matching points from the sparse key points among all stereo pairs. This dense 3D reconstruction proposal addresses the deficiencies of existing SfM systems built on sparsely distributed 3D point clouds, which are insufficient for reconstructing a complete 3D model of a scene. Existing SfM reconstruction methods perform a bundle adjustment optimization of the global geometry in order to obtain an accurate model. Such an optimization is very computationally expensive and cannot be implemented in a real-time application. Extended Kalman Filter (EKF) Simultaneous Localization and Mapping (SLAM) considers the problem of concurrently estimating, in real time, the structure of the surrounding world as perceived by moving sensors (cameras), while simultaneously localizing within it. However, standard EKF-SLAM techniques are susceptible to errors introduced during the state prediction and measurement prediction linearization.
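    The linearization issue raised in the last sentence is easy to see in a toy EKF-SLAM update: the nonlinear range-bearing measurement of a landmark is replaced by its first-order Jacobian, and the update is only as good as that approximation near the current estimate. All values below are illustrative assumptions.

```python
# Toy EKF landmark update showing where linearization enters.
# Values are assumed for illustration.
import numpy as np

def h(cam, lm):
    """Nonlinear measurement: range and bearing from camera to landmark."""
    dx, dy = lm - cam
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx)])

def H_jacobian(cam, lm):
    """Jacobian of h with respect to the landmark position."""
    dx, dy = lm - cam
    q = dx**2 + dy**2
    r = np.sqrt(q)
    return np.array([[dx / r, dy / r],
                     [-dy / q, dx / q]])

cam = np.array([0.0, 0.0])
lm = np.array([4.0, 3.0])                # current landmark estimate (assumed)
P = 0.3 * np.eye(2)                      # landmark covariance (assumed)
R = np.diag([0.1, 0.01])                 # measurement noise (assumed)

z = h(cam, np.array([4.2, 2.9]))         # observation of the true landmark
Hj = H_jacobian(cam, lm)                 # first-order approximation of h
S = Hj @ P @ Hj.T + R
K = P @ Hj.T @ np.linalg.inv(S)
lm_new = lm + K @ (z - h(cam, lm))       # EKF update; its accuracy depends on
print(lm_new)                            # how well Hj approximates h near lm
```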

    Model-Based Control Using Model and Mechanization Fusion Techniques for Image-Aided Navigation

    Unmanned aerial vehicles are no longer used for just reconnaissance. Current requirements call for smaller autonomous vehicles that replace the human in high-risk activities. Many times these activities are performed in GPS-degraded environments. Without GPS providing today's most accurate navigation solution, autonomous navigation in tight areas is more difficult. Today, image-aided navigation is used, and other methods are being explored, to navigate more accurately in such areas (e.g., indoors). This thesis explores the use of inertial measurements and navigation solution updates using cameras with a model-based Linear Quadratic Gaussian controller. To demonstrate the methods behind this research, the controller provides inputs to a micro-sized helicopter that allow the vehicle to maintain hover. A new method for obtaining a more accurate navigation solution was devised, originating from the following basic setup. To begin, a nonlinear system model was identified for a micro-sized, commercial, off-the-shelf helicopter. This model was verified, then linearized about the hover condition to construct a Linear Quadratic Regulator (LQR). The state error estimates, provided by an Unscented Kalman Filter using simulated image measurement updates, are used to update the navigation solution provided by inertial measurement sensors using strapdown mechanization equations. The navigation solution is used with a reference signal to determine the position and heading error. This error, along with other states, is fed to the LQR, which controls the helicopter. Research revealed that by combining the navigation solution from the INS mechanization block with a model-based navigation solution, and combining the INS error model and system model during the time propagation in the UKF, the navigation solution error decreases by 20%. The equations used for this modification stem from state and covariance combination methods utilized in the Federated Kalman Filter.
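    The state and covariance combination borrowed from the Federated Kalman Filter can be sketched as an information-weighted fusion of two navigation solutions: each estimate is weighted by its inverse covariance, so the more certain solution dominates. The vectors and covariances below are illustrative assumptions, not results from the thesis.

```python
# Information-weighted combination of two navigation solutions, in the
# spirit of Federated Kalman Filter fusion. Values are assumed.
import numpy as np

x_ins = np.array([10.2, 5.1, -2.0])       # INS mechanization solution (assumed)
P_ins = np.diag([4.0, 4.0, 1.0])          # its covariance (assumed)

x_model = np.array([9.8, 5.4, -1.8])      # model-based solution (assumed)
P_model = np.diag([1.0, 1.0, 2.0])        # its covariance (assumed)

# Fusion: P = (P1^-1 + P2^-1)^-1,  x = P (P1^-1 x1 + P2^-1 x2)
I1, I2 = np.linalg.inv(P_ins), np.linalg.inv(P_model)
P_fused = np.linalg.inv(I1 + I2)
x_fused = P_fused @ (I1 @ x_ins + I2 @ x_model)
print(x_fused, np.diag(P_fused))          # fused estimate, tighter covariance
```

    Note that this simple form assumes the two solutions carry independent errors; a full federated design manages the shared information explicitly.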

    Active Perception for Autonomous Systems: In a Deep Space Navigation Scenario

    Autonomous systems typically pursue certain goals for an extended amount of time in a self-sustainable fashion. To this end, they are equipped with a set of sensors and actuators to perceive certain aspects of the world and thereupon manipulate it in accordance with some given goals. This kind of interaction can be thought of as a closed loop in which a perceive-reason-act process takes place. The bi-directional interface between an autonomous system and the outer world is then given by a sequence of imperfect observations of the world and corresponding controls which are likewise imperfectly actuated. To be able to reason in such a setting, it is customary for an autonomous system to maintain a probabilistic state estimate. The quality of the estimate -- or its uncertainty -- is, in turn, dependent on the information acquired within the perceive-reason-act loop described above. Hence, this thesis strives to investigate the question of how to actively steer such a process in order to maximize the quality of the state estimate. The question will be approached by introducing different probabilistic state estimation schemes jointly working on a manifold-based encapsulated state representation. On top of the resultant state estimate, different active perception approaches are introduced, which determine optimal actions with respect to uncertainty minimization. The informational value of the particular actions is given by the expected impact of measurements on the uncertainty. The latter can be obtained by different direct and indirect measures, which will be introduced and discussed. The active perception schemes for autonomous systems will be investigated with a focus on two specific deep space navigation scenarios deduced from a potential mining mission to the main asteroid belt. In the first scenario, active perception strategies are proposed which foster the correctional value of the sensor information acquired within a heliocentric navigation approach. Here, the expected impact of measurements is directly estimated, thus omitting counterfactual updates of the state based on hypothetical actions. Numerical evaluations of this scenario show that active perception is beneficial, i.e., the quality of the state estimate is increased. In addition, it is shown that the more uncertain a state estimate is, the more the value of active perception increases. In the second scenario, active autonomous deep space navigation in the vicinity of asteroids is investigated. A trajectory and a map are jointly estimated by a Graph SLAM algorithm based on measurements of a 3D Flash-LiDAR. The active perception strategy seeks to trade off exploration of the asteroid against localization performance. To this end, trajectories are generated as well as evaluated in a novel twofold approach specifically tailored to the scenario. Finally, the position uncertainty can be extracted from the graph structure and subsequently used to dynamically control the trade-off between localization and exploration. In a numerical evaluation, it is shown that the localization performance of the Graph SLAM approach to navigation in the vicinity of asteroids is generally high. Furthermore, the active perception strategy is able to trade off localization performance against the degree of exploration of the asteroid. Finally, when the latter process is dynamically controlled based on the current localization uncertainty, a joint improvement of localization as well as exploration performance can be achieved.
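    The action selection principle described above can be made concrete with a small sketch: under a linear-Gaussian measurement model, the covariance left behind by a Kalman update does not depend on the actual measurement value, so candidate actions can be scored by the posterior uncertainty they would produce. The state, actions, and noise values below are toy assumptions, not the thesis's models.

```python
# Score candidate sensing actions by the expected posterior uncertainty
# (trace of covariance) they would leave behind. Values are assumed.
import numpy as np

P = np.diag([9.0, 4.0, 1.0])            # current state uncertainty (assumed)

# Each candidate action observes one state component with its own noise.
actions = {
    "observe_x": (np.array([[1.0, 0, 0]]), 0.5),
    "observe_y": (np.array([[0, 1.0, 0]]), 0.5),
    "observe_z": (np.array([[0, 0, 1.0]]), 0.1),
}

def posterior_trace(P, H, r):
    """Trace of covariance after a hypothetical update with noise r."""
    S = H @ P @ H.T + r
    K = P @ H.T @ np.linalg.inv(S)
    return np.trace((np.eye(len(P)) - K @ H) @ P)

best = min(actions, key=lambda a: posterior_trace(P, *actions[a]))
print(best)   # the action with the largest expected uncertainty reduction
```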
    In addition, this thesis comprises an excursion into active sensorimotor object recognition. A sensorimotor feature is derived from biological principles of the human perceptual system. This feature is then employed in different probabilistic classification schemes. Furthermore, it enables the implementation of an active perception strategy, which can be thought of as a feature selection process in a classification scheme. It is shown that those strategies might be driven by top-down factors, i.e., based on previously learned information, or by bottom-up factors, i.e., based on saliency detected in the currently considered data. Evaluations are conducted based on real data acquired by a camera mounted on a robotic arm, as well as on datasets. It is shown that the integrated representation of perception and action fosters classification performance and that the application of an active perception strategy accelerates the classification process.

    Autonomous vision-based terrain-relative navigation for planetary exploration

    Abstract: The interest of the world's major space agencies in vision sensors for their mission designs has been increasing over the years. Indeed, cameras offer an efficient solution to address ever-increasing performance requirements. In addition, these sensors are multipurpose, lightweight, proven and low-cost. Several researchers in vision sensing for space applications currently focus on navigation systems for autonomous pin-point planetary landing and for sample-return missions to small bodies. In fact, without a Global Positioning System (GPS) or radio beacons around celestial bodies, high-accuracy navigation around them is a complex task. Most navigation systems are based only on accurate initialization of the states and on the integration of acceleration and angular rate measurements from an Inertial Measurement Unit (IMU). This strategy can track sudden motions of short duration very accurately, but the estimate diverges in time and normally leads to large landing errors. In order to improve navigation accuracy, many authors have proposed fusing the IMU measurements with vision measurements using state estimators, such as Kalman filters. The first proposed vision-based navigation approach relies on feature tracking between sequences of images taken in real time during orbiting and/or landing operations. In that case, image features are image pixels that have a high probability of being recognized between images taken from different camera locations. By detecting and tracking these features through a sequence of images, the relative motion of the spacecraft can be determined. This technique, referred to as Terrain-Relative Relative Navigation (TRRN), relies on relatively simple, robust and well-developed image processing techniques. It allows the determination of the relative motion (velocity) of the spacecraft. Despite the fact that this technology has been demonstrated with space-qualified hardware, its gain in accuracy remains limited, since the spacecraft's absolute position is not observable from the vision measurements. The vision-based navigation techniques currently studied consist of identifying features and mapping them into an on-board cartographic database indexed by an absolute coordinate system, thereby providing absolute position determination. This technique, referred to as Terrain-Relative Absolute Navigation (TRAN), relies on very complex Image Processing Software (IPS) with an obvious lack of robustness. In fact, this software often depends on the spacecraft attitude and position, is sensitive to illumination conditions (the elevation and azimuth of the Sun when the geo-referenced database is built must be similar to those present during the mission), is greatly influenced by image noise, and struggles to manage the multiple varieties of terrain seen during the same mission (the spacecraft can fly over plains as well as mountainous regions, and the images may contain old craters with noisy rims as well as young craters with clean rims, and so on). At this time, no real-time hardware-in-the-loop experiment has been conducted to demonstrate the applicability of this technology to space missions. The main objective of the current study is to develop autonomous vision-based navigation algorithms that provide absolute position and surface-relative velocity during the proximity operations of a planetary mission (orbiting phase and landing phase) using a combined approach of TRRN and TRAN technologies.
    The contributions of the study are: (1) reference mission definition, (2) advancements in TRAN theory (image processing as well as state estimation) and (3) practical implementation of vision-based navigation. Résumé (translated): The interest of the major space agencies in computer vision technologies keeps growing. Indeed, cameras offer an efficient solution for meeting the ever-higher performance requirements of space missions. Moreover, these sensors are multipurpose, lightweight, proven and inexpensive. Several researchers in computer vision are currently focusing on autonomous systems for precision planetary landing and on sampling missions to asteroids. Indeed, without a Global Positioning System (GPS) or radio beacons around these celestial bodies, precision navigation is a very complex task. Most navigation systems are based solely on the integration of measurements from an inertial measurement unit. This strategy can only track the motion of the space vehicle over a short duration, because the estimated data diverge rapidly. In order to improve navigation accuracy, several authors have proposed fusing the measurements from the inertial unit with measurements taken from images of the terrain. The first terrain-imagery navigation algorithms that were proposed rely on the extraction and tracking of features in a sequence of images taken in real time during the orbiting and/or landing phases of the mission. In this case, the image features correspond to pixels that have a high probability of being recognized between images taken from different camera positions. By detecting and tracking these features, the relative displacement (velocity) of the vehicle can be determined. These techniques, called relative navigation, use robust, easy-to-implement and well-developed image processing algorithms. Although this technology has been proven on space-qualified hardware, the gain in accuracy remains limited, given that the absolute position of the vehicle is not observable in the measurements extracted from the image. The vision-based navigation techniques currently under study consist of identifying features in the image and matching them with those contained in a geo-referenced database so as to provide an absolute position measurement to the navigation filter. However, this technique, called absolute navigation, involves the use of very complex image processing algorithms that currently suffer from robustness problems. Indeed, these algorithms often depend on the position and attitude of the vehicle. They are very sensitive to illumination conditions (the elevation and azimuth of the Sun present when the geo-referenced database is built must be similar to those observed during the mission).
    They are greatly influenced by noise in the image and, finally, they cope poorly with the multiple varieties of terrain encountered during the same mission (the vehicle may fly over plains as well as mountainous regions; the images may contain old craters with blurred rims as well as young craters with well-defined rims, etc.). Moreover, no experiment in real time and on space-qualified hardware has yet been carried out to demonstrate the applicability of this technology to space missions. Consequently, the main objective of this research project is to develop an autonomous terrain-imagery navigation system that provides the absolute position and terrain-relative velocity of a space vehicle during low-altitude operations over a planet. The contributions of this work are: (1) the definition of a reference mission, (2) advancement of the theory of terrain-imagery navigation (image processing algorithms and state estimation) and (3) practical implementation of this technology.
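    A minimal sketch of the combined TRRN/TRAN idea, under assumed noise values: a single Kalman filter integrates high-rate relative-velocity measurements (feature tracking) and occasionally fuses in absolute-position fixes (map matching), which keeps the position error bounded. This is a 1D toy model, not the filter developed in the thesis.

```python
# Toy 1D filter fusing TRRN-style velocity with sparse TRAN-style
# absolute position fixes. All values are assumed.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # [position, velocity]
Q = np.diag([1e-4, 1e-3])                 # process noise (assumed)
x, P = np.zeros(2), np.eye(2)

def kf_update(x, P, z, H, R):
    """Standard Kalman update with measurement z, model H, noise R."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

H_vel = np.array([[0.0, 1.0]])            # TRRN: relative velocity
H_pos = np.array([[1.0, 0.0]])            # TRAN: absolute position

for k in range(100):
    x, P = F @ x, F @ P @ F.T + Q         # IMU-style propagation
    x, P = kf_update(x, P, np.array([1.0]), H_vel, np.array([[0.01]]))
    if k % 20 == 0:                       # absolute fixes arrive less often
        x, P = kf_update(x, P, np.array([k * dt]), H_pos, np.array([[0.5]]))
print(x, np.diag(P))                      # position uncertainty stays bounded
```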

    Modeling and State Estimation of Lithium-Ion Battery Packs for Application in Battery Management Systems

    As lithium-ion (Li-Ion) battery packs grow in popularity, so do concerns about their safety, reliability, and cost. An efficient and robust battery management system (BMS) can help ease these concerns. By measuring the voltage, temperature, and current for each cell, the BMS can balance the battery pack and ensure it is operating within the safety limits. In addition, these measurements can be used to estimate the remaining charge in the battery (state-of-charge (SOC)) and determine the health of the battery (state-of-health (SOH)). Accurate estimation of these battery and system variables can help improve the safety and reliability of the energy storage system (ESS). This research aims to develop high-fidelity battery models and robust SOC and SOH algorithms that have low computational cost and require minimal training data. More specifically, this work focuses on SOC and SOH estimation at the pack level, as well as modeling and simulation of a battery pack. An accurate and computationally efficient Li-Ion battery model can be highly beneficial when developing SOC and SOH algorithms on the BMS. These models allow for software-in-the-loop (SIL) and hardware-in-the-loop (HIL) testing, where the battery pack is simulated in software. However, development of these battery models can be time-consuming, especially when trying to model the effect of temperature and SOC on the equivalent circuit model (ECM) parameters. Estimation of this relationship is often accomplished by carrying out a large number of experiments, which can be too costly for many BMS manufacturers. Therefore, the first contribution of this research is to develop a comprehensive battery model, where the ECM parameter surface is generated using a set of carefully designed experiments. This technique is compared with existing approaches from the literature, and it is shown that by using the proposed method, the same degree of accuracy can be obtained while requiring significantly fewer experimental runs. This can be advantageous for BMS manufacturers that require a high-fidelity model but cannot afford to carry out a large number of experiments. Once a comprehensive model had been developed for SIL and HIL testing, research was carried out to advance SOH and SOC algorithms. With respect to SOH, research was conducted on developing a steady and reliable SOH metric that can be determined at the cell level and is stable at different battery operating conditions. To meet these requirements, a moving-window direct resistance estimation (DRE) algorithm is utilized, where the resistance is estimated only when the battery experiences rapid current transients. The DRE approach is then compared with more advanced resistance estimation techniques such as the extended Kalman filter (EKF) and recursive least squares (RLS). It is shown that by using the proposed algorithm, the same degree of accuracy can be achieved as with the more advanced methods. The DRE algorithm does, however, have a much lower computational complexity and, therefore, can be implemented on a battery pack composed of hundreds of cells. Research has also been conducted on converting these raw resistance values into a stable SOH metric. First, an outlier removal technique is proposed for removing any outliers in the resistance estimates; specifically, outliers that are an artifact of the sampling rate. The technique involves using an adaptive control chart, where the bounds on the control chart change as the internal resistance of the battery varies during operation.
    An exponentially weighted moving average (EWMA) is then applied to filter out the noise present in the raw estimates. Finally, the resistance values are filtered once more based on temperature and battery SOC. This additional filtering ensures that the SOH value is independent of the battery operating conditions. The proposed SOH framework was validated over a 27-day period for a lithium iron phosphate (LFP) battery. The results show an accurate estimation of battery resistance over time, with a mean error of 1.1%, as well as a stable SOH metric. The findings are significant for BMS developers who have limited computational resources but still require a robust and reliable SOH algorithm. Concerning SOC, most publications in the literature examine SOC estimation at the cell level. Determining the SOC for a battery pack can be challenging, especially obtaining an estimate that behaves logically from the battery user's perspective. This work proposes a three-level approach, where the final output from the algorithm is a well-behaved pack SOC estimate. The first level utilizes an EKF for estimating SOC, while an RLS approach is used to adapt the model parameters. To reduce computational time, both algorithms are executed on two specific cells: the first cell to charge to full and the first cell to discharge to empty. The second level consists of using the SOC estimates from these two cells to estimate a pack SOC value. Finally, a novel adaptive coulomb counting approach is proposed to ensure the pack SOC estimate behaves logically. The accuracy of the algorithm is tested using a 40 Ah Li-Ion battery. The results show that the algorithm produces accurate and stable SOC estimates. Finally, this work extends the developed comprehensive battery model to examine the effect of replacing damaged cells in a battery pack with new ones. The cells within the battery pack vary stochastically, and the performance of the entire pack is evaluated under different conditions. The results show that by changing out cells in the battery pack, the SOH of the pack can be maintained indefinitely above a specific threshold value. In situations where the cells are checked for replacement at discrete intervals, referred to as maintenance event intervals, it is found that the length of the interval depends on the mean time to failure of the individual cells. The simulation framework, as well as the results from this work, can be utilized to better optimize Li-Ion battery pack design in electric vehicles (EVs) and make long-term deployment of EVs more economically feasible.
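    The moving-window DRE idea combined with EWMA smoothing can be sketched compactly: resistance is estimated as -dV/dI only when the current transient exceeds a threshold, and the raw estimates are then smoothed. The threshold, smoothing factor, and synthetic data below are illustrative assumptions, not values from the thesis.

```python
# Direct resistance estimation on rapid current transients, smoothed
# with an EWMA. Synthetic data; all parameters are assumed.
import numpy as np

rng = np.random.default_rng(1)
R_true = 0.005                            # 5 mOhm internal resistance (assumed)
I = np.concatenate([np.full(50, 2.0), np.full(50, 40.0), np.full(50, 5.0)])
V = 3.3 - R_true * I + rng.normal(0, 1e-4, I.size)   # toy terminal voltage

MIN_DI = 10.0                             # only use rapid transients (A, assumed)
alpha = 0.2                               # EWMA smoothing factor (assumed)
soh_r = None

for k in range(1, I.size):
    dI, dV = I[k] - I[k - 1], V[k] - V[k - 1]
    if abs(dI) >= MIN_DI:                 # transient large enough for DRE
        r = -dV / dI                      # instantaneous resistance estimate
        soh_r = r if soh_r is None else alpha * r + (1 - alpha) * soh_r

print(f"estimated resistance: {soh_r * 1e3:.2f} mOhm")
```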

    Magnetic Local Positioning System with Supplemental Magnetometer-Accelerometer Data Fusion

    Geo-location and tracking technology, once confined to the industrial and military sectors, has proliferated widely into the consumer world since early in the twenty-first century. The commoditization of Global Positioning System (GPS) and inertial measurement integrated circuits has made this possible, with devices small enough to fit in a cellular phone. However, GPS technology is not without its drawbacks: its power use is high, and it can fail in smaller, obstructed spaces. Magnetic positioning, which exploits the magnetic field coupling between a set of transmitter beacon coils and a set of receiver coils, is an often overlooked, complementary technology that does not suffer from these problems. Magnetic positioning is strong where GPS is weak; however, it has some weaknesses of its own. Namely, it is subject to distortions due to metal objects in its immediate vicinity. In much of the prior art, these distortions are either ignored or statically measured and then corrected. This work presents a novel technique to dynamically correct for distorted fields. Specifically, a tri-axial magnetometer and a tri-axial accelerometer are integrated with the magnetic positioning system using a complementary Kalman filter. The end result resembles a tightly-coupled integrated GPS/inertial navigation system. The results achieved by this integrated magnetic positioning system prove the viability of the approach. The results are demonstrated in a real-world environment, where both strong, localized distortions and spatially broad distortions are corrected. In addition to the integrated magnetic positioning system, this work presents a novel scheme for calibrating the magnetic receiver; this technique is termed application domain calibration. In many real-world situations, low-level measurement and calibration will not be possible; therefore, this new technique uses the same set of demodulated and down-mixed data that is used by the magnetic positioning algorithms.
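    One supplemental fusion step implied above can be sketched standalone: the accelerometer estimates tilt from gravity, and the tri-axial magnetometer reading is tilt-compensated to recover heading. The thesis embeds this kind of correction in a complementary Kalman filter; the standalone computation and sample values below are illustrative assumptions.

```python
# Tilt-compensated heading from accelerometer and magnetometer vectors.
# Sample values are assumed for illustration.
import numpy as np

def tilt_compensated_heading(acc, mag):
    """Heading (rad) from a gravity vector (acc) and a magnetic vector (mag)."""
    ax, ay, az = acc / np.linalg.norm(acc)
    roll = np.arctan2(ay, az)                     # tilt from gravity
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    mx, my, mz = mag
    # Rotate the magnetic vector into the horizontal plane
    xh = mx * np.cos(pitch) + mz * np.sin(pitch)
    yh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
          - mz * np.sin(roll) * np.cos(pitch))
    return np.arctan2(-yh, xh)

acc = np.array([0.0, 0.0, 9.81])          # level sensor (assumed)
mag = np.array([0.2, 0.0, 0.4])           # field with a downward component (assumed)
print(np.degrees(tilt_compensated_heading(acc, mag)))   # ~0 deg, i.e., north
```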