
    Development of an evaluation technique for strapdown guidance systems: Interim scientific report

    Evaluation technique to measure the performance of strapdown guidance systems designed for unmanned interplanetary missions.

    Accounting for Vibration Noise in Stochastic Measurement Errors

    The measurement of data over time and/or space is of utmost importance in a wide range of domains, from engineering to physics. Devices that perform these measurements therefore need to be extremely precise to obtain correct system diagnostics and accurate predictions, and consequently require a rigorous calibration procedure that models their errors before they are employed. While the deterministic components of these errors do not represent a major modelling challenge, most of the research over the past years has focused on delivering methods that can explain and estimate the complex stochastic components of these errors. This effort has greatly improved the precision and uncertainty quantification of measurement devices, but has thus far not accounted for a significant stochastic noise that arises in many of these devices: vibration noise. Indeed, even after physical explanations for this noise have been filtered out, a residual stochastic component often carries over that can drastically affect measurement precision. This component can originate from different sources, including the internal mechanics of the measurement devices as well as the movement of these devices when placed on moving objects or vehicles. To remove this disturbance from signals, this work puts forward a modelling framework for this specific type of noise and adapts the Generalized Method of Wavelet Moments to estimate these models. We deliver the asymptotic properties of this method when applied to processes that include vibration noise and show the considerable practical advantages of this approach in simulation and applied case studies. Comment: 30 pages, 9 figures.
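    The abstract does not spell out the estimator, but the moment-matching idea behind the Generalized Method of Wavelet Moments can be illustrated compactly: empirical wavelet-style variances of the error signal are computed across dyadic scales, and a parametric noise model is fit so that its implied variances match them. The sketch below does this for a hypothetical white-noise plus random-walk model using the closely related Allan variance; the model, the scales and all names are illustrative assumptions, not the paper's vibration-noise framework.

```python
import numpy as np

def allan_variance(x, taus):
    """Empirical (non-overlapping) Allan variance of a signal for the given
    averaging windows, in samples."""
    out = []
    for tau in taus:
        n_blocks = len(x) // tau
        block_means = x[:n_blocks * tau].reshape(n_blocks, tau).mean(axis=1)
        out.append(0.5 * np.mean(np.diff(block_means) ** 2))
    return np.array(out)

def fit_white_plus_random_walk(avar, taus):
    """Moment matching for an assumed white-noise + random-walk error model.
    Their model-implied Allan variances, sigma2/tau and q*tau/3, are linear in
    (sigma2, q), so the fit reduces to ordinary least squares over the scales."""
    A = np.column_stack([1.0 / taus, taus / 3.0])
    params, *_ = np.linalg.lstsq(A, avar, rcond=None)
    return params  # [sigma2 (white noise), q (random-walk innovation variance)]

# Synthetic error signal: white noise plus a small random walk.
rng = np.random.default_rng(0)
n = 2 ** 16
signal = rng.normal(0.0, 0.5, n) + np.cumsum(rng.normal(0.0, 1e-3, n))
taus = np.array([2 ** j for j in range(1, 12)])
sigma2, q = fit_white_plus_random_walk(allan_variance(signal, taus), taus)
print(f"white-noise variance ~ {sigma2:.3f}, random-walk variance ~ {q:.2e}")
```

    In the paper's setting, the model being matched would additionally include the vibration-noise component described above.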

    Modeling, Simulation and Control of Very Flexible Unmanned Aerial Vehicle

    This dissertation presents research on modeling, simulation and control of very flexible aircraft. The work includes theoretical and numerical developments, as well as experimental validations. On the theoretical front, new kinematic equations for modeling sensors are derived. This formulation uses geometrically nonlinear strain-based finite elements developed as part of the University of Michigan Nonlinear Aeroelastic Simulation Toolbox (UM/NAST). Numerical linearizations of both the flexible vehicle and the sensor measurements are developed, allowing a linear time-invariant model to be extracted for control analysis and design. Two algorithms that fuse data from different sensor sources to extract elastic deformation are investigated. The first, a nonlinear least-squares method, uses geometry and nonlinear beam strain-displacement kinematics to reconstruct the wing shape; detailed information such as material properties or loading conditions is not required. The second is a Kalman filter implemented in a multi-rate form. This method requires a dynamical system representation to be available, but it is more robust to noise corruption in the sensor measurements. To control maneuver loads, Model Predictive Control (MPC) is applied to maneuver load alleviation of a representative very flexible aircraft (X-HALE). Numerical studies are performed in UM/NAST for pitch-up and roll maneuvers. Both control and state constraints are successfully enforced while reference commands are still being tracked. MPC execution is also timed, and the current implementation is capable of near real-time operation. On the experimental front, two aeroelastic testbed vehicles (ATV-6B and RRV-6B) are instrumented with sensors. On ATV-6B, an extensive set of sensors measuring structural, flight-dynamic, and aerodynamic information is integrated on board. A novel stereo-vision measurement system mounted at the body center and looking towards the wing tip measures wing deformation. High-brightness LEDs are used as target markers for easy detection and to allow each view to be captured with a fast camera shutter speed. Experimental benchmarks are conducted to verify the accuracy of this methodology. RRV-6B flight test results are presented. System identification is applied to the experimental data to generate a SISO description of the flexible aircraft. The identification results indicate that the UM/NAST X-HALE model requires some tuning to match the observed dynamics; however, the general trends predicted by the numerical model agree with the flight test results. Finally, using this identified plant, a stability augmentation autopilot is designed and flight tested. This autopilot uses a cascaded two-loop proportional-integral control design, with the inner loop regulating angular rates and the outer loop regulating attitude. Each of the three axes is assumed to be decoupled and is designed using SISO methodology. This stabilization system demonstrates significant improvements in the RRV-6B handling qualities. The dissertation ends with a summary of the results and conclusions and its main contributions to the field; suggestions for future work are also presented. PhD, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/144019/1/pziyang_1.pd
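    The stabilization design described above is a cascaded two-loop proportional-integral scheme: the outer loop turns an attitude error into an angular-rate command, and the inner loop turns the rate error into an actuator command, with each axis treated as a decoupled SISO loop. A minimal single-axis sketch of that structure follows; the gains, limits and update rate are illustrative assumptions, not the values identified for the RRV-6B.

```python
from dataclasses import dataclass

@dataclass
class PI:
    """Simple proportional-integral controller with output saturation."""
    kp: float
    ki: float
    limit: float
    integ: float = 0.0

    def update(self, error: float, dt: float) -> float:
        self.integ += error * dt
        u = self.kp * error + self.ki * self.integ
        # Clamp the output and stop integrating while saturated (naive anti-windup).
        if abs(u) > self.limit:
            self.integ -= error * dt
            u = max(-self.limit, min(self.limit, u))
        return u

# Cascaded single-axis loop: outer PI maps attitude error to a rate command,
# inner PI maps rate error to a control-surface deflection command.
outer = PI(kp=2.0, ki=0.1, limit=0.5)    # rad -> rad/s command
inner = PI(kp=0.8, ki=0.3, limit=0.35)   # rad/s -> deflection (rad)

def stabilization_step(att_cmd, att_meas, rate_meas, dt):
    rate_cmd = outer.update(att_cmd - att_meas, dt)
    return inner.update(rate_cmd - rate_meas, dt)

# Example: hold zero pitch attitude from a small measured offset.
print(stabilization_step(att_cmd=0.0, att_meas=0.05, rate_meas=0.01, dt=0.02))
```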

    Space shuttle landing navigation using precision distance measuring equipment

    Evaluation of precision distance measuring equipment for space shuttle landing navigation.

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices lower than USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features that allow them to be used in everyday life and in clinical applications where gold-standard equipment is both too expensive and too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    Robust vision based slope estimation and rocks detection for autonomous space landers

    As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology such that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. As a consequence of the extreme uncertainty in the touch-down locations of current missions and the absence of any effective hazard detection and avoidance capabilities, mission designers must exercise extreme caution when selecting candidate landing sites. The entire landing uncertainty footprint must be placed completely within a region of relatively flat and hazard-free terrain in order to minimise the risk of mission-ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. Truly scientifically interesting locations on planetary surfaces are rarely found in such hazard-free and easily accessible places, and so goals have been set for a number of advanced capabilities of future entry, descent and landing technology. Key amongst these is the ability to reliably detect and safely avoid all mission-critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for the use of a single camera system as the primary sensor in the preliminary development of a hazard detection system capable of supporting pin-point landing operations for next-generation robotic planetary landing craft. The requirements for such a system have been stated as the ability to detect slopes greater than 5 degrees and surface objects greater than 30 cm in diameter. The primary contribution of this thesis, aimed at achieving these goals, is the development of a feature-based, self-initialising, fully adaptive structure from motion (SFM) algorithm built on a robust square-root unscented Kalman filtering framework, together with the fusion of the resulting SFM scene structure estimates with a sophisticated shape from shading (SFS) algorithm. This combination has the potential to produce very dense and highly accurate digital elevation models (DEMs) with sufficient resolution to achieve the sensing accuracy required by next-generation landers. The system is capable of adapting to changes in the external noise environment that may result from intermittent and varying rocket motor thrust and/or sudden turbulence during descent, which translate to variations in the vibrations experienced by the platform and introduce varying levels of motion blur that affect the accuracy of image feature tracking algorithms. Accurate scene structure estimates have been obtained using this system from both real and synthetic descent imagery, allowing for the production of accurate DEMs. While further work would be required to produce DEMs with the resolution and accuracy needed to determine slopes and to detect small objects such as rocks at the required levels of accuracy, this thesis presents a very strong foundation upon which to build and goes a long way towards developing a highly robust and accurate solution.
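    The SFM estimator above is built on a square-root unscented Kalman filtering framework. At its core is the unscented transform, which propagates a state mean and covariance through a nonlinear function (for example, a camera projection) via deterministically chosen sigma points. The sketch below shows the transform in its standard, non-square-root form; the scaling parameters and the pinhole projection example are illustrative assumptions, not the thesis's adaptive formulation.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f using the
    standard unscented transform: deterministic sigma points are pushed through f
    and recombined into a transformed mean and covariance."""
    n = mean.size
    lam = alpha ** 2 * (n + kappa) - n
    sqrt_cov = np.linalg.cholesky((n + lam) * cov)

    # 2n + 1 sigma points: the mean plus a symmetric spread along each column.
    sigmas = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])

    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha ** 2 + beta)

    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    diffs = ys - y_mean
    y_cov = (wc[:, None] * diffs).T @ diffs
    return y_mean, y_cov

# Example: project a 3D landmark estimate through a pinhole camera model.
def project(p, fx=500.0, fy=500.0):
    return np.array([fx * p[0] / p[2], fy * p[1] / p[2]])

m = np.array([0.2, -0.1, 5.0])
P = np.diag([0.01, 0.01, 0.25])
print(unscented_transform(m, P, project))
```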

    Preliminary design of a redundant strapped down inertial navigation unit using two-degree-of-freedom tuned-gimbal gyroscopes

    This redundant strapdown INS preliminary design study demonstrates the practicality of a skewed sensor system configuration by means of: (1) devising a practical system mechanization utilizing proven strapdown instruments, (2) thoroughly analyzing the skewed sensor redundancy management concept to determine optimum geometry, data processing requirements, and realistic reliability estimates, and (3) implementing the redundant computers in a low-cost, maintainable configuration.
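    For context on the skewed-sensor redundancy management analysed above, a common scheme (not necessarily the report's exact mechanization) estimates the three-axis rate from more than three skewed single-axis measurements by least squares and monitors a parity vector, which is insensitive to the true rate but responds to instrument failures. The sketch below uses a six-sensor cone geometry and a fault injection that are purely illustrative assumptions.

```python
import numpy as np

# Illustrative geometry: six single-axis sensors skewed on a cone about the z axis.
half_angle = np.deg2rad(54.7)
azimuths = np.deg2rad(np.arange(6) * 60.0)
H = np.column_stack([
    np.sin(half_angle) * np.cos(azimuths),
    np.sin(half_angle) * np.sin(azimuths),
    np.cos(half_angle) * np.ones(6),
])  # each row is a unit sensing axis, so measurements m = H @ omega + noise

# Rows of V span the left null space of H (V @ H = 0), so the parity vector
# V @ m is insensitive to the true rate and reacts only to faults and noise.
_, _, vt = np.linalg.svd(H.T)
V = vt[3:]

def estimate_and_parity(m):
    """Least-squares three-axis rate estimate and the parity-vector norm."""
    omega_hat, *_ = np.linalg.lstsq(H, m, rcond=None)
    return omega_hat, np.linalg.norm(V @ m)

rng = np.random.default_rng(1)
omega_true = np.array([0.02, -0.01, 0.005])            # rad/s
m_nominal = H @ omega_true + rng.normal(0.0, 1e-4, 6)
m_faulted = m_nominal.copy()
m_faulted[2] += 0.05                                   # hard-over style fault
for label, m in [("nominal", m_nominal), ("faulted", m_faulted)]:
    est, p = estimate_and_parity(m)
    print(label, est.round(4), f"parity norm = {p:.4f}")
```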

    Autonomous vision-based terrain-relative navigation for planetary exploration

    Abstract: The interest of the world's major space agencies in vision sensors for their mission designs has been increasing over the years. Indeed, cameras offer an efficient solution to address ever-increasing performance requirements. In addition, these sensors are multipurpose, lightweight, proven and low-cost. Several researchers in vision sensing for space applications currently focus on navigation systems for autonomous pin-point planetary landing and for sample-return missions to small bodies. In fact, without a Global Positioning System (GPS) or radio beacons around celestial bodies, high-accuracy navigation around them is a complex task. Most navigation systems rely only on accurate initialization of the states and on the integration of acceleration and angular-rate measurements from an Inertial Measurement Unit (IMU). This strategy can track sudden motions of short duration very accurately, but its estimates diverge over time and normally lead to large landing errors. To improve navigation accuracy, many authors have proposed fusing these IMU measurements with vision measurements using state estimators such as Kalman filters. The first proposed vision-based navigation approach relies on feature tracking between sequences of images taken in real time during orbiting and/or landing operations. In that case, image features are image pixels that have a high probability of being recognized between images taken from different camera locations. By detecting and tracking these features through a sequence of images, the relative motion of the spacecraft can be determined. This technique, referred to as Terrain-Relative Relative Navigation (TRRN), relies on relatively simple, robust and well-developed image processing techniques and allows the determination of the relative motion (velocity) of the spacecraft. Although this technology has been demonstrated with space-qualified hardware, its gain in accuracy remains limited since the spacecraft's absolute position is not observable from the vision measurements. The vision-based navigation techniques currently studied consist of identifying features and matching them against an on-board cartographic database indexed in an absolute coordinate system, thereby providing absolute position determination. This technique, referred to as Terrain-Relative Absolute Navigation (TRAN), relies on very complex Image Processing Software (IPS) that currently lacks robustness. This software often depends on the spacecraft attitude and position, is sensitive to illumination conditions (the elevation and azimuth of the Sun when the geo-referenced database is built must be similar to those present during the mission), is strongly affected by image noise and, finally, struggles with the multiple varieties of terrain seen during the same mission (the spacecraft can fly over plains as well as mountainous regions, and the images may contain old craters with noisy rims as well as young craters with clean rims, and so on). To date, no real-time hardware-in-the-loop experiment has been conducted to demonstrate the applicability of this technology to space missions. The main objective of the current study is to develop autonomous vision-based navigation algorithms that provide absolute position and surface-relative velocity during the proximity operations of a planetary mission (orbiting phase and landing phase) using a combined approach of TRRN and TRAN technologies.
The contributions of the study are: (1) reference mission definition, (2) advancements in the TRAN theory (image processing as well as state estimation) and (3) practical implementation of vision-based navigation.
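    The fusion described above, in which drifting inertial dead reckoning is periodically corrected by vision-derived absolute position fixes through a Kalman filter, can be illustrated with a deliberately simplified single-axis filter. In the sketch below, the state, noise levels, update rates and measurement model are illustrative assumptions and not the thesis's navigation filter.

```python
import numpy as np

# Single-axis toy navigation filter: the state is [position, velocity].
# High-rate accelerometer measurements drive the prediction (dead reckoning);
# low-rate absolute position fixes (e.g. from map-matched image features)
# correct the drift. All values below are illustrative.
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt ** 2, dt])
Q = np.diag([1e-6, 1e-4])          # process noise from accelerometer errors
Hm = np.array([[1.0, 0.0]])        # position-only fix
R = np.array([[4.0]])              # 2 m (1-sigma) fix accuracy

x = np.array([0.0, 0.0])
P = np.diag([1.0, 0.1])

rng = np.random.default_rng(2)
true_pos, true_vel = 0.0, 0.0
for k in range(2000):
    accel_true = 0.2 * np.sin(0.01 * k)
    true_vel += accel_true * dt
    true_pos += true_vel * dt
    accel_meas = accel_true + rng.normal(0, 0.02)   # noisy, bias-free IMU

    # Prediction (dead reckoning): on its own this diverges over time.
    x = F @ x + B * accel_meas
    P = F @ P @ F.T + Q

    # Measurement update every 100 steps, mimicking a slower vision fix.
    if k % 100 == 0:
        z = np.array([true_pos + rng.normal(0, 2.0)])
        y = z - Hm @ x
        S = Hm @ P @ Hm.T + R
        K = P @ Hm.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ Hm) @ P

print("final position error [m]:", abs(x[0] - true_pos))
```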

    Analysis of Visual-Inertial Odometry Algorithms for Outdoor Drone Applications

    Visual-inertial odometry (VIO) and visual-inertial simultaneous localisation and mapping (VISLAM) enable mobile robots to localise without relying on global navigation satellite systems (GNSS) or heavy sensors. They enable mobile robots, especially payload-critical robots such as drones, to perform autonomous tasks with limited resources. Localisation of drones for outdoor applications using visual and inertial sensor fusion is of particular interest, since it widens the use cases and improves the reliability of autonomous drones in different flying conditions and environments. The goal of this thesis is to identify suitable VIO/VISLAM algorithms and to develop a platform for localising a drone for outdoor applications. A stereo camera and IMU sensor suite was developed to collect visual-inertial data, since suitable off-the-shelf systems were not available. Three state-of-the-art VIO/VISLAM algorithms, FLVIS, ORB-SLAM3 and VINS-Fusion, were evaluated on outdoor drone datasets with flight altitudes of 40, 60, 80 and 100 m and speeds of 2, 3 and 4 m/s. The estimation results were compared with the ground truth and quantitatively evaluated. VINS-Fusion estimated the trajectories most accurately among the three algorithms, with an absolute trajectory error of 2.186 m and a relative rotation error of 0.862 deg at an altitude of 60 m for a trajectory of length 800 m. System configurations, algorithm parameters, external conditions, and scene content impacted the estimation results. These factors, further developments and directions for future work are discussed along with the obtained results.
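    The absolute trajectory error (ATE) quoted above is commonly computed by rigidly aligning the estimated trajectory to the ground truth (a Procrustes/Umeyama alignment without scale) and taking the RMSE of the remaining position differences. The thesis's exact evaluation code is not given, so the sketch below is only a plausible implementation of that standard metric, run on synthetic data.

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """RMSE of position error after rigidly aligning the estimated trajectory
    (Nx3) to the ground truth (Nx3) with a rotation and translation, no scale."""
    est_c = est - est.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflections
    R = (U @ S @ Vt).T                                        # maps est frame -> gt frame
    t = gt.mean(axis=0) - R @ est.mean(axis=0)
    aligned = est @ R.T + t
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

# Example: ground truth offset by a rigid transform plus noise.
rng = np.random.default_rng(3)
gt = np.cumsum(rng.normal(0, 0.5, (500, 3)), axis=0)
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
est = gt @ Rz.T + np.array([5.0, -2.0, 1.0]) + rng.normal(0, 0.2, (500, 3))
print(f"ATE = {absolute_trajectory_error(est, gt):.3f} m")
```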