547 research outputs found

    State estimation for aggressive flight in GPS-denied environments using onboard sensing

    In this paper we present a state estimation method based on an inertial measurement unit (IMU) and a planar laser range finder suitable for real-time use on a fixed-wing micro air vehicle (MAV). The algorithm is capable of maintaining accurate state estimates during aggressive flight in unstructured 3D environments without the use of an external positioning system. Our localization algorithm is based on an extension of the Gaussian Particle Filter. We partition the state according to measurement independence relationships and then calculate a pseudo-linear update, which allows us to use 20x fewer particles than a naive implementation to achieve similar accuracy in the state estimate. We also propose a multi-step forward fitting method to identify the noise parameters of the IMU and compare results with and without accurate position measurements. Our process and measurement models integrate naturally with an exponential-coordinates representation of the attitude uncertainty. We demonstrate our algorithms experimentally on a fixed-wing vehicle flying in a challenging indoor environment.
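    The paper's partitioned, pseudo-linear update is not reproduced here, but the Gaussian Particle Filter it extends follows a simple cycle: sample particles from the current Gaussian posterior, propagate them through the process model, weight them by the measurement likelihood, and refit a single Gaussian. A minimal 1-D sketch under those assumptions (all noise scales are illustrative, not from the paper):

```python
import math
import random

def gaussian_particle_filter_step(mean, var, control, measurement,
                                  n=500, q=0.05, r=0.1, rng=random):
    # Sample particles from the current Gaussian posterior.
    particles = [rng.gauss(mean, math.sqrt(var)) for _ in range(n)]
    # Propagate through a trivial additive process model with noise std q.
    particles = [x + control + rng.gauss(0.0, q) for x in particles]
    # Weight by the Gaussian measurement likelihood (noise std r).
    weights = [math.exp(-0.5 * ((measurement - x) / r) ** 2) for x in particles]
    wsum = sum(weights) or 1.0
    weights = [w / wsum for w in weights]
    # Refit a single Gaussian to the weighted particle set.
    new_mean = sum(w * x for w, x in zip(weights, particles))
    new_var = sum(w * (x - new_mean) ** 2 for w, x in zip(weights, particles))
    return new_mean, new_var
```

    The paper's contribution is precisely in avoiding the particle count this naive version needs; the sketch only shows the baseline update being improved upon.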

    Cameras and Inertial/Magnetic Sensor Units Alignment Calibration

    Due to external acceleration interference and magnetic disturbance, inertial/magnetic measurements are usually fused with visual data for drift-free orientation estimation, which plays an important role in a wide variety of applications, ranging from virtual reality, robotics, and computer vision to biomotion analysis and navigation. However, in order to perform data fusion, alignment calibration must be performed in advance to determine the difference between the sensor coordinate system and the camera coordinate system. Since the orientation estimation performance of the inertial/magnetic sensor unit is immune to the choice of the sensor frame origin, we ignore the translational difference by assuming the sensor and camera coordinate systems share the same origin, and focus in this paper only on the rotational alignment difference. By exploiting the intrinsic restrictions among the coordinate transformations, the rotational alignment calibration problem is formulated as a simplified hand–eye equation AX = XB (A, X, and B are all rotation matrices). A two-step iterative algorithm is then proposed to solve this simplified hand–eye calibration task. Detailed laboratory validation has been performed, and the experimental results illustrate the effectiveness of the proposed alignment calibration method.
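    The paper's own solver is a two-step iterative algorithm; a simpler closed-form construction illustrates why AX = XB over rotations is solvable at all. Since AX = XB implies A = XBX^T, the axis-angle vectors satisfy log(A) = X·log(B), so two motion pairs with non-parallel axes determine X from matched orthonormal frames. A sketch in pure Python (not the paper's method):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def rodrigues(axis, angle):
    # Rotation matrix about a unit axis (Rodrigues' formula).
    x, y, z = axis
    c, s, C = math.cos(angle), math.sin(angle), 1.0 - math.cos(angle)
    return [[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
            [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
            [z*x*C - y*s, z*y*C + x*s, c + z*z*C]]

def log_axis(R):
    # Axis-angle vector (matrix logarithm) of a rotation matrix.
    angle = math.acos(max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1) / 2)))
    s = 2.0 * math.sin(angle)
    return [angle * (R[2][1] - R[1][2]) / s,
            angle * (R[0][2] - R[2][0]) / s,
            angle * (R[1][0] - R[0][1]) / s]

def frame(u, v):
    # Right-handed orthonormal frame built from two non-parallel vectors.
    e1 = normalize(u)
    e3 = normalize(cross(u, v))
    e2 = cross(e3, e1)
    return transpose([e1, e2, e3])  # vectors become columns

def solve_ax_xb(A1, B1, A2, B2):
    # log(A_i) = X log(B_i): the rotation axes of each pair are related
    # by the unknown alignment X, so matched frames give X directly.
    Fa = frame(log_axis(A1), log_axis(A2))
    Fb = frame(log_axis(B1), log_axis(B2))
    return mat_mul(Fa, transpose(Fb))
```

    With noisy data more than two pairs are needed and a least-squares (or iterative, as in the paper) formulation takes over, but the frame construction above is the geometric core.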

    Towards Visual Localization, Mapping and Moving Objects Tracking by a Mobile Robot: a Geometric and Probabilistic Approach

    In this thesis we give a machine new means to understand complex and dynamic visual scenes in real time. In particular, we solve the problem of simultaneously reconstructing a certain representation of the world's geometry, the observer's trajectory, and the moving objects' structures and trajectories, with the aid of vision exteroceptive sensors. We proceed by dividing the problem into three main steps: First, we give a solution to the Simultaneous Localization And Mapping (SLAM) problem for monocular vision that is able to perform adequately in the most ill-conditioned situations: those where the observer approaches the scene in a straight line. Second, we incorporate full 3D instantaneous observability by duplicating vision hardware with monocular algorithms. This permits us to avoid some of the inherent drawbacks of classic stereo systems, notably their limited range of 3D observability and the necessity of frequent mechanical calibration. Third, we add detection and tracking of moving objects by making use of this full 3D observability, whose necessity we judge almost inevitable. We choose a sparse, punctual representation of both the world and the moving objects in order to alleviate the computational payload of the image processing algorithms, which are required to extract the necessary geometrical information from the images. This alleviation is additionally supported by active feature detection and search mechanisms which focus the attention on those image regions with the highest interest. This focusing is achieved by an extensive exploitation of the current knowledge available on the system (all the mapped information), something that we finally highlight to be the ultimate key to success.

    Distributed estimation techniques for cyber-physical systems

    Nowadays, with the increasing use of wireless networks, embedded devices, and agents with processing and sensing capabilities, the development of distributed estimation techniques has become vital to monitor important variables of the system that are not directly available. Numerous distributed estimation techniques have been proposed in the literature according to the model of the system, noises, and disturbances. One of the main objectives of this thesis is to survey all those works that deal with distributed estimation techniques applied to cyber-physical systems, systems of systems, and heterogeneous systems, using a systematic review methodology. Even though systematic reviews are not the common way to survey a topic in the control community, they provide a rigorous, robust, and objective procedure that should not be ignored. The presented systematic review incorporates and adapts the guidelines recommended in other disciplines to the field of automation and control, and presents a brief description of the different phases that constitute a systematic review. The systematic review uncovered many gaps: it is worth remarking that some estimators, such as sliding mode observers or set-membership observers, have not been applied to cyber-physical systems. Subsequently, one of these techniques, the set-membership estimator, was chosen to develop new applications for cyber-physical systems. This introduces the other objective of the thesis, i.e. to present two novel formulations of distributed set-membership estimators. Both estimators use a multi-hop decomposition, so the dynamics of the system are rewritten to yield a cascaded implementation of the distributed set-membership observer, decoupling the influence of the non-observable modes from the observable ones. Thus each agent must find a different set for each sub-space, instead of a unique set for all the states. 
    Two different approaches have been used to address the same problem, that is, to design a guaranteed distributed estimation method for linear fully-coupled systems affected by bounded disturbances, to be implemented in a set of distributed agents that need to communicate and collaborate to achieve this goal.
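    The thesis's multi-hop, distributed formulation is not reproducible from the abstract, but the basic set-membership (interval) step such observers build on is simple: propagate the current set through the dynamics, inflate by the disturbance bound, then intersect with the set of states consistent with the measurement. A minimal scalar sketch under bounded-noise assumptions:

```python
def set_membership_step(lo, hi, a, w_bound, y, v_bound):
    # Predict: propagate the interval [lo, hi] through x+ = a*x + w,
    # inflating by the disturbance bound |w| <= w_bound.
    p_lo = min(a * lo, a * hi) - w_bound
    p_hi = max(a * lo, a * hi) + w_bound
    # Update: intersect with the set of states consistent with the
    # measurement y = x + v, |v| <= v_bound.
    return max(p_lo, y - v_bound), min(p_hi, y + v_bound)
```

    As long as the true noises respect the bounds, the interval is a guaranteed enclosure of the true state; the distributed versions in the thesis maintain such sets per agent and per observable sub-space.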

    Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion

    Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years, as it is a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact on human performance assessment and human-robot interaction. IMU motion tracking systems are indeed self-contained and wearable, allowing for long-lasting tracking of the user's motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared by reconstructing a human arm motion. IMU-based estimation was compared against the Vicon marker-based motion tracking system, considered as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error).

    Flight Mechanics/Estimation Theory Symposium, 1991

    Twenty-six papers and abstracts are presented. A wide range of issues related to orbit attitude prediction, orbit determination, and orbit control are examined, including attitude sensor calibration, attitude dynamics, and orbit decay and maneuver strategy. Government, industry, and the academic community participated in the preparation and presentation of these papers.

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Precision Navigation Using Pre-Georegistered Map Data

    Navigation performance in small unmanned aerial vehicles (UAVs) is adversely affected by limitations in current sensor technology for small, lightweight sensors. Because most UAVs are equipped with cameras for mission-related purposes, it is advantageous to utilize the camera to improve the navigation solution. This research improves navigation by matching camera images to a priori georegistered image data and combining this update with existing image-aided navigation technology. The georegistration matching is done by projecting the images into the same plane and extracting features using the Scale-Invariant Feature Transform (SIFT) [5] and Speeded-Up Robust Features (SURF) [3]. The features are matched using the Random Sample Consensus (RANSAC) [4] algorithm, which generates a model to transform feature locations from one image to another. In addition to matching the image taken by the UAV to the stored images, the effect of matching the images after transforming one to the perspective of the other is investigated. One of the chief advantages of this method is the ability to provide both an absolute position and attitude update. Test results using 15 minutes of aerial video footage at altitudes ranging from 1000 m to 1500 m demonstrated that transforming the image data from one perspective to the other yields an improvement in performance. The best system configuration uses SIFT on an image that was transformed into the satellite perspective and matched to satellite map data. This process is able to achieve attitude errors on the order of milliradians, and position errors on the order of a few meters vertically. The along-track, cross-track, and heading errors are higher than expected. Further work is needed on reliability; once this is accomplished, the method should improve the navigation solution of an aircraft, or even provide navigation-grade position and attitude estimates in a GPS-denied environment.
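    RANSAC's hypothesize-and-verify loop is the core of the matching stage described above: repeatedly fit a transform model to a minimal random sample of correspondences, count the matches it explains, and keep the best hypothesis. A pure-Python sketch using a deliberately simplified motion model, a 2-D translation, rather than the full perspective transform the actual system would fit:

```python
import math
import random

def ransac_translation(matches, iters=200, tol=2.0, rng=random):
    # matches: list of ((x, y), (u, v)) putative feature correspondences.
    # Hypothesize a 2-D translation from one random match, count the
    # correspondences it explains within tol pixels, keep the best.
    best = []
    for _ in range(iters):
        (ax, ay), (bx, by) = rng.choice(matches)
        tx, ty = bx - ax, by - ay
        inliers = [((px, py), (qx, qy)) for (px, py), (qx, qy) in matches
                   if math.hypot(qx - px - tx, qy - py - ty) <= tol]
        if len(inliers) > len(best):
            best = inliers
    # Refine: least-squares translation over the inlier set (the mean offset).
    tx = sum(qx - px for (px, _), (qx, _) in best) / len(best)
    ty = sum(qy - py for (_, py), (_, qy) in best) / len(best)
    return (tx, ty), best
```

    A perspective (homography) model needs four correspondences per sample instead of one, but the loop structure is identical; the returned inlier set is what makes the final transform robust to SIFT/SURF mismatches.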

    Vehicle dynamics virtual sensing and advanced motion control for highly skilled autonomous vehicles

    This dissertation is aimed at elucidating the path towards the development of a future generation of highly-skilled autonomous vehicles (HSAVs). In brief, it is envisaged that future HSAVs will be able to exhibit advanced driving skills to maintain the vehicle within stable limits in spite of the driving conditions (limits of handling) or environmental adversities (e.g. low-manoeuvrability surfaces). Current research lines on intelligent systems indicate that such advanced driving behaviour may be realised by means of expert systems capable of monitoring the current vehicle states, learning the road friction conditions, and adapting their behaviour depending on the identified situation. Such adaptation skills are often exhibited by professional motorsport drivers, who fine-tune their driving behaviour depending on the road geometry or tyre-friction characteristics. On this basis, expert systems incorporating advanced driving functions inspired by the techniques seen in highly-skilled drivers (e.g. high body slip control) are proposed to extend the operating region of autonomous vehicles and achieve high-level automation (e.g. manoeuvrability enhancement on low-adherence surfaces). Specifically, two major research topics are covered in detail in this dissertation to conceive these expert systems: vehicle dynamics virtual sensing and advanced motion control. With regard to the former, comprehensive research is undertaken to propose virtual sensors able to estimate the vehicle planar motion states and learn the road friction characteristics from readily available measurements. Regarding motion control, systems to mimic advanced driving skills and achieve robust path-following ability are pursued. An optimal coordinated action of different chassis subsystems (e.g. steering and individual torque control) is sought by the adoption of a centralised multi-actuated system framework. 
    The virtual sensors developed in this work are validated experimentally with the Vehicle-Based Objective Tyre Testing (VBOTT) research testbed of JAGUAR LAND ROVER, and the advanced motion control functions with the Multi-Actuated Ground Vehicle “DevBot” of ARRIVAL and ROBORACE.
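    The thesis's virtual sensors are far more elaborate (and learn friction online), but the simplest planar-motion virtual sensor conveys the idea of estimating an unmeasured state from readily available signals: for nearly constant longitudinal speed, planar kinematics give v̇_y = a_y − v_x·r, and with sideslip β ≈ v_y/v_x this yields β̇ ≈ a_y/v_x − r, which can be integrated from standard IMU channels. A hedged sketch (drift-prone by construction, as any pure integrator is):

```python
def kinematic_sideslip(ay, yaw_rate, vx, dt, beta0=0.0):
    # Integrate beta' ~= a_y / v_x - r from lateral acceleration (ay),
    # yaw rate (r) and longitudinal speed (vx) samples, all as sequences.
    beta, trace = beta0, []
    for a, r, v in zip(ay, yaw_rate, vx):
        beta += (a / v - r) * dt
        trace.append(beta)
    return trace
```

    In practice such a kinematic integrator is blended with a model-based estimate (e.g. via a Kalman or observer structure) precisely to bound its drift, which is one motivation for the more sophisticated estimators the thesis develops.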