38 research outputs found

    Simultaneous Tracking and Shape Estimation of Extended Objects

    This work is concerned with the simultaneous tracking and shape estimation of a mobile extended object based on noisy sensor measurements. Novel methods are developed for coping with the following two main challenges: i) the computational complexity due to the nonlinearity and high-dimensionality of the problem, and ii) the lack of statistical knowledge about possible measurement sources on the extended object.

    A comprehensive analysis of the geometry of TDOA maps in localisation problems

    In this manuscript we consider the well-established problem of TDOA-based source localization and propose a comprehensive analysis of its solutions for arbitrary sensor measurements and placements. More specifically, we define the TDOA map from the physical space of source locations to the space of range-difference measurements (TDOAs), in the specific case of three receivers in 2D space. We then study the identifiability of the model, giving a complete analytical characterization of the image of this map and of its invertibility. The analysis is conducted in a purely mathematical fashion, using a variety of tools that make it valid for every sensor configuration. These results are a first step towards the solution of more general problems involving, for example, a larger number of sensors, uncertainty in their placement, or lack of synchronization. Comment: 51 pages (3 appendices of 12 pages), 12 figures.
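
    For reference, the block below writes out a TDOA map of the kind described above for three receivers in the plane. The receiver labels m_0, m_1, m_2, the choice of m_0 as the reference sensor, and the expression of TDOAs as range differences are notational assumptions made here for illustration, not details taken from the manuscript.

```latex
% Illustrative definition (notation assumed): three receivers m_0, m_1, m_2 in the
% plane, with m_0 taken as the reference sensor and TDOAs expressed as range differences.
\[
  \tau_2 \colon \mathbb{R}^2 \to \mathbb{R}^2, \qquad
  \tau_2(x) = \bigl( \lVert x - m_1 \rVert - \lVert x - m_0 \rVert,\;
                     \lVert x - m_2 \rVert - \lVert x - m_0 \rVert \bigr).
\]
% The identifiability questions in the abstract amount to characterizing the image of
% this map and the region where it is one-to-one: only there can a pair of measured
% range differences be inverted to a unique source location.
```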

    Robust and Efficient Inference of Scene and Object Motion in Multi-Camera Systems

    Multi-camera systems have the ability to overcome some of the fundamental limitations of single-camera systems. Having multiple viewpoints of a scene goes a long way towards limiting the influence of the field of view, occlusion, blur, and poor resolution of an individual camera. This dissertation addresses robust and efficient inference of scene and object motion in multi-camera and multi-sensor systems. The first part of the dissertation discusses the role of constraints introduced by projective imaging in the robust inference of multi-camera/sensor-based object motion. We discuss the role of the homography and epipolar constraints for fusing object motion perceived by individual cameras. For planar scenes, the homography constraint provides a natural mechanism for data association. For scenes that are not planar, the epipolar constraint provides a weaker multi-view relationship. We use the epipolar constraint for tracking in multi-camera and multi-sensor networks. In particular, we show that the epipolar constraint reduces the dimensionality of the state space of the problem by introducing a "shared" state space for the joint tracking problem. This allows robust tracking even when one of the sensors fails due to poor SNR or occlusion. The second part of the dissertation deals with challenges in the computational aspects of tracking algorithms that are common to such systems. Much of the inference in multi-camera and multi-sensor networks deals with complex nonlinear models corrupted by non-Gaussian noise. Particle filters provide approximate Bayesian inference in such settings. We analyze the computational drawbacks of traditional particle filtering algorithms and present a method for implementing the particle filter using the Independent Metropolis-Hastings sampler, which is highly amenable to pipelined implementations and parallelization. We analyze implementations of the proposed algorithm and, in particular, concentrate on implementations that have minimum processing times. The last part of the dissertation deals with the efficient sensing paradigm of compressive sensing (CS) applied to signals in imaging, such as natural images and reflectance fields. We propose a hybrid signal model based on the assumption that most real-world signals exhibit subspace compressibility as well as sparse representations. We show that several real-world visual signals, such as images, reflectance fields, and videos, are better approximated by this hybrid of the two models. We derive optimal hybrid linear projections of the signal and show that theoretical guarantees and algorithms designed for CS can be easily extended to hybrid subspace-compressive sensing. Such methods reduce the amount of information sensed by a camera and help mitigate the so-called data-deluge problem in large multi-camera systems.
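
    To make the resampling-free filtering idea concrete, here is a minimal sketch of one particle-filter time step in which the usual weighted resampling is replaced by an Independent Metropolis-Hastings (IMH) accept/reject chain. The scalar random-walk model, noise levels, burn-in length, and function names are illustrative assumptions, not the dissertation's implementation; the point is only that each accept/reject decision depends on just the current chain state, which is what makes such a sampler amenable to pipelining.

```python
# Minimal sketch (assumptions: scalar random-walk state, Gaussian noise) of a
# particle-filter update whose resampling is replaced by an IMH chain.
import numpy as np

rng = np.random.default_rng(0)

def propagate(x):
    """Motion model: random walk with unit-variance process noise (assumed)."""
    return x + rng.normal(0.0, 1.0, size=x.shape)

def likelihood(x, z, r=0.5):
    """Gaussian measurement likelihood p(z | x) with std-dev r (assumed)."""
    return np.exp(-0.5 * ((z - x) / r) ** 2)

def imh_particle_step(particles, z, n_burn=50):
    """One update: propose from the prediction density, accept/reject by the
    likelihood ratio (the correct IMH ratio when the proposal is the prior)."""
    proposals = propagate(particles)          # independent draws from the prediction density
    weights = likelihood(proposals, z)
    chain = np.empty_like(proposals)
    current, w_current = proposals[0], weights[0]
    out = 0
    for k in range(1, len(proposals)):
        cand, w_cand = proposals[k], weights[k]
        if rng.random() < min(1.0, w_cand / max(w_current, 1e-12)):
            current, w_current = cand, w_cand
        if k >= n_burn:                       # discard burn-in samples
            chain[out] = current
            out += 1
    return chain[:out]

# Usage: track a scalar state observed directly with noise.
particles = rng.normal(0.0, 1.0, size=500)
for z in [0.3, 0.8, 1.1]:
    particles = imh_particle_step(particles, z)
    # replenish to a fixed particle count (a simple choice for this sketch)
    particles = rng.choice(particles, size=500)
print(round(float(particles.mean()), 2))
```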

    Tracking Extended Objects with Active Models and Negative Measurements

    Extended object tracking deals with estimating the shape and pose of an object based on noisy point measurements. This task is not straightforward, as we may be faced with scarce low-quality measurements, little a priori information, or we may be unable to observe the entire target. This work aims to address these challenges by incorporating ideas from active contours and by exploiting information from negative measurements, which tell us where the target cannot be.
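
    As a toy illustration of the negative-measurement idea (not the models developed in this work), the sketch below estimates the radius of a circular target on a grid and treats a sensor cell reported empty as evidence against any radius that would have placed the target there. The circular shape, flat prior, and detection probability are assumptions made only for this example.

```python
# Toy Bayesian update on a radius grid: a positive (hit) measurement pulls the
# posterior toward radii near the measured range, a negative (empty) measurement
# penalizes radii that would have reached the empty cell.
import numpy as np

radii = np.linspace(0.1, 3.0, 300)          # grid over the unknown radius
posterior = np.ones_like(radii)             # flat prior (assumed)

def positive_update(posterior, dist, sigma=0.2):
    """Point measurement at distance `dist` from the center: likely near the boundary."""
    return posterior * np.exp(-0.5 * ((dist - radii) / sigma) ** 2)

def negative_update(posterior, dist, p_detect=0.9):
    """Empty cell at distance `dist`: radii larger than `dist` would have put the
    target there, so they are down-weighted by the miss probability."""
    return posterior * np.where(radii > dist, 1.0 - p_detect, 1.0)

posterior = positive_update(posterior, dist=1.2)   # hit near the boundary
posterior = negative_update(posterior, dist=1.6)   # nothing seen farther out
posterior /= posterior.sum()
print(f"posterior mean radius ~ {float((radii * posterior).sum()):.2f}")
```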

    Tracking Extended Objects with Active Models and Negative Measurements

    Extended object tracking (EOT) is concerned with estimating the shape and pose of a target object from noisy point measurements. EOT has traditionally been used to track large objects such as aircraft, ships, or cars. However, advances in depth cameras such as the Microsoft Kinect now allow even laypeople to capture point clouds of their surroundings. This poses a new challenge for EOT approaches, which in modern applications, such as object manipulation in augmented reality or in robotics, must track targets with many possible shapes from measurements of varying quality. In this context, the choice of shape model is decisive, as it determines how robust and capable the estimator will be, which in turn requires careful consideration of the modalities and quality of the available information. This information paradigm can be visualized as a spectrum: at one end, a large number of accurate measurements; at the other, only a few noisy observations. Methods in the literature, however, have traditionally concentrated on a narrow part of this spectrum. On the one hand, 'greedy' least-squares methods associate each measurement with the closest source on the shape. These methods are efficient and yield accurate results even for complicated shapes, but only as long as the measurement noise remains low; otherwise there is no guarantee that the closest point is still a suitable approximation of the true source, which leads to biased estimates. On the other hand, probabilistic models such as spatial distributions are precise for simple shapes even under extremely high measurement noise, but they become intractable or numerically unstable even for moderately complex shapes. The difficulty is that in many modern tracking scenarios the amount of available information can change drastically over time. This underlines the need for approaches that not only combine the strengths of both model classes but can also cover the entire spectrum rather than just its extremes. The goal of this work is to fill this gap and thereby address the challenges described above. To this end, we propose four contributions that significantly extend the current state of the art. First, we propose Level-set Partial Information Models, a probabilistic approach to unbiased shape estimation in scenarios with occlusions and high measurement noise. In addition, we introduce Level-set Active Random Hypersurface Models, which are inspired by concepts from EOT and computer vision, enable a flexible shape parameterization for convex and non-convex shapes, and can cope with little information. Furthermore, Negative Information Models make so-called 'negative' information usable by processing measurements that tell us where the target cannot be. Finally, we present Extrusion Models, an easy-to-implement extension of these contributions for tracking three-dimensional objects with real sensor data.
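
    The level-set contributions above build on representing the shape implicitly as the sub-zero region of a signed distance function rather than through an explicit boundary. The minimal sketch below (an axis-aligned rectangle, chosen purely for illustration and not the parameterization used in the thesis) shows that representation: measurements are classified against the current shape hypothesis by the sign of the distance function.

```python
# Implicit shape representation: the shape is the set where a signed distance
# function is <= 0 (negative inside, positive outside).
import numpy as np

def sdf_rectangle(points, half_w=2.0, half_h=1.0):
    """Signed distance to an axis-aligned rectangle centered at the origin."""
    q = np.abs(points) - np.array([half_w, half_h])
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(np.max(q, axis=-1), 0.0)
    return outside + inside

# Classify noisy point measurements against the current shape hypothesis.
measurements = np.array([[0.5, 0.2], [2.4, 0.0], [1.9, 0.9]])
for z, d in zip(measurements, sdf_rectangle(measurements)):
    side = "inside" if d <= 0 else "outside"
    print(f"{z} -> signed distance {d:+.2f} ({side})")
```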

    Autonomous vision-based terrain-relative navigation for planetary exploration

    Abstract: The interest of the world's major space agencies in vision sensors for their mission designs has been increasing over the years. Indeed, cameras offer an efficient solution to ever-increasing performance requirements, and these sensors are multipurpose, lightweight, proven, and low-cost. Much of the current research in vision sensing for space applications focuses on navigation systems for autonomous pin-point planetary landing and for sample-return missions to small bodies. Without a Global Positioning System (GPS) or radio beacons around celestial bodies, high-accuracy navigation is a complex task. Most navigation systems are based only on accurate initialization of the states and on the integration of acceleration and angular-rate measurements from an Inertial Measurement Unit (IMU). This strategy can track sudden motions of short duration very accurately, but the estimate diverges over time and normally leads to large landing errors. To improve navigation accuracy, many authors have proposed fusing IMU measurements with vision measurements using state estimators such as Kalman filters. The first vision-based navigation approaches proposed in the literature rely on feature tracking between sequences of images taken in real time during orbiting and/or landing operations. In that case, image features are pixels with a high probability of being recognized between images taken from different camera locations. By detecting and tracking these features through a sequence of images, the relative motion of the spacecraft can be determined. This technique, referred to as Terrain-Relative Relative Navigation (TRRN), relies on relatively simple, robust, and well-developed image-processing techniques, and it allows the relative motion (velocity) of the spacecraft to be determined. Although this technology has been demonstrated with space-qualified hardware, its gain in accuracy remains limited, since the absolute position of the spacecraft is not observable from the vision measurements. The vision-based navigation techniques currently being studied consist of identifying features and matching them against an on-board cartographic database indexed by an absolute coordinate system, thereby providing absolute position determination. This technique, referred to as Terrain-Relative Absolute Navigation (TRAN), relies on very complex Image Processing Software (IPS) that currently lacks robustness: the software often depends on the spacecraft attitude and position; it is sensitive to illumination conditions (the elevation and azimuth of the Sun when the geo-referenced database is built must be similar to those present during the mission); it is greatly influenced by image noise; and it handles poorly the multiple varieties of terrain seen during the same mission (the spacecraft may fly over plains as well as mountainous regions, and the images may contain old craters with noisy rims as well as young craters with clean rims). To date, no real-time hardware-in-the-loop experiment has been conducted to demonstrate the applicability of this technology to space missions. The main objective of the current study is to develop autonomous vision-based navigation algorithms that provide absolute position and surface-relative velocity during the proximity operations of a planetary mission (orbiting and landing phases) using a combined approach of TRRN and TRAN technologies. The contributions of the study are: (1) the definition of a reference mission, (2) advancements in TRAN theory (image processing as well as state estimation), and (3) a practical implementation of vision-based navigation.
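
    The central fusion step described above, correcting a drifting inertial estimate with an absolute vision-based position fix through a Kalman filter, can be sketched as follows. The one-dimensional constant-velocity state, the noise covariances, and the cadence of TRAN fixes are assumptions made for illustration, not the navigation filter developed in the study.

```python
# Minimal Kalman-filter sketch: IMU-style propagation of position/velocity,
# corrected occasionally by an absolute position fix (e.g. from matching terrain
# features against a geo-referenced map, as in TRAN).
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity propagation
Q = np.diag([1e-3, 1e-2])                    # process noise (assumed)
H = np.array([[1.0, 0.0]])                   # the fix measures absolute position only
R = np.array([[0.5]])                        # position-fix noise (assumed)

x = np.array([0.0, 10.0])                    # state: [position, velocity]
P = np.eye(2)

def predict(x, P, accel):
    """Propagate with an accelerometer-style input (simplified strapdown integration)."""
    x = F @ x + np.array([0.5 * dt**2, dt]) * accel
    P = F @ P @ F.T + Q
    return x, P

def update_position_fix(x, P, z):
    """Correct the drifting inertial estimate with an absolute position measurement."""
    y = z - H @ x                             # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for k in range(10):
    x, P = predict(x, P, accel=0.0)
    if k % 5 == 4:                            # an absolute fix arrives occasionally
        x, P = update_position_fix(x, P, z=np.array([x[0] + 0.3]))
print(np.round(x, 2))
```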