    Plenoptic Signal Processing for Robust Vision in Field Robotics

    This thesis proposes the use of plenoptic cameras for improving the robustness and simplicity of machine vision in field robotics applications. Dust, rain, fog, snow, murky water and insufficient light can cause even the most sophisticated vision systems to fail. Plenoptic cameras offer an appealing alternative to conventional imagery by gathering significantly more light over a wider depth of field, and capturing a rich 4D light field structure that encodes textural and geometric information. The key contributions of this work lie in exploring the properties of plenoptic signals and developing algorithms for exploiting them. It lays the groundwork for the deployment of plenoptic cameras in field robotics by establishing a decoding, calibration and rectification scheme appropriate to compact, lenslet-based devices. Next, the frequency-domain shape of plenoptic signals is elaborated and exploited by constructing a filter which focuses over a wide depth of field rather than at a single depth. This filter is shown to reject noise, improving contrast in low light and through attenuating media, while mitigating occluders such as snow, rain and underwater particulate matter. Next, a closed-form generalization of optical flow is presented which directly estimates camera motion from first-order derivatives. An elegant adaptation of this "plenoptic flow" to lenslet-based imagery is demonstrated, as well as a simple, additive method for rendering novel views. Finally, the isolation of dynamic elements from a static background is considered, a task complicated by the non-uniform apparent motion caused by a mobile camera. Two elegant closed-form solutions are presented, dealing with monocular time series and light field image pairs respectively. This work emphasizes non-iterative, noise-tolerant, closed-form, linear methods with predictable and constant runtimes, making them suitable for real-time embedded implementation in field robotics applications.
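
A minimal sketch may help make the closed-form flavour of these methods concrete. The snippet below is not the thesis's plenoptic-flow derivation; it only illustrates the generic pattern such first-order methods share: every light-field sample contributes one linear constraint relating its spatio-angular gradients and temporal derivative to the unknown six-degree-of-freedom camera motion, which is then recovered in a single least-squares solve. The array layout and the per-sample motion_jacobian term are assumptions made for illustration.

    import numpy as np

    def estimate_motion_closed_form(gradients, dt, motion_jacobian):
        """Generic closed-form motion estimate from first-order derivatives.

        gradients       : (N, K) spatial/angular derivatives per sample
        dt              : (N,)   temporal derivative per sample
        motion_jacobian : (N, K, 6) assumed map from 6-DOF motion to sample motion
        Minimises sum_i (g_i^T J_i m + dt_i)^2 over m in one non-iterative solve.
        """
        # Each sample yields one row a_i = g_i^T J_i of a linear system A m = -dt.
        A = np.einsum('nk,nkj->nj', gradients, motion_jacobian)  # (N, 6)
        b = -np.asarray(dt, dtype=float)
        # A single least-squares solve: constant, predictable runtime per frame.
        m, *_ = np.linalg.lstsq(A, b, rcond=None)
        return m  # 6-vector of translational and rotational velocity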

    Robust Estimation of Motion Parameters and Scene Geometry: Minimal Solvers and Convexification of Regularisers for Low-Rank Approximation

    In the dawning age of autonomous driving, accurate and robust tracking of vehicles is a quintessential requirement. It is inextricably linked with the problem of Simultaneous Localisation and Mapping (SLAM), in which one tries to determine the position of a vehicle relative to its surroundings without prior knowledge of them. The more you know about the object you wish to track, through sensors or mechanical construction, the more likely you are to get good positioning estimates. In the first part of this thesis, we explore new ways of improving positioning for vehicles travelling on a planar surface. This is done in several different ways: first, we generalise the work done for monocular vision to include two cameras; second, we propose ways of speeding up the estimation with polynomial solvers; and third, we develop an auto-calibration method that copes with radially distorted images without requiring pre-calibration procedures. We continue by investigating the case of constrained motion, this time using auxiliary data from inertial measurement units (IMUs) to improve the positioning of unmanned aerial vehicles (UAVs). The proposed methods improve on the state of the art for partially calibrated cases (with unknown focal length) in indoor navigation. Furthermore, we propose the first-ever real-time-compatible minimal solver for simultaneous estimation of the radial distortion profile, focal length, and motion parameters while utilising the IMU data. In the third and final part of this thesis, we develop a bilinear framework for low-rank regularisation, with global optimality guarantees under certain conditions. We also show equivalence between the linear and the bilinear frameworks, in the sense that their objectives are equal. This enables users of the alternating direction method of multipliers (ADMM), or of other subgradient or splitting methods, to transition to the new framework while enjoying the benefits of second-order methods. Furthermore, we propose a novel regulariser fusing two popular methods, combining the best of both worlds by encouraging bias reduction while enforcing low-rank solutions.
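
As background for the low-rank part of this abstract, the sketch below shows the standard building block that ADMM and other splitting methods use for low-rank approximation: the proximal operator of the nuclear norm, evaluated by soft-thresholding singular values. It is a textbook baseline, not the bilinear framework or the fused regulariser proposed in the thesis.

    import numpy as np

    def nuclear_norm_prox(Y, tau):
        """prox of tau * nuclear norm: soft-threshold the singular values of Y.

        Returns the matrix closest to Y (in Frobenius norm) under a
        nuclear-norm penalty of weight tau; this is the step a splitting
        method such as ADMM applies once per iteration.
        """
        U, s, Vt = np.linalg.svd(np.asarray(Y, dtype=float), full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)   # soft-thresholding of singular values
        return (U * s_shrunk) @ Vt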

    Monocular Pose Estimation Based on Global and Local Features

    The presented thesis work deals with several mathematical and practical aspects of the monocular pose estimation problem. Pose estimation means estimating the position and orientation of a model object with respect to a camera used as the sensing element. Three main aspects of the pose estimation problem are considered: model representations, correspondence search, and pose computation. Free-form contours and surfaces are considered for the approaches presented in this work. The pose estimation problem and the global representation of free-form contours and surfaces are defined in the mathematical framework of conformal geometric algebra (CGA), which allows a compact and linear modeling of the monocular pose estimation scenario. Additionally, a new local representation of these entities is presented, also defined in CGA, which allows the extraction of local feature information from these models in 3D space and in the image plane. This local information is combined with the global contour information obtained from the global representations in order to improve the pose estimation algorithms. The main contribution of this work is the introduction of new variants of the iterative closest point (ICP) algorithm based on the combination of local and global features. Sets of compatible model and image features are obtained from the proposed local model representation of free-form contours. This makes it possible to translate the correspondence search problem onto the image plane and to use the feature information to develop new correspondence search criteria. The structural ICP algorithm is defined as a variant of the classical ICP algorithm with additional model and image structural constraints. Initially, this new variant is applied to planar 3D free-form contours. Then, the feature extraction process is adapted to the case of free-form surfaces, which allows the correlation ICP algorithm for free-form surfaces to be defined. In this case, the minimal Euclidean distance criterion is replaced by a feature correlation measure. The addition of structural information to the search process results in better-conditioned correspondences and therefore in a better computed pose. Furthermore, global information (position and orientation) is used in combination with the correlation ICP to simplify and improve the pre-alignment approaches for monocular pose estimation. Finally, all the presented approaches are combined to handle the pose estimation of surfaces when partial occlusions are present in the image. Experiments on synthetic and real data demonstrate the robustness and behavior of the new ICP variants in comparison with standard approaches.
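
To make the correspondence-search idea concrete, the sketch below replaces the minimal-Euclidean-distance criterion of classical ICP with a combined score that also rewards feature correlation. The cosine-similarity score and the trade-off weight alpha are illustrative assumptions; they are not the structural or correlation criteria defined in the thesis.

    import numpy as np

    def correspond(model_pts, model_feat, image_pts, image_feat, alpha=0.5):
        """For every model point, pick the image point with the best combined score.

        model_pts, image_pts   : (N, 2), (M, 2) positions in the image plane
        model_feat, image_feat : (N, D), (M, D) local feature descriptors
        alpha                  : trade-off between proximity and feature correlation
        """
        model_pts = np.asarray(model_pts, dtype=float)
        image_pts = np.asarray(image_pts, dtype=float)
        # Pairwise Euclidean distances, normalised so that 1 means "far".
        d = np.linalg.norm(model_pts[:, None, :] - image_pts[None, :, :], axis=-1)
        d = d / (d.max() + 1e-12)
        # Pairwise feature correlation (cosine similarity, in [-1, 1]).
        mf = model_feat / (np.linalg.norm(model_feat, axis=1, keepdims=True) + 1e-12)
        imf = image_feat / (np.linalg.norm(image_feat, axis=1, keepdims=True) + 1e-12)
        corr = mf @ imf.T
        # Reward closeness and correlation; return the best match per model point.
        score = alpha * (1.0 - d) + (1.0 - alpha) * corr
        return score.argmax(axis=1)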

    Structure from Motion with Higher-level Environment Representations

    Computer vision is an important area focused on understanding, extracting and using the information provided by vision-based sensors. It has many applications such as vision-based 3D reconstruction, simultaneous localization and mapping (SLAM) and data-driven understanding of the real world, and vision is a fundamental sensing modality in many different fields of application. While traditional structure from motion mostly uses sparse point-based features, this thesis explores the possibility of using higher-order feature representations. It starts with a joint work which uses straight lines for feature representation and performs bundle adjustment with a straight-line parameterization. We then move to an even higher-order representation based on Bézier splines. We start with a simple case where all contours lie on a plane, use Bézier splines to parametrize the curves in the background, and optimize both the camera positions and the Bézier splines. As an application, we present a complete end-to-end pipeline which produces meaningful dense 3D models from natural data of a 3D object: the target object is placed on a structured but unknown planar background that is modeled with splines, and the data is captured using only a hand-held monocular camera. Since this application is limited to a planar scenario, we then push the parameterization into full 3D. Following the potential of this idea, we introduce a more flexible higher-order extension of points that provides a general model for structural edges in the environment, whether straight or curved. Our model relies on linked Bézier curves, whose geometric intuition proves of great benefit during parameter initialization and regularization. We present the first fully automatic pipeline that is able to generate spline-based representations without any human supervision. Besides a full graphical formulation of the problem, we introduce both geometric and photometric cues as well as higher-level concepts such as overall curve visibility and viewing-angle restrictions to automatically manage the correspondences in the graph. Results show that curve-based structure from motion with splines is able to outperform state-of-the-art sparse feature-based methods, as well as to model curved edges in the environment.
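
For readers unfamiliar with the curve primitive used above, the snippet below evaluates a cubic Bézier segment from its four control points; "linked" curves are chains of such segments sharing their end control points. This is only the generic primitive, not the thesis's full spline-based bundle-adjustment formulation.

    import numpy as np

    def bezier_point(ctrl, t):
        """Evaluate a cubic Bezier segment at parameter values t in [0, 1].

        ctrl : (4, 3) control points (or (4, 2) for planar curves)
        The segment interpolates ctrl[0] and ctrl[3]; ctrl[1] and ctrl[2]
        shape its tangents, which is what makes chained segments easy to
        initialise and regularise.
        """
        ctrl = np.asarray(ctrl, dtype=float)
        t = np.atleast_1d(np.asarray(t, dtype=float))[:, None]                # (T, 1)
        w = np.hstack([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])     # Bernstein weights
        return w @ ctrl                                                        # (T, 3) curve points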

    Distributed Robotic Vision for Calibration, Localisation, and Mapping

    This dissertation explores distributed algorithms for calibration, localisation, and mapping in the context of a multi-robot network equipped with cameras and on-board processing, comparing them against centralised alternatives in which all data is transmitted to a single external node where processing occurs. With the rise of large-scale camera networks, and as low-cost on-board processing becomes increasingly feasible in robotic networks, distributed algorithms are becoming important for robustness and scalability. Standard solutions to multi-camera computer vision require the data from all nodes to be processed at a central node, which represents a significant single point of failure and incurs infeasible communication costs. Distributed solutions avoid these issues by spreading the work over the entire network, operating only on local calculations and direct communication with nearby neighbours. This research considers a framework for a distributed robotic vision platform for calibration, localisation, and mapping tasks in which three main stages are identified: an initialisation stage where calibration and localisation are performed in a distributed manner, a local tracking stage where visual odometry is performed without inter-robot communication, and a global mapping stage where global alignment and optimisation strategies are applied. Within this framework, this research investigates how algorithms can be developed to produce fundamentally distributed solutions, designed to minimise computational complexity whilst maintaining excellent performance, and designed to operate effectively in the long term. Three primary objectives are therefore sought, aligning with these three stages.
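
The contrast between centralised and distributed processing drawn above can be illustrated with the classic average-consensus iteration: every node repeatedly mixes its own estimate with those of its direct neighbours, and a connected network converges to the global average without any central node. This is a textbook building block used here purely for illustration, not one of the calibration or mapping algorithms developed in the dissertation.

    import numpy as np

    def average_consensus(values, neighbours, step=0.2, iters=100):
        """Distributed averaging over a communication graph.

        values     : (N,) initial local estimates, one per robot/camera node
        neighbours : list of neighbour-index lists (who can talk to whom)
        For a connected graph and a small enough step, every node's estimate
        converges to the network-wide mean using only local communication.
        """
        x = np.asarray(values, dtype=float).copy()
        for _ in range(iters):
            x_new = x.copy()
            for i, nbrs in enumerate(neighbours):
                # Each node moves towards its neighbours' values; no central node.
                x_new[i] += step * sum(x[j] - x[i] for j in nbrs)
            x = x_new
        return x

For example, average_consensus([1.0, 2.0, 6.0], [[1], [0, 2], [1]]) drives all three nodes towards the global mean of 3.0 using only neighbour-to-neighbour exchanges.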

    Binokulare Eigenbewegungsschätzung für Fahrerassistenzanwendungen (Binocular Ego-Motion Estimation for Driver Assistance Applications)

    Driving can be dangerous. Humans become inattentive when performing a monotonous task like driving, and multi-tasking, such as using a cellular phone while driving, can break the driver's concentration and increase the risk of accidents. Other factors like exhaustion, nervousness and excitement affect the performance of the driver and the response time. Consequently, car manufacturers have developed systems over the last decades which assist the driver under various circumstances. These driver assistance systems are meant to support the task of driving, and their field of action varies from alerting the driver with acoustical or optical warnings to taking control of the car, such as keeping the vehicle in the traffic lane until the driver resumes control. For such purposes, the vehicle is equipped with on-board sensors which allow the perception of the environment and/or the state of the vehicle. Cameras are sensors which extract useful information about the visual appearance of the environment, and a binocular system additionally allows the extraction of 3D information. One of the main requirements for most camera-based driver assistance systems is accurate knowledge of the motion of the vehicle. Some sources of information, like velocimeters and GPS, are in common use in vehicles today; nevertheless, the resolution and accuracy usually achieved with these systems are not sufficient for many real-time applications. The computation of ego-motion from sequences of stereo images for intelligent driving systems, such as autonomous navigation or collision avoidance, constitutes the core of this thesis. This dissertation proposes a framework for the simultaneous computation of the six degrees of freedom of ego-motion (rotation and translation in 3D Euclidean space), the estimation of the scene structure, and the detection and estimation of independently moving objects. The input is provided exclusively by a binocular system, and the framework does not call for any particular data acquisition strategy, i.e. the stereo images are processed just as they are provided. Stereo allows one to establish correspondences between left and right images, estimating 3D points of the environment via triangulation. Likewise, feature tracking establishes correspondences between images acquired at different time instants. When both are used together for a large number of points, the result is a set of clouds of 3D points with point-to-point correspondences between clouds. The apparent motion of the 3D points between consecutive frames has several causes. The dominant motion for most of the points is caused by the ego-motion of the vehicle: as the vehicle moves and images are acquired, the relative position of the world points with respect to the vehicle changes. Motion is also caused by objects moving in the environment; since they move independently of the vehicle, the observed motion of these points is the sum of the ego-vehicle motion and the independent motion of the object. A third cause, of paramount importance in vision applications, is correspondence errors, i.e. the incorrect spatial or temporal assignment of point-to-point correspondences. Furthermore, all the points in the clouds are in fact noisy measurements of the real, unknown 3D points of the environment.
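
The triangulation step mentioned above follows the standard rectified-stereo geometry, in which a correspondence with disparity d maps to a depth of f·B/d. The sketch below applies that textbook relation; it does not include the bias-corrected reformulation of the projection equation derived later in the thesis.

    import numpy as np

    def triangulate_rectified(u_left, u_right, v, f, B, cu, cv):
        """Triangulate 3D points from a rectified stereo pair (textbook model).

        u_left, u_right, v : pixel coordinates of corresponding features
        f : focal length in pixels, B : baseline in metres, (cu, cv) : principal point
        Returns points expressed in the left-camera frame.
        """
        u_left, u_right, v = (np.asarray(a, dtype=float) for a in (u_left, u_right, v))
        d = u_left - u_right          # disparity in pixels
        Z = f * B / d                 # depth from disparity
        X = (u_left - cu) * Z / f
        Y = (v - cv) * Z / f
        return np.stack([X, Y, Z], axis=-1)
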
Solving ego-motion and scene structure from the clouds of points requires a prior analysis of the noise involved in the imaging process and of how it propagates as the data is processed. This dissertation therefore analyzes the noise properties of the 3D points obtained through stereo triangulation. This leads to the detection of a bias in the estimation of 3D position, which is corrected with a reformulation of the projection equation. Ego-motion is obtained by finding the rotation and translation between the two clouds of points. This problem is known as absolute orientation, and many solutions based on least squares have been proposed in the literature; this thesis reviews the available closed-form solutions to the problem. The proposed framework is divided into three main blocks: 1) stereo and feature tracking computation, 2) ego-motion estimation, and 3) estimation of 3D point position and 3D velocity. The first block solves the correspondence problem, providing the clouds of points as output; no special implementation of this block is required in this thesis. The ego-motion block computes the motion of the cameras by finding the absolute orientation between the clouds of static points in the environment. Since the clouds might contain independently moving objects and outliers generated by false correspondences, a direct least-squares computation can lead to an erroneous solution. The first contribution of this thesis is an effective rejection rule that detects outliers based on the distance between predicted and measured quantities, and reduces the effect of noisy measurements by assigning appropriate weights to the data. This method is called the Smoothness Motion Constraint (SMC). The ego-motion of the camera between two frames is obtained by finding the absolute orientation between consecutive clouds of weighted 3D points. The complete ego-motion since initialization is obtained by concatenating the individual motion estimates, which leads to a super-linear propagation of the error as noise is integrated. A second contribution of this dissertation is a predictor/corrector iterative method which integrates the clouds of 3D points of multiple time instances into the computation of ego-motion; it considerably reduces the accumulation of errors in the estimated ego-position of the camera. Another contribution is a method which recursively estimates the 3D world position of a point and its velocity by fusing stereo, feature tracking and the estimated ego-motion in a Kalman filter. The improved point position estimates obtained this way are used in the subsequent system cycle, resulting in an improved computation of ego-motion. The general contribution of this dissertation is a single framework for the real-time computation of scene structure, independently moving objects and ego-motion for automotive applications.
Driving can be dangerous. Driving performance is affected by the driver's physical and psychological limits and by external factors such as the weather. Driver assistance systems increase driving comfort and support the driver in order to reduce the number of accidents; their support ranges from warnings with optical or acoustic signals up to the system taking over control of the car. One of the main prerequisites for most driver assistance systems is accurate knowledge of the motion of the ego-vehicle.
Today a variety of sensors is available to measure the motion of the vehicle, for example GPS and the speedometer, but the resolution and accuracy of these systems are not sufficient for many real-time applications. The computation of ego-motion from stereo image sequences for driver assistance systems, e.g. for autonomous navigation or collision avoidance, forms the core of this work. This dissertation presents a system for the real-time evaluation of a scene, including the detection and assessment of independently moving objects as well as the accurate estimation of the six degrees of freedom of ego-motion. These fundamental components are required to develop many intelligent automotive applications that support the driver in different traffic situations. The system works exclusively with a stereo camera platform as its sensor. Computing the ego-motion and the scene structure requires an analysis of the noise and of the error propagation in the image processing chain. This dissertation therefore analyzes the noise properties of the 3D points obtained by stereo triangulation, which leads to the discovery of a systematic error in the estimated 3D position that can be corrected by a reformulation of the projection equation. Simulation results show that a substantial reduction of the error in the estimated 3D point position is possible. The ego-motion estimate is obtained by computing the rotation and translation between point clouds. This problem is known as absolute orientation, and many least-squares solutions have been proposed in the literature; this work reviews the available closed-form solutions. The presented system consists of three main building blocks: 1. registration of image features, 2. ego-motion estimation, and 3. iterative estimation of the 3D position and 3D velocity of world points. The first block receives a sequence of rectified images as input and delivers a list of tracked image features together with their corresponding 3D positions. The ego-motion block consists of four main steps executed in a loop: 1. motion prediction, 2. application of the Smoothness Motion Constraint (SMC), 3. computation of the absolute orientation, and 4. motion integration. The SMC proposed in this dissertation is a powerful criterion for rejecting outliers and for assigning weights to the measured 3D points. Simulations with Gaussian and slash-distributed noise show the superiority of the SMC-based weighting over standard weighting methods, and an analysis of the robustness to outliers yields a breakdown point greater than 50%. When the four steps are executed iteratively, a predictor/corrector scheme is obtained. We call this multi-frame estimation, in contrast to two-frame estimation, which considers only the current and previous image pairs for computing the ego-motion. The first iteration is performed between the current and the previous cloud of points; each further iteration integrates an additional point cloud from an earlier time instant. This method reduces the accumulation of errors when multiple estimates are integrated into a single global estimate.
Simulation results show that although the error still grows super-linearly over time, its magnitude is reduced by several orders of magnitude. The third block performs the iterative estimation of the 3D position and 3D velocity of world points using a Kalman filter that fuses stereo, feature tracking and ego-motion data. Position measurements of a world point are obtained from the stereo camera system, and differentiating the estimated position additionally yields the point's velocity; the measurements enter through a measurement model that fuses stereo and motion data. Simulation results validate the model, and a Monte Carlo simulation demonstrates the reduction of the position uncertainty over time. Experimental results are obtained on long image sequences. Additional tests, including the 3D reconstruction of a forest scene and the computation of free camera motion in an indoor scenario, were carried out. The method shows good results in all cases, and the algorithm also yields acceptable results when estimating the pose of small objects such as the heads and legs of real crash-test dummies.
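
The absolute-orientation step at the core of the ego-motion block has a well-known closed-form solution; the sketch below shows a weighted SVD-based variant in which per-point weights stand in for the role the Smoothness Motion Constraint plays in the thesis. The weighting scheme itself is not reproduced, only the weighted least-squares alignment it feeds.

    import numpy as np

    def weighted_absolute_orientation(P, Q, w):
        """Closed-form rigid alignment: minimise sum_i w_i * ||R p_i + t - q_i||^2.

        P, Q : (N, 3) corresponding 3D points from consecutive point clouds
        w    : (N,) non-negative weights (e.g. small for suspected outliers)
        """
        P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
        w = np.asarray(w, dtype=float)
        w = w / w.sum()
        p_bar, q_bar = w @ P, w @ Q                        # weighted centroids
        H = (P - p_bar).T @ ((Q - q_bar) * w[:, None])     # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
        R = Vt.T @ D @ U.T
        t = q_bar - R @ p_bar
        return R, t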

    Visual Navigation for Robots in Urban and Indoor Environments

    As a fundamental capability for mobile robots, navigation involves multiple tasks including localization, mapping, motion planning, and obstacle avoidance. In unknown environments, a robot has to construct a map of the environment while simultaneously keeping track of its own location within the map. This is known as simultaneous localization and mapping (SLAM). For urban and indoor environments, SLAM is especially important since GPS signals are often unavailable. Visual SLAM uses cameras as the primary sensor and is a highly attractive but challenging research topic. The major challenge lies in achieving robustness to lighting variation and uneven feature distribution. Another challenge is to build semantic maps composed of high-level landmarks. To meet these challenges, we investigate feature fusion approaches for visual SLAM. The basic rationale is that since urban and indoor environments contain various feature types such as points and lines, combining these features should improve robustness, and high-level landmarks can be defined as, or derived from, these combinations. We design a novel data structure, the multilayer feature graph (MFG), to organize five types of features and their internal geometric relationships. Building upon a two-view-based MFG prototype, we extend the application of MFG to image-sequence-based mapping using an extended Kalman filter (EKF). We model and analyze how errors are generated and propagated through the construction of a two-view-based MFG. This enables us to treat each MFG as an observation in the EKF update step. We apply the MFG-EKF method to a building exterior mapping task and demonstrate its efficacy. A two-view-based MFG requires a sufficient baseline to be constructed successfully, which is not always feasible. Therefore, we further devise a multiple-view-based algorithm to construct the MFG as a global map. Our proposed algorithm takes a video stream as input, initializes and iteratively updates the MFG based on extracted key frames, and refines robot localization and MFG landmarks using local bundle adjustment. We show the advantage of our method by comparing it with state-of-the-art methods on multiple indoor and outdoor datasets. To avoid the scale ambiguity in monocular vision, we investigate the application of RGB-D sensors for SLAM. We propose an algorithm that fuses point and line features. We extract 3D points and lines from RGB-D data, analyze their measurement uncertainties, and compute camera motion using maximum likelihood estimation. We validate our method using both uncertainty analysis and physical experiments, where it outperforms comparable methods under both constant and varying lighting conditions. Besides visual SLAM, we also study specular object avoidance, which is a great challenge for range sensors. We propose a vision-based algorithm to detect planar mirrors. We derive geometric constraints for corresponding real-virtual features across images and employ RANSAC to develop a robust detection algorithm. Our algorithm achieves a detection accuracy of 91.0%.
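
The mirror-detection step above relies on RANSAC over candidate real-virtual feature correspondences. The generic skeleton below shows that pattern; the actual mirror-plane model and geometric constraints from the dissertation are abstracted into the fit and residual callables, which are placeholders here.

    import numpy as np

    def ransac(data, fit, residual, sample_size, threshold, iters=500, seed=None):
        """Generic RANSAC loop.

        data     : (N, ...) array of observations (e.g. matched feature pairs)
        fit      : callable mapping a minimal sample to model parameters (or None)
        residual : callable mapping (model, data) to per-observation errors
        Returns the model with the largest inlier set and a boolean inlier mask.
        """
        rng = np.random.default_rng(seed)
        best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
        for _ in range(iters):
            sample = data[rng.choice(len(data), sample_size, replace=False)]
            model = fit(sample)
            if model is None:                     # degenerate minimal sample
                continue
            inliers = residual(model, data) < threshold
            if inliers.sum() > best_inliers.sum():
                best_model, best_inliers = model, inliers
        if best_model is not None:
            best_model = fit(data[best_inliers])  # refit on all inliers
        return best_model, best_inliers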