
    Robust vision based slope estimation and rocks detection for autonomous space landers

    As future robotic surface exploration missions to other planets, moons and asteroids become more ambitious in their science goals, there is a rapidly growing need to significantly enhance the capabilities of entry, descent and landing technology so that landings can be carried out with pin-point accuracy at previously inaccessible sites of high scientific value. Because of the extreme uncertainty in the touch-down locations of current missions and the absence of any effective hazard detection and avoidance capability, mission designers must exercise extreme caution when selecting candidate landing sites. The entire landing uncertainty footprint must be placed completely within a region of relatively flat, hazard-free terrain in order to minimise the risk of mission-ending damage to the spacecraft at touchdown. Consequently, vast numbers of scientifically rich landing sites must be rejected in favour of safer alternatives that may not offer the same level of scientific opportunity. Truly scientifically interesting locations on planetary surfaces are rarely found in such hazard-free and easily accessible places, and so goals have been set for a number of advanced capabilities of future entry, descent and landing technology. Key amongst these is the ability to reliably detect and safely avoid all mission-critical surface hazards in the area surrounding a pre-selected landing location. This thesis investigates techniques for using a single camera as the primary sensor in the preliminary development of a hazard detection system capable of supporting pin-point landing operations for next-generation robotic planetary landers. The stated requirements for such a system are the ability to detect slopes greater than 5 degrees and surface objects greater than 30 cm in diameter. The primary contribution of this thesis is a feature-based, self-initialising, fully adaptive structure from motion (SFM) algorithm built on a robust square-root unscented Kalman filtering framework, together with the fusion of the resulting SFM scene structure estimates with a sophisticated shape from shading (SFS) algorithm that has the potential to produce very dense and highly accurate digital elevation models (DEMs) of sufficient resolution to achieve the sensing accuracy required by next-generation landers. The system can adapt to changes in the external noise environment caused by intermittent and varying rocket motor thrust and/or sudden turbulence during descent; such changes alter the vibrations experienced by the platform and introduce varying levels of motion blur that affect the accuracy of image feature tracking. Accurate scene structure estimates have been obtained with this system from both real and synthetic descent imagery, allowing the production of accurate DEMs. While further work is required to produce DEMs with the resolution and accuracy needed to determine slopes and the presence of small objects such as rocks at the required levels of accuracy, this thesis presents a very strong foundation upon which to build and goes a long way towards a highly robust and accurate solution.
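
    The recursion underneath such a filtering framework can be sketched compactly. The following is a minimal numpy illustration of one predict/update cycle of a plain unscented Kalman filter; the thesis uses a square-root formulation for numerical robustness, and the motion model f, measurement model h and noise covariances here are illustrative placeholders rather than the thesis's actual models.

        import numpy as np

        def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
            """Generate 2n+1 sigma points and their mean/covariance weights."""
            n = x.size
            lam = alpha**2 * (n + kappa) - n
            L = np.linalg.cholesky((n + lam) * P)    # matrix square root
            X = np.vstack([x, x + L.T, x - L.T])     # rows are sigma points
            Wm = np.full(2 * n + 1, 0.5 / (n + lam))
            Wm[0] = lam / (n + lam)
            Wc = Wm.copy()
            Wc[0] += 1.0 - alpha**2 + beta
            return X, Wm, Wc

        def ukf_step(x, P, z, f, h, Q, R):
            """One predict/update cycle of a plain (non-square-root) UKF."""
            X, Wm, Wc = sigma_points(x, P)
            # Predict: propagate sigma points through the motion model f.
            Xp = np.array([f(s) for s in X])
            x_pred = Wm @ Xp
            dX = Xp - x_pred
            P_pred = dX.T @ (Wc[:, None] * dX) + Q
            # Update: map the propagated points through the measurement model h.
            Z = np.array([h(s) for s in Xp])
            z_pred = Wm @ Z
            dZ = Z - z_pred
            S = dZ.T @ (Wc[:, None] * dZ) + R        # innovation covariance
            Pxz = dX.T @ (Wc[:, None] * dZ)          # cross-covariance
            K = Pxz @ np.linalg.inv(S)               # Kalman gain
            return x_pred + K @ (z - z_pred), P_pred - K @ S @ K.T

    A square-root variant propagates a Cholesky factor of P instead of P itself, which keeps the covariance positive semi-definite in the presence of round-off error.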

    Computer vision methods applied to person tracking and identification

    Computer vision methods for tracking and identification of people in constrained and unconstrained environments have been widely explored in recent decades. Despite active research on these topics, they remain open problems for which standards and common guidelines have not yet been defined. The application fields of vision-based tracking systems are almost endless. Augmented reality is currently a very active research field that depends on vision-based user tracking to work: being defined as the fusion of real and virtual worlds, the success of an augmented reality application is completely dependent on the efficiency of the tracking method it employs. This thesis covers the issues related to tracking systems in augmented reality applications, proposing a comprehensive and adaptable framework for marker-based tracking together with a thorough formal analysis. The analysis makes it possible to objectively assess and quantify the advantages of applying augmented reality principles in heterogeneous operational contexts. Two case studies are considered: support for maintenance in an industrial environment and for electrocardiography in a typical telemedicine scenario. Advantages and drawbacks are discussed, as well as future directions of the proposed study. The second topic covered in this thesis is vision-based tracking for unconstrained outdoor environments. In the video surveillance domain, a tracker must handle variations in illumination, cope with appearance changes of the tracked objects and, possibly, predict motion to better anticipate future positions. ...
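
    At the heart of marker-based tracking is pose recovery from the 2D-3D correspondences a fiducial marker provides. Below is a minimal OpenCV sketch assuming the four marker corners have already been detected; the intrinsics and pixel coordinates are made-up values for illustration, not taken from the thesis.

        import cv2
        import numpy as np

        # Hypothetical camera intrinsics from a prior calibration step.
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        dist = np.zeros(5)                # assume negligible lens distortion

        # 3D corners of a square marker of side 5 cm, in the marker's own frame.
        s = 0.05
        obj_pts = np.array([[-s / 2,  s / 2, 0.0],
                            [ s / 2,  s / 2, 0.0],
                            [ s / 2, -s / 2, 0.0],
                            [-s / 2, -s / 2, 0.0]])

        # 2D pixel corners as a marker detector would report them (invented).
        img_pts = np.array([[310.0, 215.0],
                            [352.0, 218.0],
                            [349.0, 260.0],
                            [307.0, 257.0]])

        # Recover the camera pose relative to the marker.
        ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
        R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 matrix
        print("camera-from-marker rotation:\n", R)
        print("translation (metres):", tvec.ravel())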

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefits in terms of reduced trauma, improved recovery and shortened hospitalisation are well established, there is a sustained need for improved training in the existing procedures and for the development of new smart instruments to tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view within a complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation. More specifically, it addresses the challenging problem of building high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in both 3D shape reconstruction and localisation and mapping, vision-based techniques have attracted significant interest in recent years for surgical navigation. The thesis also tackles the problem of using vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it requires no additional hardware and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation that takes tissue structural and appearance changes into account. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed method.
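
    The simplest flavour of field-of-view expansion can be sketched as 2D mosaicking: register each incoming endoscopic frame against an anchor frame and paste it onto a growing canvas. The sketch below uses ORB features and a RANSAC homography, which assumes a roughly planar or distant scene; the thesis's approach instead builds a full 3D map, so this illustrates only the basic registration idea, and all names are chosen for illustration.

        import cv2
        import numpy as np

        def expand_view(anchor, incoming):
            """Warp 'incoming' into the frame of 'anchor' to widen the mosaic."""
            orb = cv2.ORB_create(2000)
            kp1, des1 = orb.detectAndCompute(anchor, None)
            kp2, des2 = orb.detectAndCompute(incoming, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)
            src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # incoming -> anchor
            h, w = anchor.shape[:2]
            canvas = cv2.warpPerspective(incoming, H, (2 * w, h))  # crude canvas size
            canvas[:h, :w] = anchor                                # anchor kept as-is
            return canvas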

    Sequential assimilation of crowdsourced social media data into a simplified flood inundation model

    Flooding is the most common natural hazard worldwide. Severe floods can cause significant damage and sometimes loss of life. During a flood event, hydraulic models play an important role in forecasting and identifying potentially inundated areas where emergency responses should be deployed. Nevertheless, hydraulic models cannot capture all of the processes in flood propagation, because flood behaviour is highly dynamic and complex; there are therefore always uncertainties associated with model simulations. As a result, near real-time observations need to be incorporated into hydraulic models to improve their forecasting skill. Crowdsourced (CS) social media data presents an opportunity for supporting urban flood management, as it provides insightful information collected by individuals in near real time. In this thesis, approaches to maximise the impact of CS social media data (Twitter) in reducing uncertainty in flood inundation modelling (LISFLOOD-FP) through data assimilation were investigated. The developed methodologies were tested and evaluated using a real flooding case study of Phetchaburi city, Thailand. Firstly, two approaches (binary logistic regression and fuzzy logic) were developed, based on Twitter metadata and spatiotemporal analysis, to assess the quality of the CS social media data. Both methods produced good results, but the binary logistic model was preferred as it involved less subjectivity. Next, the generalised likelihood uncertainty estimation methodology was applied to estimate model uncertainty and identify behavioural parameter ranges. Particle swarm optimisation was also carried out to calibrate an optimum model parameter set. Following this, an ensemble Kalman filter was applied to assimilate the flood depth information extracted from the CS data into the LISFLOOD-FP simulations using various updating strategies. The findings show that the global state update suffers from inconsistency in the predicted water levels because it overestimates the impact of the CS data, whereas a topography-based local state update gives encouraging results, narrowing the uncertainty in model forecasts, albeit for a short time period. To extend this improvement, a combination of state and boundary updating was further investigated to correct both the water levels and the model inputs, and was found to produce longer-lasting improvements in terms of uncertainty reduction. Overall, the results indicate the feasibility of applying CS social media data to reduce model uncertainty in flood forecasting.
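
    The analysis step of a stochastic ensemble Kalman filter is compact enough to sketch directly. The version below corresponds to the plain global state update discussed above (the topography-based local update restricts it to cells near each observation); the linear observation operator H and all variable names are illustrative assumptions, not the thesis's code.

        import numpy as np

        def enkf_update(ens, H, z, R, rng):
            """Stochastic EnKF analysis step.

            ens : (N, n) ensemble of model states (e.g. water depths at grid cells)
            H   : (m, n) observation operator picking out observed cells
            z   : (m,)   observed values (e.g. flood depths extracted from tweets)
            R   : (m, m) observation error covariance
            """
            N = ens.shape[0]
            A = ens - ens.mean(axis=0)           # state anomalies
            HX = ens @ H.T
            HA = HX - HX.mean(axis=0)            # observation-space anomalies
            Pzz = HA.T @ HA / (N - 1) + R        # innovation covariance
            Pxz = A.T @ HA / (N - 1)             # cross covariance
            K = Pxz @ np.linalg.inv(Pzz)         # Kalman gain
            # Perturbed observations, one noisy copy per ensemble member.
            Zp = z + rng.multivariate_normal(np.zeros(len(z)), R, size=N)
            return ens + (Zp - HX) @ K.T         # updated ensemble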

    Advanced Calibration of Automotive Augmented Reality Head-Up Displays

    This work presents advanced calibration methods for augmented reality head-up displays (AR-HUDs) in motor vehicles, based on parametric perspective projections and non-parametric distortion models. AR-HUD calibration is important for placing virtual objects correctly in relevant applications such as navigation systems or parking manoeuvres. Although the state of the art offers some useful approaches to this problem, this dissertation aims to develop more advanced yet less complicated approaches. As a prerequisite for the calibration, we define several relevant coordinate systems, including the three-dimensional (3D) world, the viewpoint space, the HUD field-of-view (HUD-FOV) space and the two-dimensional (2D) virtual image space. We describe the projection of images from an AR-HUD projector towards the driver's eyes as a view-dependent pinhole camera model consisting of intrinsic and extrinsic matrices. Under this assumption, we first estimate the intrinsic matrix using the boundaries of the HUD viewing area. Next, we calibrate the extrinsic matrices at different viewpoints within a selected "eyebox", taking into account the changing eye positions of the driver. The 3D positions of these viewpoints are tracked by a driver-facing camera. For each individual viewpoint we obtain a set of 2D-3D correspondences between a set of points in the virtual image space and their matching control points in front of the windscreen. Once these correspondences are available, we compute the extrinsic matrix at the corresponding viewpoint. By comparing the re-projected and real pixel positions of these virtual points, we obtain a 2D distribution of bias vectors from which we reconstruct warping maps containing the image distortion information. For completeness, we repeat the above extrinsic calibration procedure at all selected viewpoints. With the calibrated extrinsic parameters we recover the viewpoints in the world coordinate system. Since we simultaneously track these points in the driver camera space, we further calibrate the transformation from the driver camera to world space using these 3D-3D correspondences. To handle unsampled viewpoints within the eyebox, we obtain their extrinsic parameters and warping maps by non-parametric interpolation. Our combination of parametric and non-parametric models outperforms the state of the art in terms of target complexity and time efficiency while maintaining comparable calibration accuracy. In all our calibration schemes, the projection errors in the evaluation phase at a distance of 7.5 metres remain within a few millimetres, corresponding to an angular accuracy of about 2 arc minutes, which is close to the resolving power of the human eye.
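
    For a single viewpoint, the extrinsic calibration from 2D-3D correspondences is a perspective-n-point problem, and the bias vectors fall out of a re-projection comparison. Below is a minimal OpenCV sketch under that reading of the method; the function and variable names are placeholders, and the non-parametric interpolation across the eyebox is not shown.

        import cv2
        import numpy as np

        def calibrate_viewpoint(world_pts, image_pts, K_intr, dist):
            """Extrinsics for one eyebox viewpoint from 2D-3D correspondences,
            plus per-point bias vectors for building a warping map.

            world_pts : (N, 3) control points in front of the windscreen
            image_pts : (N, 2) matching points in the virtual image space
            """
            ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K_intr, dist)
            # Re-project the control points with the fitted parametric model.
            proj, _ = cv2.projectPoints(world_pts, rvec, tvec, K_intr, dist)
            bias = image_pts - proj.reshape(-1, 2)   # residual distortion vectors
            return rvec, tvec, bias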

    Robust Estimation of Motion Parameters and Scene Geometry : Minimal Solvers and Convexification of Regularisers for Low-Rank Approximation

    In the dawning age of autonomous driving, accurate and robust tracking of vehicles is a quintessential part. This is inextricably linked to the problem of Simultaneous Localisation and Mapping (SLAM), in which one tries to determine the position of a vehicle relative to its surroundings without prior knowledge of them. The more you know about the object you wish to track, whether through sensors or mechanical construction, the more likely you are to get good positioning estimates. In the first part of this thesis, we explore new ways of improving positioning for vehicles travelling on a planar surface. This is done in several different ways: first, we generalise work done for monocular vision to include two cameras; we propose ways of speeding up estimation with polynomial solvers; and we develop an auto-calibration method that copes with radially distorted images without requiring pre-calibration procedures. We continue by investigating the case of constrained motion, this time using auxiliary data from inertial measurement units (IMUs) to improve the positioning of unmanned aerial vehicles (UAVs). The proposed methods improve on the state of the art for partially calibrated cases (with unknown focal length) in indoor navigation. Furthermore, we propose the first real-time compatible minimal solver for simultaneous estimation of the radial distortion profile, focal length and motion parameters while utilising the IMU data. In the third and final part of this thesis, we develop a bilinear framework for low-rank regularisation with global optimality guarantees under certain conditions. We also show equivalence between the linear and bilinear frameworks, in the sense that their objectives are equal. This enables users of the alternating direction method of multipliers (ADMM), or other subgradient or splitting methods, to transition to the new framework while enjoying the benefits of second-order methods. Furthermore, we propose a novel regulariser fusing two popular methods; in this way we combine the best of two worlds by encouraging bias reduction while enforcing low-rank solutions.
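
    The workhorse behind nuclear-norm-based low-rank regularisation, as used inside ADMM and other splitting methods, is singular value thresholding: the proximal operator of the nuclear norm. The thesis's bilinear framework and fused regulariser go beyond this, but the basic step is worth sketching; the toy data below is invented for illustration.

        import numpy as np

        def svt(X, tau):
            """Proximal operator of the nuclear norm: soft-threshold singular values."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        # Toy usage: denoise a noisy rank-2 matrix.
        rng = np.random.default_rng(0)
        L = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
        X = L + 0.1 * rng.standard_normal((40, 30))
        L_hat = svt(X, tau=1.0)
        print("rank of estimate:", np.linalg.matrix_rank(L_hat, tol=1e-6))

    Soft-thresholding the singular values shrinks all of them uniformly, which reduces rank but biases the surviving components; regularisers that relax this shrinkage are one way to encourage the bias reduction mentioned above.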