6 research outputs found

    Calibration of non-conventional imaging systems


    Camera Calibration with Non-Central Local Camera Models

    Camera calibration is an important prerequisite for many computer vision algorithms such as stereo vision and visual odometry. The goal of camera calibration is to determine both the spatial pose of the cameras and their imaging model. The imaging model of a camera describes the relationship between the 3D world and the image plane. Currently, simple global camera models are commonly estimated in a calibration process that can be carried out with comparatively little effort and a large error tolerance. To evaluate the resulting camera model, the reprojection error is usually used as the measure. However, even simple camera models that cannot precisely describe the imaging behavior of an optical system can achieve low reprojection errors. As a result, poorly calibrated camera models are repeatedly not identified as such. To counteract this, this work proposes a new continuous non-central camera model based on B-splines. This imaging model makes it possible to accurately represent different lenses as well as non-central displacements, such as those caused by placing the camera behind a windshield. Despite its general formulation, this camera model can be estimated with an easy-to-use checkerboard calibration process. To evaluate calibration results, a calibration benchmark is proposed in place of the mean reprojection error. The ground truth of the camera model is described by a discrete viewing-ray-based model. To estimate this model, a calibration process is presented that uses an active display as the target. A local parametrization of the viewing rays is introduced, and a way is shown to estimate the surface of the display together with the intrinsic camera parameters.
By estimating this surface, the mean point-to-line distance is reduced by a factor of more than 20; only then can the camera model estimated in this way serve as ground truth. The proposed camera model and the associated calibration processes are assessed through an extensive evaluation in simulation and in the real world using the new calibration benchmark. It is shown that even in the simplified case of a planar glass pane placed in front of the camera, the proposed model is superior to both a central and a non-central global camera model. Finally, the practicality of the proposed model is demonstrated by calibrating an automated vehicle equipped with six cameras pointing in different directions. With the new model, the mean reprojection error decreases by a factor of two to three for all cameras. In the future, the calibration benchmark will make it possible to compare the results of different calibration methods and to accurately determine the accuracy of an estimated camera model against the ground truth. The reduction in calibration error achieved by the newly proposed camera model helps to increase the accuracy of downstream algorithms such as stereo vision, visual odometry, and 3D reconstruction
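The abstract above argues that the mean reprojection error alone can mask a poorly fitting camera model. As a minimal sketch of what this metric measures, the following code projects checkerboard corners through a simple central pinhole model and averages the pixel distance to the detected corners (the model, function names, and numbers are illustrative, not taken from the thesis):

```python
import numpy as np

def project_pinhole(K, R, t, points_3d):
    """Project 3D world points with a simple central pinhole model.
    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]      # perspective division
    return uv @ K[:2, :2].T + K[:2, 2]  # apply focal lengths and principal point

def mean_reprojection_error(K, R, t, points_3d, observed_px):
    """Mean Euclidean pixel distance between projected and detected corners."""
    proj = project_pinhole(K, R, t, points_3d)
    return np.linalg.norm(proj - observed_px, axis=1).mean()
```

A low value of this metric only certifies agreement at the observed corners; a non-central system (e.g. a camera behind a windshield) can still be badly modeled between them, which is why the thesis proposes a ray-based ground-truth benchmark instead.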

    3D Reconstruction for Optimal Representation of Surroundings in Automotive HMIs, Based on Fisheye Multi-Camera Systems

    The aim of this thesis is the development of new concepts for environmental 3D reconstruction in automotive surround-view systems, where information about the surroundings of a vehicle is displayed to the driver to assist in parking and low-speed maneuvering. The proposed driving assistance system represents a multi-disciplinary challenge combining techniques from both computer vision and computer graphics. This work comprises all necessary steps, from sensor setup and image acquisition up to 3D rendering, in order to provide a comprehensive visualization for the driver. Visual information is acquired by means of standard surround-view cameras with fisheye optics covering large fields of view around the ego vehicle. Stereo vision techniques are applied to these cameras in order to recover 3D information that is finally used as input for image-based rendering. New camera setups are proposed that improve the 3D reconstruction around the whole vehicle according to different criteria. A prototype realization was carried out that gives a qualitative measure of the results achieved and proves the feasibility of the proposed concept
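The abstract relies on fisheye cameras with very large fields of view. One common fisheye mapping (used here purely as an illustration, not as the thesis's actual model) is the equidistant model, where the radial image distance grows linearly with the angle off the optical axis, r = f·θ:

```python
import numpy as np

def equidistant_project(f, cx, cy, point_cam):
    """Equidistant fisheye model: radial image distance r = f * theta,
    where theta is the angle between the viewing ray and the optical axis."""
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)   # angle off the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

Unlike the pinhole model, this mapping stays finite as θ approaches 90°, which is what lets a single camera cover nearly a full hemisphere around the vehicle.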

    Pointing, Acquisition, and Tracking Systems for Free-Space Optical Communication Links

    Pointing, acquisition, and tracking (PAT) systems have been widely applied in many applications, from short-range (e.g. human motion tracking) to long-haul (e.g. missile guidance) systems. This dissertation extends the PAT system into new territory: free-space optical (FSO) communication system alignment, the most important missing ingredient for practical deployment. Exploiting embedded geometric invariances intrinsic to the rigidity of actuators and sensors is a key design feature. Once the configuration of the actuator and sensor is determined, the geometric invariance is fixed and can therefore be calibrated in advance. This calibrated invariance then serves as a transformation for converting sensor measurements into actuator actions. The challenge of the FSO alignment problem lies in pointing at a 3D target using only a 2D sensor. Two solutions are proposed: the first exploits the invariance, known as the linear homography, embedded in FSO applications that involve long link lengths between transceivers or have planar trajectories. The second employs either an additional 2D or 1D sensor, which results in invariances known as the trifocal tensor and the radial trifocal tensor, respectively. Since these invariances are derived under the assumption that the sensor measurements are free from noise, including the uncertainty resulting from aberrations, a robust calibration algorithm is required to retrieve the optimal invariance from noisy measurements. The first solution is sufficient for most PAT systems used for FSO alignment, since a long link length is generally the case. Although PAT systems are normally divided into coarse and fine subsystems to deal with different requirements, both are proven to be governed by a linear homography. Robust calibration algorithms have been developed during this work and further verified by simulations. 
Two prototype systems have been developed: one serves as a fine pointing subsystem, consisting of a beam steerer and an angular resolver, while the other serves as a coarse pointing subsystem, consisting of a rotary gimbal and a camera. The average pointing errors of the two prototypes were less than 170 and 700 micro-rads, respectively. PAT systems based on the second solution are capable of pointing at any target within the intersecting field of view of both sensors, because two sensors provide stereo vision to determine the depth of the target, the missing information that a single 2D sensor cannot recover. They are only required when short-distance FSO communication links must be established. Two simulations were conducted to show the robustness of the calibration procedures and the pointing accuracy with respect to random noise
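The central idea in the first solution above is that a calibrated 3x3 linear homography converts a 2D sensor measurement directly into a 2D actuator command. A minimal sketch of applying such a mapping in homogeneous coordinates (function name and matrices are hypothetical, not from the dissertation):

```python
import numpy as np

def apply_homography(H, sensor_xy):
    """Map a 2D sensor reading to a 2D actuator command through a
    calibrated 3x3 homography, using homogeneous coordinates."""
    p = np.array([sensor_xy[0], sensor_xy[1], 1.0])  # lift to homogeneous form
    q = H @ p
    return q[:2] / q[2]  # dehomogenize back to 2D

# H itself would be calibrated offline from at least four
# sensor/actuator correspondences, e.g. with a normalized DLT fit.
```

Because the mapping is linear in homogeneous coordinates, it can be estimated once per actuator/sensor configuration and then reused at runtime with a single matrix-vector product.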

    Hyperspectral Imaging for Fine to Medium Scale Applications in Environmental Sciences

    The aim of the Special Issue “Hyperspectral Imaging for Fine to Medium Scale Applications in Environmental Sciences” was to present a selection of innovative studies using hyperspectral imaging (HSI) in different thematic fields. This intention reflects the technical developments of the last three decades, which have given HSI the capacity to provide spectrally, spatially and temporally detailed data, enabled by, e.g., hyperspectral snapshot technologies, miniaturized hyperspectral sensors and hyperspectral microscopy imaging. The present book comprises a suite of papers in various fields of environmental sciences—geology/mineral exploration, digital soil mapping, mapping and characterization of vegetation, and sensing of water bodies (including under-ice and underwater applications). In addition, there are two more methodologically/technically oriented contributions, dealing with the optimized processing of UAV data and with the design and testing of a multi-channel optical receiver for ground-based applications. All in all, this compilation documents that HSI is a multi-faceted research topic and will remain so in the future

    Embodiment Sensitivity to Movement Distortion and Perspective Taking in Virtual Reality

    Despite recent technological improvements of immersive technologies, Virtual Reality suffers from severe intrinsic limitations, in particular the immateriality of the visible 3D environment. Typically, any simulation and manipulation in a cluttered environment would ideally require providing collision feedback to every body part (arms, legs, trunk, etc.) and not only to the hands, as originally explored with haptic feedback. This thesis addresses these limitations by relying on a cross-modal perception and cognitive approach instead of haptic or force feedback. We base our design on scientific knowledge of bodily self-consciousness and embodiment. It is known that the instantaneous experience of embodiment emerges from the coherent multisensory integration of bodily signals taking place in the brain, and that altering this mechanism can temporarily change how one perceives properties of one's own body. This mechanism is at play during a VR simulation, and this thesis explores new avenues of interaction design based on these fundamental scientific findings about the embodied self. In particular, we explore the use of a third-person perspective (3PP) instead of permanently offering the traditional first-person perspective (1PP), and we manipulate the user-avatar motor mapping to achieve a broader range of interactions while maintaining embodiment. We are guided by two principles: to explore the extent to which we can enhance VR interaction through the manipulation of bodily aspects, and to identify the extent to which a given manipulation affects the embodiment of a virtual body. Our results provide new evidence supporting strong embodiment of a virtual body even when viewed from 3PP, and in particular that voluntarily alternating the point of view between 1PP and 3PP is not detrimental to the experience of ownership over the virtual body. 
Moreover, detailed analysis of movement quality shows highly similar reaching behavior in both perspective conditions, with obvious advantages or disadvantages of each perspective appearing only in specific situations (e.g. occlusion of the target by the body in 3PP, limited field of view in 1PP). We also show that subjects are insensitive to visuo-proprioceptive movement distortions when the nature of the distortion is not made explicit, and that subjects are biased toward self-attributing distorted movements that make the task easier