Lightfield Analysis and Its Applications in Adaptive Optics and Surveillance Systems
An image can only be as good as the optics of the camera or other imaging system allow it to be. An imaging system is, at its core, a transformation that maps 3D world coordinates onto a 2D image plane; this transformation can be described by either linear or non-linear transfer functions, and depending on the application at hand, some models are more convenient than others. The best-known models of optical systems are the 1) pinhole model, 2) thin-lens model, and 3) thick-lens model. Light-field analysis is used to describe the connections among these different models, and a novel figure of merit is presented for choosing one optical model over another for a given application.
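As an illustration of the simplest of these models, the following sketch projects a 3D camera-frame point onto the image plane with a pinhole intrinsic matrix. The focal length and principal point are invented illustrative values, not parameters from the thesis.

```python
import numpy as np

# Hypothetical pinhole intrinsics: fx, fy (focal lengths in pixels)
# and (cx, cy) principal point. Values are illustrative only.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_pinhole(K, X):
    """Project a 3D point X = (x, y, z) in camera coordinates to pixel coordinates."""
    x = K @ X               # homogeneous image coordinates
    return x[:2] / x[2]     # perspective division by depth

# A point 2 m in front of the camera, slightly off-axis.
u, v = project_pinhole(K, np.array([0.1, -0.05, 2.0]))
```

The thin- and thick-lens models add finite-aperture and principal-plane effects on top of this same projective core, which is what the light-field analysis in the thesis ties together.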
After analyzing these optical systems, their use in plenoptic cameras for adaptive optics is introduced. A new technique is described for using a plenoptic camera to extract information about a localized, distorted planar wavefront. CODE V simulations conducted in this thesis show that its performance is comparable to that of a Shack-Hartmann sensor and that it can potentially increase the dynamic range of extractable angles, assuming a paraxial imaging system.
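The common principle behind both sensor types can be sketched briefly: the local tilt of the wavefront over a lenslet shifts the spot centroid on the detector, and the slope is approximately the shift divided by the lenslet focal length. This is a generic textbook sketch, not the thesis's CODE V model; the pixel pitch and focal length are assumed values.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y) of a sub-aperture image, in pixels."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def local_slope(img, ref_xy, pixel_pitch, f_lenslet):
    """Local wavefront slope (radians) from spot displacement; paraxial approximation."""
    cx, cy = centroid(img)
    dx = (cx - ref_xy[0]) * pixel_pitch   # displacement in metres
    dy = (cy - ref_xy[1]) * pixel_pitch
    return dx / f_lenslet, dy / f_lenslet

# Synthetic 5x5 sub-image: spot displaced 1 px right of the reference center (2, 2).
spot = np.zeros((5, 5))
spot[2, 3] = 1.0
sx, sy = local_slope(spot, (2.0, 2.0), pixel_pitch=5e-6, f_lenslet=5e-3)
```

A plenoptic camera measures the same slope information through its microlens array, which is what makes the comparison with the Shack-Hartmann sensor natural.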
As a final application, a novel dual-PTZ surveillance system for tracking a target through space is presented. 22× optical zoom lenses on high-resolution pan/tilt platforms recalibrate a master-slave relationship from encoder readouts rather than complicated image-processing algorithms, enabling real-time target tracking. When the target moves out of a region of interest in the master camera's view, the master camera is repositioned to bring the target back into that region. Once the master camera has moved, a precalibrated lookup table is interpolated to recompute the relationship between the master and slave cameras. The homography relating the master camera's pixels to the slave camera's pan/tilt settings then continues to follow a target's planar trajectory through space with high accuracy.
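The lookup-table step can be sketched as follows: a precalibrated table stores slave (pan, tilt) commands at a grid of master encoder positions, and between grid nodes the table is bilinearly interpolated. The grid values below are invented for illustration and are not calibration data from the thesis.

```python
import numpy as np

# Hypothetical calibration grid: master encoder positions (degrees) and the
# slave pointing command measured at each grid node.
master_pan  = np.array([0.0, 10.0, 20.0])
master_tilt = np.array([0.0, 5.0])
slave_pan   = np.array([[ 1.0,  2.0],
                        [11.0, 12.0],
                        [21.0, 22.0]])
slave_tilt  = np.array([[0.5, 5.5],
                        [0.7, 5.7],
                        [0.9, 5.9]])

def bilinear(grid, xs, ys, x, y):
    """Bilinearly interpolate grid[i, j] sampled at coordinates (xs[i], ys[j])."""
    i = int(np.clip(np.searchsorted(xs, x) - 1, 0, len(xs) - 2))
    j = int(np.clip(np.searchsorted(ys, y) - 1, 0, len(ys) - 2))
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * grid[i, j]     + tx * (1 - ty) * grid[i + 1, j]
          + (1 - tx) * ty       * grid[i, j + 1] + tx * ty       * grid[i + 1, j + 1])

# Master encoders read (5 deg pan, 2.5 deg tilt): interpolate the slave command.
sp = bilinear(slave_pan,  master_pan, master_tilt, 5.0, 2.5)
st = bilinear(slave_tilt, master_pan, master_tilt, 5.0, 2.5)
```

Because the table is indexed by encoder readouts alone, the slave can be re-aimed without any per-frame image processing, which is the point of the encoder-based recalibration described above.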
POINTING, ACQUISITION, AND TRACKING FOR DIRECTIONAL WIRELESS COMMUNICATIONS NETWORKS
Directional wireless communications networks (DWNs) are expected to
become a workhorse of the military, as they provide great network capacity in hostile
areas where omnidirectional RF systems can put their users in harm's way. These
networks will also be able to adapt to new missions, change topologies, and use
different communications technologies while still reliably serving all their terminal users. DWNs
also have the potential to greatly expand the capacity of civilian and commercial
wireless communication. The inherently narrow beams present in these types of
systems require a means of steering them, acquiring the links, and tracking to
maintain connectivity. This area of technological challenges encompasses all the
issues of pointing, acquisition, and tracking (PAT).
The two main technologies for DWNs are Free-Space Optical (FSO) and
millimeter wave RF (mmW). FSO offers tremendous bandwidths, long ranges, and
uses existing fiber-based technologies. However, it suffers from severe turbulence
effects when passing through long (multi-kilometer) atmospheric paths, and can be severely
affected by obscuration. MmW systems do not suffer from atmospheric effects
nearly as much, use much more sensitive coherent receivers, and have wider beam
divergences allowing for easier pointing. They do, however, suffer from a lack of
available small-sized power amplifiers, complicated RF infrastructure that must be
steered with a platform, and the requirement that all acquisition and tracking be done
with the data beam, as opposed to FSO which uses a beacon laser for acquisition and
a fast steering mirror for tracking.
This thesis analyzes the many considerations required for designing and
implementing a FSO PAT system, and extends this work to the rapidly expanding
area of mmW DWN systems. Different types of beam acquisition methods are
simulated and tested, and the tradeoffs between various design specifications are
analyzed and simulated to give insight into how to best implement a transceiver
platform.
An experimental test-bed of six FSO platforms is also designed and constructed
to test some of these concepts, along with the implementation of a three-node biconnected
network. Finally, experiments have been conducted to assess the
performance of fixed infrastructure routing hardware when operating with a
physically reconfigurable RF network.
Calibrage et modélisation d'un système de stéréovision hybride et panoramique (Calibration and Modeling of a Hybrid Panoramic Stereovision System)
This thesis presents our contributions to the resolution of two problems encountered in computer vision and photogrammetry: camera calibration and stereovision. Both problems have been studied extensively for many years. Existing calibration techniques differ greatly depending on the type of camera to be calibrated (classical or panoramic, fixed or variable focal length, etc.). Our first contribution is a compact and accurate calibration bench, based on diffractive optical elements, that can calibrate a very large share of existing cameras with good precision. A simple and accurate model describing the projection of the formed grid onto the image, together with a calibration method for each type of camera, is proposed. The technique is very robust, and optimal results were achieved for all of the cameras calibrated. With the multiplication of camera types and the diversity of projection models, a generic image-formation model becomes very attractive. Our second contribution is a unified projection model suitable for several classical and panoramic cameras. In this model, every camera is described by a rectilinear projection combined with composed cubic splines capable of representing all kinds of distortions, both radial and tangential. This approach makes it possible to model hybrid or panoramic stereovision systems geometrically and to convert a panoramic image into a classical one. Consequently, the hybrid or panoramic stereovision problem is turned into a conventional stereovision problem.
Keywords: calibration, panoramic vision, distortion, fisheye, zoom, panomorph, epipolar geometry, three-dimensional reconstruction, hybrid stereovision, panoramic stereovision.
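The spline idea can be sketched in miniature: instead of a fixed radial polynomial such as r(1 + k1·r² + k2·r⁴), the distorted radius is represented as a cubic spline of the ideal radius, fitted at a few sample radii. The sample data below are synthetic and the parameterization is illustrative, not the thesis's exact model; `scipy` is assumed to be available.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic fisheye-like distortion samples: distorted radius versus ideal
# (normalized) radius. A real calibration would fit these from measurements.
r_samples   = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
r_distorted = r_samples * (1.0 - 0.1 * r_samples**2)
spline = CubicSpline(r_samples, r_distorted)

def apply_distortion(x, y):
    """Warp an ideal (rectilinear) image point by the spline radial distortion."""
    r = np.hypot(x, y)
    if r == 0.0:
        return x, y
    scale = float(spline(r)) / r     # ratio of distorted to ideal radius
    return x * scale, y * scale

xd, yd = apply_distortion(0.3, 0.4)  # ideal radius 0.5, pulled slightly inward
```

Because the spline is free-form rather than a low-order polynomial, the same machinery can absorb the very different distortion profiles of fisheye, panomorph, and classical lenses, which is what enables the unified model.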
Depth Estimation from a Single Holoscopic 3D Image and Image Up-sampling with Deep-learning
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. 3D depth information is widely utilized in industries such as security, autonomous vehicles, robotics, 3D printing, AR/VR entertainment, cinematography, and medical science. However, state-of-the-art imaging and 3D depth-sensing technologies are rather complicated or expensive and still lack scalability and interoperability. This research develops innovative techniques for reliable, efficient, and more accurate 3D depth estimation. (1) The proposed multilayer Holoscopic 3D encoding technique reduces the computational cost of extracting viewpoint images from complex structured Holoscopic 3D data by 95% by using labelled multilayer elemental images; it also corrects the misplacement of elemental-image pixels caused by lens distortion. This computational efficiency enables real-time 3D depth-dependent applications. (2) An innovative deep-learning-based single-image super-resolution framework is developed and evaluated. It shows that learning-based image up-sampling techniques can be used even when 3D training data are inadequate, as 2D training data yield the same results.
(3) The research is extended further by the implementation of an H3D depth-disparity-based framework, in which a Holoscopic content-adaptation technique for extracting semi-segmented stereo viewpoint images is introduced and a smart 3D depth-mapping technique is designed; in particular, it provides reasonably accurate 3D depth estimation from H3D images in near real time. A Holoscopic 3D image contains thousands of perspective elemental images drawn from omnidirectional viewpoints, and (4) a novel 3D depth estimation technique is developed to estimate depth directly from a single Holoscopic 3D image without loss of angular information or the introduction of unwanted artefacts. The proposed 3D depth-measurement techniques are computationally efficient, robust, and highly accurate, and can be incorporated into real-time applications such as autonomous vehicles, security, and AR/VR interaction.
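Once a stereo viewpoint pair has been extracted, depth follows from the standard triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline between viewpoints, and d the disparity in pixels. This is the textbook relation behind any disparity-based framework, not code from the thesis; the numeric values are assumptions.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate metric depth from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: an 800 px focal length, a 10 cm baseline, and a 40 px disparity.
z = depth_from_disparity(disparity_px=40.0, focal_px=800.0, baseline_m=0.1)
```

In a Holoscopic 3D image the baseline between elemental viewpoints is tiny, so disparities are small; this is one reason sub-pixel accuracy and computational efficiency matter in the techniques described above.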
Manufacturing Metrology
Metrology is the science of measurement, which can be divided into three overlapping activities: (1) the definition of units of measurement, (2) the realization of units of measurement, and (3) the traceability of measurement units. Manufacturing metrology originally implicates the measurement of components and inputs for a manufacturing process to assure they are within specification requirements. It can also be extended to indicate the performance measurement of manufacturing equipment. This Special Issue covers papers revealing novel measurement methodologies and instrumentations for manufacturing metrology from the conventional industry to the frontier of the advanced hi-tech industry. Twenty-five papers are included in this Special Issue. These published papers can be categorized into four main groups, as follows: Length measurement: covering new designs, from micro/nanogap measurement with laser triangulation sensors and laser interferometers to very-long-distance, newly developed mode-locked femtosecond lasers. Surface profile and form measurements: covering technologies with new confocal sensors and imagine sensors: in situ and on-machine measurements. Angle measurements: these include a new 2D precision level design, a review of angle measurement with mode-locked femtosecond lasers, and multi-axis machine tool squareness measurement. Other laboratory systems: these include a water cooling temperature control system and a computer-aided inspection framework for CMM performance evaluation