
    Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras

    This manuscript focuses on the processing of images from microlens-array based plenoptic cameras. These cameras capture the light field in a single shot, recording a greater amount of information than conventional cameras and enabling a whole new set of applications. However, the enhanced information introduces additional challenges and results in higher computational effort. For one, the image is composed of thousands of micro-lens images, making it an unusual case for standard image processing algorithms. Secondly, the disparity information has to be estimated from those micro-images to create a conventional image and a three-dimensional representation. The work in this thesis is therefore devoted to analysing and proposing methodologies for dealing with plenoptic images. A full framework for plenoptic cameras has been built, including the contributions described in this thesis: a blur-aware calibration method to model a plenoptic camera, an optimization method to accurately select the best combination of microlenses, and an overview of the different types of plenoptic cameras and their representations. Datasets consisting of both real and synthetic images have been used to create a benchmark for different disparity estimation algorithms and to inspect the behaviour of disparity under different compression rates. A robust depth estimation approach has also been developed for light field microscopy and images of biological samples.
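    As an illustrative sketch only (not the thesis framework), the disparity between neighbouring micro-lens images can be estimated by simple block matching; the Python/NumPy function below assumes two already-extracted, equally sized micro-images and searches integer horizontal shifts for the lowest mean absolute difference. All names are placeholders.

        import numpy as np

        def microimage_disparity(mi_a, mi_b, max_shift=5):
            # Estimate an integer disparity between two neighbouring micro-images
            # by minimising the mean absolute difference over horizontal shifts.
            h, w = mi_a.shape
            best_shift, best_cost = 0, np.inf
            for d in range(-max_shift, max_shift + 1):
                # overlap region of the two micro-images for candidate shift d
                if d >= 0:
                    a, b = mi_a[:, d:], mi_b[:, :w - d]
                else:
                    a, b = mi_a[:, :w + d], mi_b[:, -d:]
                cost = np.mean(np.abs(a.astype(float) - b.astype(float)))
                if cost < best_cost:
                    best_cost, best_shift = cost, d
            return best_shift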

    Radar Imaging in Challenging Scenarios from Smart and Flexible Platforms


    Ultrasound Imaging

    In this book, we present a dozen state-of-the-art developments in ultrasound imaging, covering, for example, hardware implementation, transducers, beamforming, signal processing, elasticity measurement and diagnosis. The editors would like to thank all the chapter authors, who devoted their efforts to the publication of this book.
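    As a hedged illustration of the beamforming topic listed above (not taken from any chapter of the book), the Python/NumPy sketch below shows delay-and-sum beamforming of a single focal point for a linear array; element positions, sample rate and sound speed are assumed known, and all names are placeholders.

        import numpy as np

        def delay_and_sum(rf, elem_x, fs, c, focus_x, focus_z):
            # rf: (n_elements, n_samples) received echo data
            # elem_x: element x-positions [m]; fs: sample rate [Hz]; c: sound speed [m/s]
            # Returns the coherently summed sample for one focal point (focus_x, focus_z).
            n_elem, n_samp = rf.shape
            out = 0.0
            for i in range(n_elem):
                d_tx = np.hypot(focus_x, focus_z)                 # transmit path from array origin
                d_rx = np.hypot(focus_x - elem_x[i], focus_z)     # receive path to element i
                k = int(round((d_tx + d_rx) / c * fs))            # round-trip delay in samples
                if k < n_samp:
                    out += rf[i, k]
            return out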

    Interferometric Synthetic Aperture Sonar Signal Processing for Autonomous Underwater Vehicles Operating in Shallow Water

    The goal of this research was to develop best-practice image signal processing methods for InSAS systems for bathymetric height determination. Improvements over existing techniques come from the fusion of Chirp-Scaling, a phase-preserving beamforming technique, to form the SAS image; an interferometric Vernier method to unwrap the phase; and confirmation of the direction of arrival with the MUltiple SIgnal Classification (MUSIC) estimation technique. The fusion of Chirp-Scaling, Vernier, and MUSIC leads to stability in the bathymetric height measurement and to improvements in resolution. The method is also computationally faster and uses less memory than existing techniques.
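    As a hedged sketch of the MUSIC direction-of-arrival step described above (illustrative only, not the thesis implementation), the Python/NumPy function below computes the MUSIC pseudo-spectrum for a uniform linear array from narrowband snapshots; the array geometry and all names are assumptions.

        import numpy as np

        def music_spectrum(X, n_sources, d_over_lambda=0.5,
                           angles=np.linspace(-90, 90, 361)):
            # X: (n_sensors, n_snapshots) complex snapshot matrix from a uniform linear array.
            n_sensors, n_snapshots = X.shape
            R = X @ X.conj().T / n_snapshots                  # sample covariance matrix
            eigval, eigvec = np.linalg.eigh(R)                # eigenvalues in ascending order
            En = eigvec[:, :n_sensors - n_sources]            # noise-subspace eigenvectors
            k = np.arange(n_sensors)
            p = np.empty(len(angles))
            for i, theta in enumerate(np.deg2rad(angles)):
                a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
                p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)       # pseudo-spectrum
            return angles, p

    Peaks of the returned pseudo-spectrum indicate candidate directions of arrival.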

    Radar Technology

    In this book, “Radar Technology”, the chapters are divided into four main topic areas. Topic area 1, “Radar Systems”, consists of chapters which treat whole radar systems, the environment, and the target functional chain. Topic area 2, “Radar Applications”, shows various applications of radar systems, including meteorological radars, ground penetrating radars and glaciology. Topic area 3, “Radar Functional Chain and Signal Processing”, describes several aspects of radar signal processing, from parameter extraction and target detection to tracking and classification technologies. Topic area 4, “Radar Subsystems and Components”, covers the design of radar subsystem components, such as antenna design and waveform design.

    On the Applicability of Genetic Algorithms to Fast Solar Spectropolarimetric Inversions for Vector Magnetography

    The measurement of vector magnetic fields on the sun is one of the most important diagnostic tools for characterizing solar activity. The ubiquitous solar wind is guided into interplanetary space by open magnetic field lines in the upper solar atmosphere. Highly energetic solar flares and Coronal Mass Ejections (CMEs) are triggered in lower layers of the solar atmosphere by the driving forces at the visible "surface" of the sun, the photosphere. The driving forces there tangle and interweave the vector magnetic fields, ultimately leading to an unstable field topology with large excess magnetic energy. This excess energy is suddenly and violently released by magnetic reconnection, emitting intense broadband radiation that spans the electromagnetic spectrum, accelerating billions of metric tons of plasma away from the sun, and finally relaxing the magnetic field to lower-energy states. These eruptive flaring events can have severe impacts on the near-Earth environment and the human technology that inhabits it. This dissertation presents a novel inversion method for inferring the properties of the vector magnetic field from telescopic measurements of the polarization states (Stokes vector) of the light received from the sun, in an effort to develop a method that is fast, accurate, and reliable. One of the long-term goals of this work is to develop a method capable of rapidly producing characterizations of the magnetic field from time-sequential data, such that near real-time projections of the complexity and flare-productivity of solar active regions can be made. This will be a boon to the field of solar flare forecasting, and should help mitigate the harmful effects of space weather on mankind's space-based endeavors. To this end, I have developed an inversion method based on genetic algorithms (GAs) that has the potential for achieving such high-speed analysis.
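    As a hedged sketch of the genetic-algorithm idea (illustrative only; the forward model, parameter bounds and all names are assumptions, not the dissertation's code), the Python/NumPy routine below minimises a user-supplied chi-squared misfit between observed and synthesised Stokes profiles using selection, blend crossover and Gaussian mutation.

        import numpy as np

        def genetic_fit(chi2, bounds, pop_size=50, n_gen=100, seed=0):
            # chi2: callable mapping a parameter vector (e.g. field strength,
            #       inclination, azimuth) to a misfit against the observed Stokes vector.
            # bounds: (n_params, 2) array of lower/upper parameter limits.
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
            for _ in range(n_gen):
                fitness = np.array([chi2(p) for p in pop])
                parents = pop[np.argsort(fitness)[:pop_size // 2]]       # keep the best half
                idx = rng.integers(len(parents), size=(pop_size - len(parents), 2))
                w = rng.random((len(idx), 1))
                children = w * parents[idx[:, 0]] + (1 - w) * parents[idx[:, 1]]  # blend crossover
                children += rng.normal(0.0, 0.05 * (hi - lo), children.shape)     # Gaussian mutation
                pop = np.clip(np.vstack([parents, children]), lo, hi)
            fitness = np.array([chi2(p) for p in pop])
            return pop[np.argmin(fitness)]                               # best-fitting parameters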

    Ultralight Radar Sensor for Autonomous Operations by Mini- and Micro-UAS

    Recent years have seen a boost in operations by mini- and micro-UAS (Unmanned Aircraft Systems, also known as Remotely Piloted Aircraft Systems, RPAS, or simply drones), together with the successful miniaturization of electronic components. Radar sensors have demonstrated favorable features for these operations. However, despite their ability to provide meaningful information for navigation, sense-and-avoid, and imaging tasks, very few radar sensors are currently exploited onboard or developed for autonomous operations with mini- and micro-UAS. Exploration of complex, dangerous, and not easily accessible indoor environments represents a possible application for mini-UAS based on radar technology. In this scenario, the objective of the thesis is to develop design strategies and processing approaches for a novel ultralight radar sensor able to provide the miniaturized platform with Simultaneous Localization and Mapping (SLAM) capabilities, mainly but not exclusively indoors. Millimeter-wave Interferometric Synthetic Aperture Radar (mmw InSAR) technology has been identified as a key asset. At the same time, commercial lightweight radars are tested to assess their potential for autonomous navigation, sense-and-avoid, and imaging. The two main research lines can be outlined as follows. Long-term scenario: development of a very compact and ultralight Synthetic Aperture Radar able to provide mini- or micro-UAS with very accurate 3D awareness in indoor or GPS-denied complex and harsh environments. Short-term scenario: assessment of the true potential of current commercial radar sensors in a UAS-oriented scenario. Within the framework of the long-term scenario, after a review of state-of-the-art SAR sensors, Frequency-Modulated Continuous Wave (FMCW) SAR technology has been selected as the preferred candidate. A design procedure tailored to this technology and a software simulator for operations have been developed in the MATLAB environment. The software simulator accounts for the analysis of ambiguous areas in a three-dimensional environment, different SAR focusing algorithms, and a ray-tracing algorithm specifically designed for indoor operations. The simulations provided relevant information on the actual feasibility of the sensor, as well as on mission design characteristics. Additionally, field tests have been carried out at the Fraunhofer Institute FHR with a mmw SAR. Processing approaches developed from the simulations proved effective when dealing with field data. A very lightweight FMCW radar sensor manufactured by IMST GmbH has been tested for short-term scenario operations. The code for data acquisition was developed in Python for both Windows-based and GNU/Linux-based operating systems. The radar provided information on the range and angle of targets in the scene, making it interesting for radar-aided UAS navigation. Multiple-target tracking and radar odometry algorithms have been developed and tested on actual field data. Radar-only odometry proved to be effective under specific circumstances.
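    As a hedged illustration of the FMCW principle underlying both research lines (a sketch under assumed parameters, not the thesis simulator or the IMST acquisition code), the Python/NumPy function below converts one de-chirped beat signal into a range profile using the standard beat-frequency-to-range mapping.

        import numpy as np

        def fmcw_range_profile(beat, fs, bandwidth, sweep_time, c=3e8):
            # beat: real-valued de-chirped samples from one chirp
            # fs: ADC sample rate [Hz]; bandwidth: swept bandwidth [Hz]; sweep_time: chirp duration [s]
            n = len(beat)
            spectrum = np.abs(np.fft.rfft(beat * np.hanning(n)))    # windowed range FFT
            beat_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            # a beat frequency f_b corresponds to range R = c * f_b * T / (2 * B)
            ranges = c * beat_freqs * sweep_time / (2.0 * bandwidth)
            return ranges, spectrum                                  # spectral peaks mark target ranges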

    Inverse problems in astronomical and general imaging

    The resolution and the quality of an imaged object are limited by four contributing factors. Firstly, the primary resolution limit of a system is imposed by the aperture of an instrument due to the effects of diffraction. Secondly, the finite sampling frequency, the finite measurement time and the mechanical limitations of the equipment also affect the resolution of the images captured. Thirdly, the images are corrupted by noise, a process inherent to all imaging systems. Finally, a turbulent imaging medium introduces random degradations to the signals before they are measured. In astronomical imaging, it is the atmosphere which distorts the wavefronts of the objects, severely limiting the resolution of the images captured by ground-based telescopes. These four factors affect all real imaging systems to varying degrees. All the limitations imposed on an imaging system result in the need to deduce or reconstruct the underlying object distribution from the distorted measured data. This class of problems is called inverse problems. The key to the success of solving an inverse problem is the correct modelling of the physical processes which give rise to the corresponding forward problem. The physical processes, however, contain an infinite amount of information, while only a finite number of parameters can be used in the model. Information loss is therefore inevitable. As a result, the solution to many inverse problems requires additional information or prior knowledge. The application of prior information to inverse problems is a recurrent theme throughout this thesis. An inverse problem that has been an active research area for many years is interpolation, and there exist numerous techniques for solving this problem. However, many of these techniques neither account for the sampling process of the instrument nor include prior information in the reconstruction. These factors are taken into account in the proposed optimal Bayesian interpolator. The process of interpolation is also examined from the point of view of superresolution, as these processes can be viewed as being complementary. Since the principal effect of atmospheric turbulence on an incoming wavefront is a phase distortion, most of the inverse problem techniques devised for this seek to either estimate or compensate for this phase component. These techniques are classified into computer post-processing methods, adaptive optics (AO) and hybrid techniques. Blind deconvolution is a post-processing technique which uses the speckle images to estimate both the object distribution and the point spread function (PSF), the latter of which is directly related to the phase. The most successful approaches are based on characterising the PSF as the aberrations over the aperture. Since the PSF is also dependent on the atmosphere, it is possible to constrain the solution using the statistics of the atmosphere. An investigation shows the feasibility of this approach. The bispectrum is also a post-processing method, one which reconstructs the spectrum of the object. The key component for phase preservation is the property of phase closure, and its application as prior information for blind deconvolution is examined. Blind deconvolution techniques utilise only information in the image channel to estimate the phase, which is difficult. An alternative source of phase information is a Shack-Hartmann (SH) wavefront sensing channel.
However, since phase information is present in both the wavefront sensing and the image channels simultaneously, both of these approaches suffer from using phase information from only one channel. An improved estimate of the phase is achieved by a combination of these methods, ensuring that the phase estimate is made jointly from the data in both the image and the wavefront sensing measurements. This formulation, posed as a blind deconvolution framework, is investigated in this thesis. An additional advantage of this approach is that, since speckle images are imaged in a narrowband while wavefront sensing images are captured by a charge-coupled device (CCD) camera at all wavelengths, the splitting of the light does not compromise the light level for either channel. This provides a further incentive for using simultaneous data sets. The effectiveness of using Shack-Hartmann wavefront sensing data for phase estimation relies on the accuracy of locating the data spots. The commonly used method, which calculates the centre of gravity of the image, is in fact prone to noise and is suboptimal. An improved method for spot location based on blind deconvolution is demonstrated. Ground-based adaptive optics (AO) technologies aim to correct for atmospheric turbulence in real time. Although much success has been achieved, the space- and time-varying nature of the atmosphere renders the accurate measurement of atmospheric properties difficult. It is therefore usual to perform additional post-processing on the AO data. As a result, some of the techniques developed in this thesis are applicable to adaptive optics. One of the methods which utilises elements of both adaptive optics and post-processing is the hybrid technique of deconvolution from wavefront sensing (DWFS). Here, both the speckle images and the SH wavefront sensing data are used. The original proposal of DWFS is simple to implement but suffers from the problem that the magnitude of the object spectrum cannot be reconstructed accurately. The solution proposed for overcoming this is to use an additional set of reference star measurements. This, however, does not completely remove the original problem; in addition, it introduces other difficulties associated with reference star measurements, such as anisoplanatism and the reduction of valuable observing time. In this thesis a parameterised solution is examined which removes the need for a reference star, as well as offering the potential to overcome the problem of estimating the magnitude of the object.
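    As a hedged sketch of the centre-of-gravity spot location mentioned above (illustrative only; the improved blind-deconvolution-based estimator of the thesis is not reproduced here), the Python/NumPy function below computes a thresholded centre of gravity for one Shack-Hartmann subaperture; with threshold=0 it reduces to the plain, noise-prone estimator the text refers to.

        import numpy as np

        def spot_centroid(subap, threshold=0.0):
            # subap: 2-D intensity image of one Shack-Hartmann subaperture.
            # Returns the (x, y) centre of gravity after background thresholding.
            img = np.clip(subap.astype(float) - threshold, 0.0, None)
            total = img.sum()
            if total == 0.0:
                return np.nan, np.nan                       # no signal above the threshold
            y, x = np.indices(img.shape)
            return (x * img).sum() / total, (y * img).sum() / total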

    Point-diffraction interferometry for wavefront sensing in adaptive optics

    The work presented in this thesis aims at the development and validation of a wavefront sensor concept for adaptive optics (AO) called the pupil-modulated Point-Diffraction Interferometer (m-PDI). The m-PDI belongs to a broader family of wavefront sensors called Point-Diffraction Interferometers (PDIs), which make use of a small pinhole to filter a portion of the incoming light, hence generating a reference beam. This allows them to perform wavefront sensing on temporally incoherent light, such as natural guide stars in the context of astronomical AO. Due to their high sensitivity, PDIs are being developed to address several difficult problems in AO, namely measuring quasi-static aberrations to a high degree of accuracy, the cophasing of segmented apertures, and reaching the high-correction regime known as extreme AO. Despite their advantages, they remain limited by their narrow chromatic range, around ∆λ = 2% relative to the central wavelength, and short dynamic range, generally of ±π/2. The purpose of developing the m-PDI is to explore whether this new concept has any advantages regarding these limitations. Indeed, we find that the m-PDI has a maximum chromatic bandwidth of 66% relative to the central wavelength and a dynamic range at least 4 times larger than that of other PDIs. Although the m-PDI concept had been proposed previously, it had not been explored to the extent reached in this manuscript. This thesis presents an initial investigation into the m-PDI, beginning with the development of the theory. Here the theoretical framework is laid out to explain how the interference fringes are modulated by the wavefront, how to then demodulate the propagating electric field’s phase, and finally how to measure the signal-to-noise ratio (SNR). After building analytical and numerical models, a prototype is designed, built and characterised using CHOUGH, a high-order AO testbed in the lab. This incarnation of the m-PDI is called the Calibration & Alignment Wavefront Sensor (CAWS). The characterisation of the CAWS shows two things. The first is that the CAWS’ response is approximately flat across its spatial frequency domain. The second is that its dynamic range decreases at higher frequencies, suggesting that it depends, amongst other things, on the wavefront’s slope. In order to prove that m-PDIs can be used for AO, a control loop is closed using the CAWS and CHOUGH’s deformable mirror, with both monochromatic and broadband light. The results show that the final Strehl ratio increases from 0.2 to 0.66 at a wavelength of 633 nm. The difference in residual aberrations seen separately by the imaging camera and by the CAWS is about 20 nm RMS. This is explained by non-common-path aberrations and low-order aberrations which are invisible to the CAWS. Finally, the instrument was tested on the CANARY AO bench at the William Herschel Telescope. The CAWS was successful at characterising the quasi-static aberrations of the system and at demodulating the phase of wavefronts produced with the deformable mirror. When demodulating on-sky residual aberrations at the back of CANARY’s single-conjugate AO loop, the SNR remained too low for effective wavefront demodulation, only sporadically increasing above 1. These results are not discouraging, as the CAWS was only a first prototype and CANARY is not a high-order system, reaching a Strehl ratio of around 0.5% at 675 nm.
The lessons and improvements for future designs are to increase the diameter of the instrument’s pinhole by at least a factor of two, and to deliver a higher Strehl ratio to it by moving towards longer wavelengths and employing a higher-order AO system.
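    As a hedged sketch of the fringe demodulation step (a generic Fourier-transform method under an assumed known carrier frequency, not the CAWS pipeline), the Python/NumPy function below isolates the modulation sideband of an interferogram, shifts it to the origin, and returns the wrapped phase.

        import numpy as np

        def demodulate_fringes(interferogram, carrier, halfwidth):
            # interferogram: 2-D fringe image
            # carrier: (fx, fy) spatial carrier frequency in cycles per pixel
            # halfwidth: half-size (pixels) of the window kept around the sideband
            ny, nx = interferogram.shape
            F = np.fft.fftshift(np.fft.fft2(interferogram))
            cx = int(round(nx / 2 + carrier[0] * nx))        # sideband column in the shifted FFT
            cy = int(round(ny / 2 + carrier[1] * ny))        # sideband row in the shifted FFT
            side = np.zeros_like(F)
            side[cy - halfwidth:cy + halfwidth, cx - halfwidth:cx + halfwidth] = \
                F[cy - halfwidth:cy + halfwidth, cx - halfwidth:cx + halfwidth]
            side = np.roll(side, (ny // 2 - cy, nx // 2 - cx), axis=(0, 1))  # move sideband to DC
            field = np.fft.ifft2(np.fft.ifftshift(side))
            return np.angle(field)                           # wrapped phase of the wavefront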

    Holistic simulation of optical systems

    For many years, the design of optical systems mainly comprised a linear arrangement of plane or spherical components, such as lenses, mirrors or prisms, and a geometric-optical description by ray tracing led to an accurate and satisfactory result. Today, many modern optical systems, found in a variety of different industrial and scientific applications, deviate from this structure. Polarization, diffraction and coherence, or material interactions such as volume or surface scattering, need to be included when reasonable performance predictions are required. Furthermore, manufacturing and alignment aspects must be considered in the design and simulation of optical systems to ensure that their impact is not damaging to the overall purpose of the corresponding setup. Another important part is the growing field of digital optics. Signal processing algorithms have become an indispensable part of many systems, with an almost unlimited number of current and potential applications. Since these algorithms are an essential part of the system, their compatibility with and impact on the completed system is an important aspect to consider. In principle, this list of relevant topics and examples can be expanded almost without limit. However, the simulation and optimization of the individual sub-aspects often does not lead to a satisfactory result. The goal of this thesis is to demonstrate that the performance prediction of modern optical systems benefits significantly from an aggregation of the individual models and technological aspects. Present concepts are further enhanced by the development and analysis of new approaches and algorithms, leading to a more holistic description and simulation of complex setups as a whole. The long-term objective of this work is comprehensive virtual and rapid prototyping. From an industrial perspective, this would reduce the risk, time and costs associated with the development of an optical system.
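    As a hedged, minimal illustration of the classical geometric-optical description mentioned at the start of the abstract (not part of the holistic framework itself; all names are placeholders), the Python function below traces a paraxial ray through a sequence of spherical surfaces using the standard refraction and transfer equations.

        def paraxial_trace(y, u, surfaces):
            # y, u: starting ray height and paraxial angle (radians)
            # surfaces: list of (thickness_to_next, curvature, n_before, n_after) tuples
            for t, c, n1, n2 in surfaces:
                u = (n1 * u - y * c * (n2 - n1)) / n2   # paraxial refraction: n2*u' = n1*u - y*phi
                y = y + t * u                           # transfer to the next surface
            return y, u

    For example, a thin lens in air can be modelled as two such surfaces with negligible separation, with the final thickness carrying the ray to the image plane.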