75 research outputs found

    Ultra-high speed electro-optical systems employing fiber optics: final report

    Get PDF

    Multiwavelength Observations of the Second Largest Known FR II Radio Galaxy, NVSS 2146+82

    Get PDF
    We present multi-frequency VLA, multicolor CCD imaging, optical spectroscopy, and ROSAT HRI observations of the giant FR II radio galaxy NVSS 2146+82. This galaxy, which was discovered by the NRAO VLA Sky Survey (NVSS), has an angular extent of nearly 20' from lobe to lobe. The radio structure is normal for an FR II source except for its large size and regions in the lobes with unusually flat radio spectra. Our spectroscopy indicates that the optical counterpart of the radio core is at a redshift of z = 0.145, so the linear size of the radio structure is ~4 h_50^-1 Mpc. This object is therefore the second largest FR II known (3C 236 is ~6 h_50^-1 Mpc). Optical imaging of the field surrounding the host galaxy reveals an excess number of candidate galaxy cluster members above the number typically found in the field surrounding a giant radio galaxy. WIYN HYDRA spectra of a sample of the candidate cluster members reveal that six share the same redshift as NVSS 2146+82, indicating the presence of at least a "rich group" containing the FR II host galaxy. ROSAT HRI observations of NVSS 2146+82 place upper limits on the X-ray flux of 1.33 x 10^-13 ergs cm^-2 s^-1 for any hot IGM and 3.52 x 10^-14 ergs cm^-2 s^-1 for an X-ray AGN, thereby limiting any X-ray emission at the distance of the radio galaxy to that typical of a poor group or weak AGN. Several other giant radio galaxies have been found in regions with overdensities of nearby galaxies, and a separate study has shown that groups containing FR IIs are underluminous in X-rays compared to groups without radio sources. We speculate that the presence of the host galaxy in an optically rich group of galaxies that is underluminous in X-rays may be related to the giant radio galaxy phenomenon.
    Comment: 46 pages, 15 figures, AASTeX aaspp4 style, accepted for publication in A
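    For readers who want to reproduce the quoted linear size, converting redshift and angular extent to a projected size is a one-liner once a cosmology is chosen. Below is a minimal sketch, assuming an Einstein-de Sitter cosmology with H0 = 50 km/s/Mpc (the h_50 convention used above) and using astropy for the distance calculation:

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# h_50 convention: H0 = 50 km/s/Mpc; Om0 = 1.0 gives an Einstein-de Sitter universe.
cosmo = FlatLambdaCDM(H0=50, Om0=1.0)

d_A = cosmo.angular_diameter_distance(0.145)   # angular diameter distance, ~686 Mpc
theta = (20 * u.arcmin).to(u.rad).value        # lobe-to-lobe angular extent in radians
print(theta * d_A)                             # ~4 Mpc projected linear size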

    Research on Image Quality Improvement for Underwater Imaging Systems

    Get PDF
    Underwater survey systems have numerous scientific and industrial applications in the fields of geology, biology, mining, and archeology, covering tasks such as ecological studies, environmental damage assessment, and prospection of ancient sites. For the past two decades, underwater imaging systems have mainly been carried by underwater vehicles (UVs) for surveying lakes and oceans. Obtaining good visibility of objects has been difficult because of the physical properties of the medium. Sonar has usually been used for the detection and recognition of targets in the ocean, but because sonar imagery is of low quality, optical vision sensors are used instead for short-range identification. Optical imaging provides short-range, high-resolution visual information of the ocean floor. However, due to the physical properties of light transmission in water, images captured underwater usually suffer from poor visibility: light is strongly attenuated as it travels through the ocean, so imaged scenes appear poorly contrasted and hazy. Underwater image processing techniques are therefore important for improving image quality. In contrast to common photographs, underwater optical images suffer from poor visibility owing to the medium, which causes scattering, color distortion, and absorption. Large suspended particles cause scattering similar to the scattering of light in fog or turbid water. Color distortion occurs because different wavelengths are attenuated to different degrees in water; consequently, images of the ambient underwater environment are dominated by a bluish tone, because longer wavelengths are attenuated more quickly. Absorption of light in water substantially reduces its intensity, and the random attenuation of light causes a hazy appearance, as light backscattered by the water along the line of sight considerably degrades image contrast. In particular, objects at a distance of more than 10 meters from the observation point are almost indiscernible, because colors fade as their characteristic wavelengths are filtered out according to the distance traveled by light in water. Traditional image processing methods are therefore not well suited to these images. This thesis proposes strategies and solutions to tackle the above-mentioned problems of underwater survey systems, contributing image pre-processing, denoising, dehazing, inhomogeneity correction, color correction, and fusion technologies for underwater image quality improvement. The main content of this thesis is as follows. First, comprehensive reviews of the current and most prominent underwater imaging systems are provided in Chapter 1, together with a classification criterion for existing systems based on their main features and performance. After an analysis of the challenges faced by underwater imaging systems, hardware-based and non-hardware-based approaches are introduced. This thesis is concerned with image-processing technologies, one of the non-hardware approaches, and applies recent methods to process low-quality underwater images.
    Different sonar imaging systems are deployed on a wide range of equipment, such as side-scan sonar and multi-beam sonar, and each acquires images with different characteristics. Side-scan sonar acquires high-quality imagery of the seafloor with very high spatial resolution but poor locational accuracy; multi-beam sonar, by contrast, obtains high-precision position and depth measurements at points on the seafloor. In order to fully utilize the information from both types of sonar, Chapter 2 fuses the two kinds of sonar data. Considering the sonar image formation principle, for the low-frequency curvelet coefficients we use the maximum local energy method to calculate the energy of the two sonar images, and for the high-frequency curvelet coefficients we take the absolute-maximum rule as the measurement; a sketch of both rules is given below. The main attributes are: firstly, the multi-resolution analysis method is well adapted to curved singularities and point singularities, which is useful for sonar intensity image enhancement; secondly, maximum local energy performs well on intensity sonar images and achieves very good fusion results [42]. In Chapter 3, after an analysis of the underwater laser imaging system, a Bayesian Contourlet Estimator of Bessel K Form (BCE-BKF) based denoising algorithm is proposed. We use the BCE-BKF probability density function (PDF) to model neighborhoods of contourlet coefficients. According to the proposed PDF model, we then design a maximum a posteriori (MAP) estimator, which relies on a Bayesian statistical representation of the contourlet coefficients of noisy images. The denoised laser images have better contrast than those of competing methods. The proposed method has three clear virtues: firstly, the contourlet transform decomposition outperforms the curvelet and wavelet transforms by using an elliptical sampling grid; secondly, the BCE-BKF model is more effective in representing the contourlet coefficients of noisy images; thirdly, the BCE-BKF model takes full account of the correlation between coefficients [107]. In Chapter 4, we describe a novel method for enhancing underwater images by dehazing. In underwater optical imaging, absorption, scattering, and color distortion are the three major issues. Light rays traveling through water are scattered and absorbed according to their wavelength: scattering is caused by large suspended particles, which degrade optical images captured underwater, and color distortion occurs because different wavelengths are attenuated to different degrees, so that images of the ambient underwater environment are dominated by a bluish tone. Our key contribution is a fast image and video dehazing algorithm that compensates for the attenuation discrepancy along the propagation path and takes the possible presence of an artificial lighting source into consideration [108]. In Chapter 5, we describe a novel method for enhancing underwater optical images or videos using a guided multilayer filter and wavelength compensation. In certain circumstances, the underwater environment must be monitored immediately, for example by disaster-recovery support robots or other underwater survey systems; however, due to the inherent optical properties of the complex underwater environment, the captured images or videos are seriously distorted.
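    The two fusion rules can be stated compactly. The following is a minimal sketch, assuming the curvelet decompositions of the side-scan and multi-beam images are already available as 2-D coefficient arrays (the curvelet transform itself is not shown, and the function names are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_lowpass(cA, cB, win=3):
    # Maximum-local-energy rule: keep the coefficient whose win x win
    # neighborhood carries more energy (mean of squared coefficients).
    eA = uniform_filter(cA * cA, size=win)
    eB = uniform_filter(cB * cB, size=win)
    return np.where(eA >= eB, cA, cB)

def fuse_highpass(cA, cB):
    # Absolute-maximum rule: keep the larger-magnitude coefficient.
    return np.where(np.abs(cA) >= np.abs(cB), cA, cB)
```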
    Our key contributions include a novel depth- and wavelength-based underwater imaging model that compensates for the attenuation discrepancy along the propagation path, and a fast guided multilayer filtering enhancement algorithm. The enhanced images are characterized by a reduced noise level, better exposure of dark regions, and improved global contrast, with the finest details and edges enhanced significantly [109]. The performance and benefits of the proposed approaches are summarized in Chapter 6. Comprehensive experiments and extensive comparisons with existing related techniques demonstrate the accuracy and effectiveness of the proposed methods.
    Doctoral dissertation, Kyushu Institute of Technology, 2013. Degree number: Kohaku No. 367 (工博甲第367号); degree conferred March 25, 2014.
    CHAPTER 1 INTRODUCTION | CHAPTER 2 MULTI-SOURCE IMAGES FUSION | CHAPTER 3 LASER IMAGES DENOISING | CHAPTER 4 OPTICAL IMAGE DEHAZING | CHAPTER 5 SHALLOW WATER DE-SCATTERING | CHAPTER 6 CONCLUSIONS
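    The wavelength compensation idea rests on the Beer-Lambert attenuation law. The sketch below shows only that core step, not the thesis's full depth- and wavelength-based model (which also handles artificial lighting); the attenuation coefficients are placeholders:

```python
import numpy as np

# Placeholder per-channel attenuation coefficients beta (1/m) for R, G, B;
# real values depend on water type and would need calibration.
BETA = np.array([0.40, 0.07, 0.05])

def compensate(img, dist):
    """Invert I = J * exp(-beta * d) per color channel, given a per-pixel
    scene-distance map dist (meters); img is a float RGB array in [0, 1]."""
    t = np.exp(-dist[..., None] * BETA)            # transmission per channel
    return np.clip(img / np.clip(t, 1e-3, 1.0), 0.0, 1.0)
```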

    Machine Learning Approaches to Image Deconvolution

    Get PDF
    Image blur is a fundamental problem in both photography and scientific imaging. Even the most well-engineered optics are imperfect, and finite exposure times cause motion blur. The field of image deconvolution tries to algorithmically recover the original sharp image from the recorded photograph. When the blur is known, this problem is called non-blind deconvolution; when the blur is unknown and has to be inferred from the observed image, it is called blind deconvolution. The key to reconstructing information lost to blur and noise is prior knowledge. To this end, this thesis develops approaches inspired by machine learning that incorporate more of the available information and advance the current state of the art in both non-blind and blind image deconvolution. Optical aberrations of a lens are encoded in an initial calibration step as a spatially varying point spread function. With prior information about the distribution of gradients in natural images, the original image is reconstructed by maximum a posteriori (MAP) estimation, with results comparing favorably to previous methods. By including the camera's color filter array in the forward model, the estimation procedure can perform demosaicing and deconvolution jointly, and thereby surpass the quality of results obtained with a separate demosaicing step. The applicability of removing optical aberrations is broadened further by estimating the point spread function from the image itself. We extend an existing MAP-based blind deconvolution approach into the first algorithm able to blindly remove spatially varying lens blur, including chromatic aberrations. The properties of lenses restrict the class of possible point spread functions and reduce the space of parameters to be inferred, enabling results on par with the best non-blind approaches for the lenses tested in our experiments. To capture more information about the distribution of natural images and to capitalize on the abundance of training data, neural networks prove to be a useful tool. As in other successful non-blind deconvolution methods, a regularized inversion of the blur is performed in the Fourier domain as an initial step. Next, a large neural network learns the mapping from the preprocessed image back to the uncorrupted original. The trained network surpasses the results of state-of-the-art algorithms on both artificial and real-world examples. For the first time, a learning approach also succeeds in blind image deconvolution: a deep neural network "unrolls" the estimation procedure of existing methods for this task. After training end-to-end on artificially generated example images, the network achieves performance competitive with state-of-the-art methods in the generic case, and goes beyond them when trained for a specific image category.
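    The preprocessing step shared by the non-blind pipeline, a regularized inversion of the blur in the Fourier domain, can be sketched in a few lines. This is a generic Tikhonov/Wiener-style inverse under circular boundary conditions, not necessarily the thesis's exact regularizer:

```python
import numpy as np

def regularized_inverse(blurred, psf, lam=1e-2):
    """Regularized deconvolution in the Fourier domain:
    X = conj(K) * Y / (|K|^2 + lam). The PSF is zero-padded to the image
    size with its origin at the top-left pixel (circular convolution);
    lam trades ringing amplification against residual blur."""
    K = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```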

    Analysis of freeform optical systems based on the decomposition of the total wave aberration into Zernike surface contributions

    Get PDF
    The increasing use of freeform optical surfaces raises the demand for optical design tools developed for generalized systems. In the design process, surface-by-surface aberration contributions are of special interest. The expansion of the wave aberration function into field- and pupil-dependent coefficients is an analytical method used for that purpose. Here, an alternative numerical approach utilizing data from the trace of multiple ray sets is proposed. The optical system is divided into segments of the optical path measured along the chief ray; each segment covers one surface and the distance to the subsequent surface. Surface contributions represent the change of the wavefront that occurs due to propagation through individual segments. Further, the surface contributions are divided with respect to their phenomenological origin into intrinsic, induced, and transfer components, each determined from a separate set of rays. The proposed method does not place any constraints on the system geometry or the aperture shape. In this thesis, however, only plane-symmetric systems with near-circular apertures are studied, which enables characterization of the obtained aberration components with Zernike fringe polynomials. The application of the proposed method in the design process of freeform systems is demonstrated. The analysis of Zernike surface contributions provides valuable insight for selecting the starting system with the best potential for correcting aberrations with freeform surfaces, and helps in determining the effective location of a freeform element in a system. Consequently, it is possible to design systems corrected for Zernike aberrations of higher order than the coefficients used for the freeform sag contributions, described with the same Zernike polynomial set.
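    Characterizing a surface contribution with Zernike fringe polynomials amounts to a least-squares fit over the pupil samples. A minimal sketch, showing only the first six fringe terms (the basis and function names are illustrative):

```python
import numpy as np

def zernike_basis(x, y):
    # First six Zernike fringe polynomials on the unit pupil:
    # piston, x/y tilt, defocus, and the two primary astigmatism terms.
    r2 = x * x + y * y
    return np.stack(
        [np.ones_like(x), x, y, 2 * r2 - 1, x * x - y * y, 2 * x * y],
        axis=-1,
    )

def fit_zernike(x, y, opd):
    # Least-squares fringe coefficients for a wavefront (OPD) contribution
    # sampled at pupil coordinates (x, y).
    A = zernike_basis(x, y)
    coeffs, *_ = np.linalg.lstsq(A, opd, rcond=None)
    return coeffs
```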

    Focal Plane Wavefront Sensing using Residual Adaptive Optics Speckles

    Get PDF
    Optical imperfections, misalignments, aberrations, and even dust can significantly limit sensitivity in high-contrast imaging systems such as coronagraphs. An upstream deformable mirror (DM) in the pupil can be used to correct or compensate for these flaws, either to enhance the Strehl ratio or to suppress the residual coronagraphic halo. Measurement of the phase and amplitude of the starlight halo at the science camera is essential for determining the DM shape that compensates for any non-common-path (NCP) wavefront errors. Using DM displacement ripples to create a series of probe and anti-halo speckles in the focal plane has been proposed for space-based coronagraphs and successfully demonstrated in the lab. We present the theory and first on-sky demonstration of a technique to measure the complex halo using the rapidly changing residual atmospheric speckles at the 6.5 m MMT telescope with the Clio mid-IR camera. The AO system's wavefront sensor (WFS) measurements are used to estimate the residual wavefront, allowing us to approximately compute the rapidly evolving phase and amplitude of the speckle halo. When combined with relatively short, synchronized science camera images, the complex speckle estimates can be used to analyze the images interferometrically, leading to an estimate of the static diffraction halo with NCP effects included. In an operational system, this information could be collected continuously and used to iteratively correct quasi-static NCP errors or suppress imperfect coronagraphic halos.
    Comment: Astrophysical Journal (accepted). 26 pages, 21 figures
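    The probe-speckle mechanism can be illustrated with simple Fourier optics: a sinusoidal DM ripple of N cycles across the pupil scatters light into a speckle pair at ±N λ/D. A toy sketch under idealized assumptions (monochromatic light, unaberrated pupil; all parameter values illustrative):

```python
import numpy as np

def probe_speckles(pupil, ripple_nm=10.0, cycles=8, wavelength_nm=3800.0):
    """Propagate a pupil carrying a sinusoidal DM surface ripple to the
    focal plane with an FFT. A ripple with 'cycles' periods across the
    pupil creates a probe-speckle pair at +/- cycles * lambda/D."""
    n = pupil.shape[0]
    x = np.linspace(-0.5, 0.5, n)
    X = np.tile(x, (n, 1))
    # Surface ripple of amplitude ripple_nm; reflection doubles the OPD.
    phase = (4 * np.pi * ripple_nm / wavelength_nm) * np.cos(2 * np.pi * cycles * X)
    field = pupil * np.exp(1j * phase)
    return np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2

# Example: circular aperture filling the array.
# yy, xx = np.mgrid[-1:1:256j, -1:1:256j]
# psf = probe_speckles((xx**2 + yy**2 <= 1).astype(float))
```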

    Data Models for Dataset Drift Controls in Machine Learning With Images

    Full text link
    Camera images are ubiquitous in machine learning research. They also play a central role in the delivery of important services spanning medicine and environmental surveying. However, the application of machine learning models in these domains has been limited because of robustness concerns. A primary failure mode is a performance drop due to differences between the training and deployment data. While there are methods to prospectively validate the robustness of machine learning models to such dataset drifts, existing approaches do not account for explicit models of the primary object of interest: the data. This makes it difficult to create physically faithful drift test cases or to provide specifications of data models that should be avoided when deploying a machine learning model. In this study, we demonstrate how these shortcomings can be overcome by pairing machine learning robustness validation with physical optics. We examine the role raw sensor data and differentiable data models can play in controlling performance risks related to image dataset drift. The findings are distilled into three applications. First, drift synthesis enables the controlled generation of physically faithful drift test cases. The experiments presented here show that the average decrease in model performance is four to ten times less severe than under post-hoc augmentation testing. Second, the gradient connection between task and data models allows for drift forensics that can be used to specify performance-sensitive data models which should be avoided during deployment of a machine learning model. Third, drift adjustment opens up the possibility of processing adjustments in the face of drift, which can speed up and stabilize classifier training at a margin of up to 20% in validation accuracy. A guide to accessing the open code and datasets is available at https://github.com/aiaudit-org/raw2logit.
    Comment: LO and MA contributed equally
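    To make the notion of an explicit data model concrete, the sketch below parameterizes a toy raw-to-image processing pipeline; re-processing the same raw capture with perturbed parameters yields drift test cases in the spirit of the paper's drift synthesis. The pipeline and parameter values are illustrative, not the raw2logit implementation:

```python
import numpy as np

def process(raw, black=64.0, wb=(2.0, 1.0, 1.5), gamma=2.2):
    """Toy parametric data model: raw sensor values -> display image via
    black-level subtraction, white balance, and gamma. Assumes a
    demosaiced RGB raw array; all parameter values are illustrative."""
    img = np.clip(raw.astype(np.float64) - black, 0.0, None)
    img = img / max(img.max(), 1e-9) * np.asarray(wb)
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

# Drift synthesis: re-process the identical raw capture under a perturbed
# data model to obtain a physically faithful drift test case.
# drifted = process(raw, black=80.0, wb=(1.6, 1.0, 1.9), gamma=1.8)
```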

    The VST Photometric Hα Survey of the Southern Galactic Plane and Bulge (VPHAS+)

    Get PDF
    The VST Photometric Hα Survey of the Southern Galactic Plane and Bulge (VPHAS+) is surveying the southern Milky Way in u, g, r, i and Hα at ~1 arcsec angular resolution. Its footprint spans the Galactic latitude range −5° < b < +5° at all longitudes south of the celestial equator. Extensions around the Galactic Centre to Galactic latitudes ±10° bring in much of the Galactic bulge. This European Southern Observatory public survey, begun on 2011 December 28, reaches down to ~20th magnitude (10σ) and will provide single-epoch digital optical photometry for ~300 million stars. The observing strategy and data pipelining are described, and an appraisal of the segmented narrow-band Hα filter in use is presented. Using model atmospheres and library spectra, we compute main-sequence (u - g), (g - r), (r - i) and (r - Hα) stellar colours in the Vega system. We report on a preliminary validation of the photometry using test data obtained from two pointings overlapping the Sloan Digital Sky Survey. An example of the (u - g, g - r) and (r - Hα, r - i) diagrams for a full VPHAS+ survey field is given. Attention is drawn to the opportunities for studies of compact nebulae and nebular morphologies that arise from the image quality being achieved. The value of the u band as the means to identify planetary-nebula central stars is demonstrated by the discovery of the central star of NGC 2899 in survey data. Thanks to its excellent imaging performance, the VLT Survey Telescope (VST)/OmegaCam combination used by this survey is a perfect vehicle for automated searches for reddened early-type stars, and will allow the discovery and analysis of compact binaries, white dwarfs and transient sources
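    The quoted main-sequence colours come from synthetic photometry: integrating model or library spectra through the filter response curves and referencing Vega. A minimal sketch of that computation (all array names are placeholders for the spectra and filter curves):

```python
import numpy as np

def synth_mag(wl, flux, wl_f, resp, wl_vega, f_vega):
    """Synthetic Vega-system magnitude: integrate a spectrum through a
    filter response and normalize by Vega observed through the same filter."""
    def integral(w, f):
        r = np.interp(w, wl_f, resp, left=0.0, right=0.0)
        return np.trapz(f * r * w, w)   # photon-counting weighting
    return -2.5 * np.log10(integral(wl, flux) / integral(wl_vega, f_vega))

# A colour such as (r - Ha) is the difference of two such magnitudes
# computed with the r and Ha filter curves.
```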