
    Advanced technology development for image gathering, coding, and processing

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine image gathering effectively with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques, as well as to research the concept of adaptive image coding.
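The "optimal filtering" component of area (1) typically refers to minimum-mean-square-error (Wiener) restoration of the gathered image. As a rough illustration of the idea only; the signal, blur, noise level, and noise-to-signal ratio below are invented, not the paper's:

```python
import numpy as np

# 1-D Wiener restoration sketch. All values here are illustrative.
rng = np.random.default_rng(0)
n = 256
x = np.sin(2 * np.pi * np.arange(n) / 32)            # "scene" signal
h = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4)  # Gaussian blur, sigma = 2
h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))                  # blur transfer function
y = np.real(np.fft.ifft(np.fft.fft(x) * H))          # blurred observation
y += 0.01 * rng.standard_normal(n)                   # sensor noise

nsr = 1e-2                                           # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H) ** 2 + nsr)              # Wiener (MMSE) filter
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * W))

# The restoration should be closer to the scene than the raw observation.
print(np.linalg.norm(x_hat - x) < np.linalg.norm(y - x))
```

A larger assumed noise-to-signal ratio suppresses noise more aggressively at the cost of residual blur; the end-to-end assessment described in the abstract quantifies exactly such trade-offs.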

    Methodology for the Integration of Optomechanical System Software Models with a Radiative Transfer Image Simulation Model

    Stray light, any unwanted radiation that reaches the focal plane of an optical system, reduces image contrast, creates false signals or obscures faint ones, and ultimately degrades radiometric accuracy. These detrimental effects can have a profound impact on the usability of collected Earth-observing remote sensing data, which must be radiometrically calibrated to be useful for scientific applications. Understanding the full impact of stray light on data scientific utility is of particular concern for lower cost, more compact imaging systems, which inherently provide fewer opportunities for stray light control. To address these concerns, this research presents a general methodology for integrating point spread function (PSF) and stray light performance data from optomechanical system models in optical engineering software with a radiative transfer image simulation model. This integration method effectively emulates the PSF and stray light performance of a detailed system model within a high-fidelity scene, thus producing realistic simulated imagery. This novel capability enables system trade studies and sensitivity analyses to be conducted on parameters of interest, particularly those that influence stray light, by analyzing their quantitative impact on user applications when imaging realistic operational scenes. For Earth science applications, this method is useful in assessing the impact of stray light performance on retrieving surface temperature, ocean color products such as chlorophyll concentration or dissolved organic matter, etc. The knowledge gained from this model integration also provides insight into how specific stray light requirements translate to user application impact, which can be leveraged in writing more informed stray light requirements. 
In addition to detailing the methodology's radiometric framework, we describe the collection of necessary raytrace data from an optomechanical system model (in this case, using FRED Optical Engineering Software), and present PSF and stray light component validation tests through imaging Digital Imaging and Remote Sensing Image Generation (DIRSIG) model test scenes. We then demonstrate the integration method's ability to produce quantitative metrics to assess the impact of stray light-focused system trade studies on user applications using a Cassegrain telescope model and a stray light-stressing coastal scene under various system and scene conditions. This case study showcases the stray light images and other detailed performance data produced by the integration method that take into account both a system's stray light susceptibility and a scene's at-aperture radiance profile to determine the stray light contribution of specific system components or stray light paths. The innovative contributions provided by this work represent substantial improvements over current stray light modeling and simulation techniques, where the scene image formation is decoupled from the physical system stray light modeling, and can aid in the design of future Earth-observing imaging systems. This work ultimately establishes an integrated-systems approach that combines the effects of scene content and the optomechanical components, resulting in a more realistic and higher fidelity system performance prediction.
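The core of the emulation step is convolving the scene's at-aperture radiance with a PSF whose faint, wide halo carries the stray light contribution. A toy sketch of that idea follows; the kernels, the 5% scattered-energy fraction, and the scene are all invented, standing in for the FRED and DIRSIG data the paper actually uses:

```python
import numpy as np

# Emulating stray light: convolve a scene radiance map with a PSF whose
# broad, faint halo represents scattered light. All values are invented.
def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    g = np.exp(-0.5 * (ax[:, None] ** 2 + ax[None, :] ** 2) / sigma ** 2)
    return g / g.sum()

core = gaussian_kernel(33, 1.5)     # sharp imaging core
halo = gaussian_kernel(33, 10.0)    # wide stray light halo
psf = 0.95 * core + 0.05 * halo     # 5% of the energy scattered (assumed)

scene = np.zeros((64, 64))
scene[32, 32] = 1000.0              # bright point source, e.g. a sun glint

def fft_convolve(img, kernel):
    # Circular FFT convolution; wrap-around is acceptable for this sketch.
    kp = np.zeros_like(img)
    s = kernel.shape[0]
    kp[:s, :s] = kernel
    kp = np.roll(kp, (-(s // 2), -(s // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

image = fft_convolve(scene, psf)
core_only = fft_convolve(scene, core)

# Far from the source, the halo dominates: stray light lifts the background.
print(image[32, 44] > 10 * abs(core_only[32, 44]))
```

In the actual methodology the halo term would be replaced by raytraced PSF and stray light data from the optomechanical model, applied to a DIRSIG radiance scene.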

    Digital implementation of the cellular sensor-computers

    Two different kinds of cellular sensor-processor architectures are used today in various applications. The first is the traditional sensor-processor architecture, in which the sensor array and the processor array are mapped onto each other. The second is the foveal architecture, in which a small active fovea navigates within a large sensor array. This second architecture is introduced and compared here. Both architectures can be implemented with analog or digital processor arrays. The efficiency of the different implementation types is analyzed as a function of the CMOS technology used. It turns out that the finer the technology, the more a digital implementation is favored over an analog one.
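The foveal architecture can be sketched in a few lines: a small window processes a region of a large sensor array, then relocates based on what it sees. Everything below, the saliency rule and the array sizes, is an invented illustration, not the paper's design:

```python
# Foveal sketch: a small active window ("fovea") over a large sensor array
# moves one cell per step toward the brightest pixel it currently sees.

def extract_fovea(sensor, cx, cy, r):
    """Return the window of `sensor` centred at column cx, row cy."""
    return [row[cx - r:cx + r + 1] for row in sensor[cy - r:cy + r + 1]]

def step_toward_max(sensor, cx, cy, r):
    """Move the fovea one cell toward the brightest pixel in its window."""
    win = extract_fovea(sensor, cx, cy, r)
    my, mx = max(((y, x) for y in range(len(win))
                  for x in range(len(win[0]))),
                 key=lambda p: win[p[0]][p[1]])
    dy = (my - r > 0) - (my - r < 0)   # sign of offset from window centre
    dx = (mx - r > 0) - (mx - r < 0)
    return cx + dx, cy + dy

# 16x16 "sensor" with a bright target at row 12, column 13.
sensor = [[0] * 16 for _ in range(16)]
sensor[12][13] = 9

cx, cy = 10, 10                        # initial fovea position
for _ in range(10):
    cx, cy = step_toward_max(sensor, cx, cy, 3)
print((cx, cy))                        # → (13, 12): locked onto the target
```

A real implementation would of course process each fovea window (filtering, feature extraction) rather than just locate a maximum; the point is that only the small window, not the full array, is processed per step.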

    An intelligent real time 3D vision system for robotic welding tasks

    MARWIN is a top-level robot control system that has been designed for automatic robot welding tasks. It extracts welding parameters and calculates robot trajectories directly from CAD models, which are then verified by real-time 3D scanning and registration. MARWIN's 3D computer vision provides a user-centred robot environment in which a task is specified by the user simply by confirming and/or adjusting suggested parameters and welding sequences. The focus of this paper is on describing a mathematical formulation for fast 3D reconstruction using structured light, together with the mechanical design and testing of the 3D vision system, and on showing how such technologies can be exploited in robot welding tasks.
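Structured light reconstruction of this kind reduces, per detected stripe pixel, to intersecting a back-projected camera ray with the known laser plane. A minimal sketch with invented calibration values, not MARWIN's actual geometry:

```python
# Structured light triangulation sketch. The focal length and laser plane
# below are hypothetical, chosen only to make the geometry concrete.

def intersect_ray_plane(ray_dir, plane_point, plane_normal):
    """Intersect a ray from the camera origin with a plane."""
    denom = sum(d * n for d, n in zip(ray_dir, plane_normal))
    t = sum(p * n for p, n in zip(plane_point, plane_normal)) / denom
    return tuple(t * d for d in ray_dir)

f = 800.0                        # focal length in pixels (assumed)
u, v = 100.0, 40.0               # detected laser stripe pixel, from centre
ray = (u / f, v / f, 1.0)        # back-projected camera ray

# Vertical laser sheet at x = 0.1 m in camera coordinates (assumed).
plane_point = (0.1, 0.0, 0.0)
plane_normal = (1.0, 0.0, 0.0)

P = intersect_ray_plane(ray, plane_point, plane_normal)
print(P[2])                      # recovered depth in metres → 0.8
```

Sweeping the stripe (or the part) and repeating this intersection per pixel yields the 3D point cloud that is then registered against the CAD model.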

    Robot training using system identification

    This paper focuses on developing a formal, theory-based design methodology to generate transparent robot control programs using mathematical functions. The research finds its theoretical roots in robot training and system identification techniques such as ARMAX (Auto-Regressive Moving Average models with eXogenous inputs) and NARMAX (Non-linear ARMAX). These techniques produce linear and non-linear polynomial functions that model the relationship between a robot's sensor perception and motor response. The main benefits of the proposed design methodology, compared to traditional robot programming techniques, are: (i) it is a fast and efficient way of generating robot control code; (ii) the generated robot control programs are transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour; and (iii) it requires very little explicit knowledge of robot programming, so that end-users/programmers without any specialised robot programming skills can nevertheless generate task-achieving sensor-motor couplings. The nature of this research is concerned with obtaining sensor-motor couplings, be it through human demonstration via the robot, direct human demonstration, or other means. The viability of our methodology has been demonstrated by teaching various mobile robots different sensor-motor tasks such as wall following, corridor passing, door traversal and route learning.
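The identification step amounts to regressing demonstrated motor commands on polynomial terms of the sensor readings, which is what makes the resulting controller a transparent function. A minimal sketch with a made-up wall-following demonstration, not data from the paper:

```python
import numpy as np

# Polynomial sensor-to-motor identification sketch (NARMAX-style, but
# without lagged terms). The "demonstration" below is synthetic: turn
# rate as a polynomial of the sensed wall distance, target 0.5 m.
rng = np.random.default_rng(1)
d = rng.uniform(0.2, 1.0, 200)                  # sensed wall distance
omega = 2.0 * (d - 0.5) + 0.5 * (d - 0.5) ** 2  # demonstrated turn rate

# Regressor matrix with polynomial terms [1, d, d^2].
X = np.column_stack([np.ones_like(d), d, d ** 2])
theta, *_ = np.linalg.lstsq(X, omega, rcond=None)

# The identified controller is a transparent polynomial in the sensor value.
pred = X @ theta
print(np.max(np.abs(pred - omega)) < 1e-8)
```

Because the fitted coefficients `theta` are the whole controller, one can inspect them directly to form hypotheses about the robot's behaviour, which is benefit (ii) above. Full NARMAX models add lagged sensor and motor terms as extra regressor columns.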

    Recent Progress in Image Deblurring

    This paper comprehensively reviews the recent development of image deblurring, including non-blind/blind and spatially invariant/variant deblurring techniques. These techniques share the same objective of inferring a latent sharp image from one or several corresponding blurry images, while blind deblurring techniques must additionally estimate an accurate blur kernel. Given the critical role of image restoration in modern imaging systems, which must provide high-quality images under complex conditions such as motion, undesirable lighting, and imperfect system components, image deblurring has attracted growing attention in recent years. From the viewpoint of how the ill-posedness, a crucial issue in deblurring tasks, is handled, existing methods can be grouped into five categories: Bayesian inference frameworks, variational methods, sparse representation-based methods, homography-based modeling, and region-based methods. Despite considerable progress, image deblurring, especially in the blind case, remains limited by complex application conditions that make the blur kernel hard to estimate and often spatially variant. We provide a holistic understanding and deep insight into image deblurring in this review. An analysis of the empirical evidence for representative methods, practical issues, and a discussion of promising future directions are also presented. Comment: 53 pages, 17 figures
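As a concrete instance of the Bayesian inference category, Richardson-Lucy deconvolution iterates a multiplicative maximum-likelihood update under Poisson noise. A minimal 1-D sketch, with an invented signal and kernel:

```python
import numpy as np

# 1-D Richardson-Lucy deconvolution, a classic Bayesian-inference
# deblurring scheme. Signal and kernel are illustrative only.
def rl_deconvolve(y, h, iters=50):
    x = np.full_like(y, y.mean())            # flat initial estimate
    h_flip = h[::-1]                         # adjoint (correlation) kernel
    for _ in range(iters):
        conv = np.convolve(x, h, mode="same")
        ratio = y / np.maximum(conv, 1e-12)  # guard against divide-by-zero
        x = x * np.convolve(ratio, h_flip, mode="same")
    return x

x_true = np.zeros(64)
x_true[20] = 1.0
x_true[40] = 0.5                             # two point sources
h = np.array([0.05, 0.25, 0.4, 0.25, 0.05])  # known symmetric blur kernel
y = np.convolve(x_true, h, mode="same")      # blurry observation

x_hat = rl_deconvolve(y, h)
print(np.linalg.norm(x_hat - x_true) < np.linalg.norm(y - x_true))
```

This is the non-blind, spatially invariant case: the kernel `h` is known and constant. The blind methods surveyed must estimate `h` as well, which is where the ill-posedness discussed above bites hardest.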

    Review of high-contrast imaging systems for current and future ground- and space-based telescopes I. Coronagraph design methods and optical performance metrics

    The Optimal Optical Coronagraph (OOC) Workshop at the Lorentz Center in September 2017 in Leiden, the Netherlands, gathered a diverse group of 25 researchers working on exoplanet instrumentation to stimulate the emergence and sharing of new ideas. In this first installment of a series of three papers summarizing the outcomes of the OOC workshop, we present an overview of design methods and optical performance metrics developed for coronagraph instruments. The design and optimization of coronagraphs for future telescopes has progressed rapidly over the past several years in the context of space mission studies for Exo-C, WFIRST, HabEx, and LUVOIR, as well as ground-based telescopes. Design tools have been developed at several institutions to optimize a variety of coronagraph mask types. We aim to give a broad overview of the approaches used, show examples of their utility, and provide the optimization tools to the community. Though it is clear that the basic function of coronagraphs is to suppress starlight while maintaining light from off-axis sources, our community lacks a general set of standard performance metrics that apply to both detecting and characterizing exoplanets. The attendees of the OOC workshop agreed that it would benefit our community to clearly define quantities for comparing the performance of coronagraph designs and systems. Therefore, we also present a set of metrics that may be applied to theoretical designs, testbeds, and deployed instruments. We show how these quantities may be used to easily relate the basic properties of the optical instrument to the detection significance of a given point source in the presence of realistic noise. Comment: To appear in Proceedings of the SPIE, vol. 1069
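One widely used metric of the kind such comparisons rely on is raw contrast: residual intensity normalised to the stellar peak, azimuthally averaged as a function of separation. A sketch on a synthetic post-coronagraph image; the image model and bin choices are invented, not from the paper:

```python
import numpy as np

# Raw-contrast curve sketch: azimuthally averaged residual intensity,
# normalised to the stellar peak, versus radial separation in pixels.
def contrast_curve(img, center, n_bins=10):
    cy, cx = center
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - cy, xx - cx)
    peak = img[cy, cx]
    bins = np.linspace(1, r.max(), n_bins + 1)   # skip the peak pixel itself
    curve = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (r >= lo) & (r < hi)              # one annulus per bin
        curve.append(img[mask].mean() / peak)
    return np.array(curve)

# Synthetic post-coronagraph image: residual core plus a faint halo.
yy, xx = np.indices((101, 101))
r = np.hypot(yy - 50, xx - 50)
img = np.exp(-0.5 * (r / 2.0) ** 2) + 1e-4 * np.exp(-r / 20.0)

curve = contrast_curve(img, (50, 50))
print(bool(np.all(np.diff(curve) <= 0)))  # contrast deepens with separation
```

Real contrast curves are usually quoted at fixed multiples of lambda/D rather than pixels, and for detection metrics the annulus statistics feed a signal-to-noise estimate rather than a plain mean.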

    The Subaru Coronagraphic Extreme Adaptive Optics system: enabling high-contrast imaging on solar-system scales

    The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument is a multipurpose high-contrast imaging platform designed for the discovery and detailed characterization of exoplanetary systems, and it serves as a testbed for high-contrast imaging technologies for ELTs. It is a multi-band instrument that makes use of light from 600 to 2500 nm, allowing for coronagraphic direct exoplanet imaging of the inner 3 lambda/D from the stellar host. Wavefront sensing and control are key to the operation of SCExAO. A partial correction of low-order modes is provided by Subaru's facility adaptive optics system, with the final correction, including high-order modes, implemented downstream by a combination of a visible pyramid wavefront sensor and a 2000-element deformable mirror. The well-corrected NIR (y-K bands) wavefronts can then be injected into any of the available coronagraphs, including but not limited to the phase-induced amplitude apodization and vector vortex coronagraphs, both of which offer an inner working angle as low as 1 lambda/D. Non-common-path, low-order aberrations are sensed with a coronagraphic low-order wavefront sensor in the infrared (IR). Low-noise, high-frame-rate NIR detectors allow for active speckle nulling and coherent differential imaging, while the HAWAII 2RG detector in the HiCIAO imager and/or the CHARIS integral field spectrograph (from mid 2016) can take deeper exposures and/or perform angular, spectral and polarimetric differential imaging. Science in the visible is provided by two interferometric modules, VAMPIRES and FIRST, which enable sub-diffraction-limited imaging in the visible region with polarimetric and spectroscopic capabilities respectively. We describe the instrument in detail and present preliminary results both on-sky and in the laboratory. Comment: Accepted for publication, 20 pages, 10 figures
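The angular scales quoted above can be checked directly: lambda/D for Subaru's 8.2 m aperture sets the physical size of the 1 lambda/D inner working angle. The 1.6 um (H-band) wavelength below is our choice for illustration, within the instrument's 600 to 2500 nm range:

```python
import math

# One lambda/D for the 8.2 m Subaru aperture, in milliarcseconds.
D = 8.2                    # Subaru primary diameter, metres
lam = 1.6e-6               # wavelength, metres (H band, chosen here)
lam_over_d = lam / D       # radians
mas = lam_over_d * (180 / math.pi) * 3600 * 1000
print(round(mas, 1))       # ~40 mas: the 1 lambda/D inner working angle
```

At 3 lambda/D, roughly 120 mas at this wavelength, such separations correspond to solar-system scales (a few au) for nearby stars, which is the regime the instrument targets.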