3,833 research outputs found

    Extending the depth of field with chromatic aberration for dual-wavelength iris imaging

    We propose a method of extending the depth of field to twice that achievable with conventional lenses, for a low-cost iris-recognition front-facing camera in mobile phones. By introducing intrinsic primary chromatic aberration into the lens, the depth of field is doubled by means of dual-wavelength illumination. The lens parameters (radius of curvature, optical power) can be found analytically using paraxial raytracing. The effective range of distances covered increases with the dispersion of the chosen glass and with a larger distance to the near object point.
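
    A minimal paraxial sketch of the idea, in Python: a dispersive thin lens focuses two near-infrared wavelengths at different object distances for the same fixed sensor position, so the two in-focus ranges can be stacked. The glass indices, surface radii, sensor distance, and wavelengths below are illustrative assumptions, not values from the paper.

```python
# Hypothetical paraxial sketch: a dispersive thin lens brings two wavelengths into
# focus at different object distances for the same sensor plane, so their depths of
# field can be adjoined.  Indices, radii and sensor distance are illustrative guesses.

def focal_length_mm(n, r1_mm, r2_mm):
    """Thin-lens lensmaker's equation (lens in air): 1/f = (n - 1)(1/R1 - 1/R2)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

def in_focus_object_distance_mm(f_mm, sensor_mm):
    """Conjugate object distance from the thin-lens relation 1/f = 1/s_o + 1/s_i."""
    return 1.0 / (1.0 / f_mm - 1.0 / sensor_mm)

R1, R2 = 5.0, -5.0        # biconvex lens radii in mm (assumed)
SENSOR = 5.0              # fixed lens-to-sensor distance in mm (assumed)
indices = {"700 nm": 1.5131, "850 nm": 1.5098}   # approximate BK7-like dispersion

for wavelength, n in indices.items():
    f = focal_length_mm(n, R1, R2)
    s_o = in_focus_object_distance_mm(f, SENSOR)
    print(f"{wavelength}: f = {f:.3f} mm, in-focus object distance = {s_o / 10:.1f} cm")
```

    With these made-up numbers the two wavelengths focus at roughly 19 cm and 26 cm, illustrating how a more dispersive glass or a nearer first object point stretches the combined in-focus range.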

    Theoretical researches into planetary atmospheres and their influence upon surface features

    The interface between certain geological and atmospheric phenomena on Mars was studied by examining those geological features associated with the presence of water in the liquid or solid phase. Several classes of Martian surface features thought to have had their origins in flow processes were studied in order to determine the role ice may have played in their creation. Preliminary studies of the behavior of Martian ice shelves were conducted, with the conclusion that the flow rates of Martian and terrestrial ice sheets are similar. Withdrawal of subsurface ice was found to be among the explanations for the origin of the chaotic terrains and drifted blocks.

    Optimal Depth Estimation and Extended Depth of Field from Single Images by Computational Imaging using Chromatic Aberrations

    The thesis presents a thorough analysis of a computational imaging approach for estimating depth and extending the depth of field from a single image using axial chromatic aberrations. To assist the camera design process, a digital camera simulator is developed that can efficiently simulate different kinds of lenses for a 3D scene. The main contributions in the simulator are a fast implementation of space-variant filtering and an accurate simulation of optical blur at occlusion boundaries. The simulator also includes sensor modeling and digital post-processing to facilitate a co-design of optics and digital processing algorithms. To estimate depth from color images, whose channels are defocused by different amounts due to axial chromatic aberrations, a low-cost algorithm is developed. Because contrast varies across colors, a blur measure that is independent of local contrast is proposed; the normalized ratios between the blur measures of the three colors (red, green, and blue) are used to estimate depth over a larger distance range. An analysis of depth errors is performed, which shows the limitations of depth from chromatic aberrations, especially for narrowband object spectra. Since the blur, and hence the estimated depth, varies over the field, a simple calibration procedure is developed to correct this field-varying behavior. A prototype lens is designed with an optimal amount of axial chromatic aberration for a focal length of 4 mm and an F-number of 2.4. Real captured and synthetic images show depth measurement with a root-mean-square error of 10% over the distance range of 30 cm to 2 m. Taking advantage of the chromatic aberrations and the estimated depth, a method is proposed to extend the depth of field of the captured image. An imaging sensor with white (W) pixels in addition to red, green, and blue (RGB) pixels, combined with a lens exhibiting axial chromatic aberration, is used to overcome the limitations of previous methods. The proposed method first restores the white image with a depth-invariant point spread function, and then transfers the sharpness information of the sharpest color or the white image to the blurred colors. Due to the broadband color filter responses, the blur of each RGB color at its focus position is larger with chromatic aberration than with a chromatic-aberration-corrected lens; the restored white image therefore helps to obtain a sharper image at those positions, and also for objects where the sharpest color information is missing. An efficient implementation of the proposed algorithm achieves better image quality with low computational complexity. Finally, the performance of the depth estimation and extended depth of field is studied for different camera parameters, and criteria are defined for selecting optimal lens and sensor parameters to acquire the desired results with the proposed digital post-processing algorithms.
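
    A hedged sketch of the blur-ratio cue described above: a per-channel sharpness measure that is normalized by local contrast, followed by ratios between channels that can be mapped to depth by calibration. The specific measure (gradient energy over local standard deviation), window size, and normalization below are illustrative stand-ins, not the thesis's actual algorithm.

```python
# Illustrative depth-from-chromatic-aberration cue: contrast-normalized sharpness per
# color channel, then normalized ratios between channels as the raw depth signal.
import numpy as np
from scipy import ndimage

def contrast_normalized_sharpness(channel, win=15):
    """Local mean gradient magnitude divided by local standard deviation, so the
    measure responds to defocus blur rather than to scene contrast."""
    gx = ndimage.sobel(channel, axis=1)
    gy = ndimage.sobel(channel, axis=0)
    grad = np.hypot(gx, gy)
    mean = ndimage.uniform_filter(channel, win)
    mean_sq = ndimage.uniform_filter(channel ** 2, win)
    local_std = np.sqrt(np.maximum(mean_sq - mean ** 2, 1e-12))
    return ndimage.uniform_filter(grad, win) / (local_std + 1e-6)

def blur_ratios(rgb):
    """Normalized sharpness ratios of the R, G and B channels; with axial chromatic
    aberration these ratios vary with object distance and can be calibrated to depth."""
    s = [contrast_normalized_sharpness(rgb[..., c].astype(np.float64)) for c in range(3)]
    total = s[0] + s[1] + s[2] + 1e-6
    return [si / total for si in s]   # one ratio map per channel
```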

    Characteristics of flight simulator visual systems

    The physical parameters of the flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties, corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.

    SPIRE Point Source Catalog Explanatory Supplement

    The Spectral and Photometric Imaging Receiver (SPIRE) was launched as one of the scientific instruments on board the Herschel space observatory. The SPIRE photometer opened up an entirely new window in the submillimeter domain for large-scale mapping, which until then had been very difficult to observe. Several catalogs have already been produced by individual Herschel science projects, yet we estimate that the objects in only a fraction of these maps will ever be systematically extracted and published by the science teams that originally proposed the observations. The SPIRE instrument performed its standard photometric observations in an optically very stable configuration, only moving the telescope across the sky, with variations in its configuration parameters limited to scan speed and sampling rate. This, together with the scarcity of features in the data that require special processing steps, made the dataset very attractive for producing the expert-reduced catalog of point sources described in this document. The catalog was extracted from a total of 6878 unmodified SPIRE scan-map observations. The photometry was obtained by a systematic and homogeneous source extraction procedure, followed by a rigorous quality check that emphasized reliability over completeness. Because regions affected by strong Galactic emission, which pushed the limits of the four source extraction methods used, had to be excluded, this catalog is aimed primarily at the extragalactic community. The result can serve as a pathfinder for ALMA and other submillimeter and far-infrared facilities. The final catalog contains 1,693,718 sources, splitting into 950,688, 524,734, and 218,296 objects for the 250 µm, 350 µm, and 500 µm bands, respectively. The catalog comes with well-characterized environments, reliability, completeness, and accuracies that single programs typically cannot provide.

    X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation

    We suggest representing an X-Field, a set of 2D images taken across different view, time, or illumination conditions (i.e., video, light fields, reflectance fields, or combinations thereof), by learning a neural network (NN) that maps their view, time, or light coordinates to 2D images. Executing this NN at new coordinates results in joint view, time, and light interpolation. The key idea that makes this workable is an NN that already knows the "basic tricks" of graphics (lighting, 3D projection, occlusion) in a hard-coded and differentiable form. The NN represents the input to that rendering as an implicit map that, for any view, time, or light coordinate and for any pixel, can quantify how the pixel will move if the view, time, or light coordinates change (the Jacobian of pixel position with respect to view, time, illumination, etc.). Our X-Field representation is trained for one scene within minutes, leading to a compact set of trainable parameters and hence real-time navigation in view, time, and illumination.
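
    A minimal sketch of the core mechanism, under stated assumptions: a small network maps an X-Field coordinate (e.g., view, time, light) to a per-pixel offset field, and novel images are produced by warping captured ones with that field. The layer sizes, decoder shape, offset scale, and warping details below are illustrative choices; the paper's hard-coded differentiable rendering steps are not reproduced.

```python
# Illustrative coordinate-to-flow network plus warping, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordinateToFlow(nn.Module):
    def __init__(self, coord_dim=3, height=64, width=64):
        super().__init__()
        self.h, self.w = height, width
        self.mlp = nn.Sequential(
            nn.Linear(coord_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, height * width * 2),    # a 2D offset per pixel
        )

    def forward(self, coord):                       # coord: (B, coord_dim)
        flow = self.mlp(coord).view(-1, self.h, self.w, 2)
        return torch.tanh(flow)                     # keep offsets bounded

def warp(image, flow):
    """Bilinearly sample `image` (B, C, H, W) at the base grid shifted by `flow`."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).expand(b, -1, -1, -1)
    return F.grid_sample(image, base + 0.1 * flow, align_corners=True)

# Training would minimize reconstruction error against the captured images at their
# own coordinates; executing the trained model at new coordinates interpolates them.
model = CoordinateToFlow()
img = torch.rand(1, 3, 64, 64)
novel = warp(img, model(torch.tensor([[0.5, 0.2, 0.8]])))
```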

    Novel Dense Stereo Algorithms for High-Quality Depth Estimation from Images

    This dissertation addresses the problem of inferring scene depth from a collection of calibrated images taken from different viewpoints via stereo matching. Although it has been heavily investigated for decades, depth from stereo remains a long-standing challenge and a popular research topic for several reasons. First, to be of practical use in real-time applications such as autonomous driving, accurate depth estimation in real time is of great importance and is one of the core challenges in stereo. Second, for applications such as 3D reconstruction and view synthesis, high-quality depth estimation is crucial to achieve photorealistic results; however, due to matching ambiguities, accurate dense depth estimates are difficult to achieve. Last but not least, most stereo algorithms rely on the identification of corresponding points among images and only work effectively when scenes are Lambertian; for non-Lambertian surfaces, the brightness-constancy assumption is no longer valid. This dissertation contributes three novel stereo algorithms motivated by the specific requirements and limitations imposed by different applications. To address high-speed depth estimation from images, we present a stereo algorithm that achieves high-quality results while maintaining real-time performance. We introduce an adaptive aggregation step in a dynamic-programming framework: matching costs are aggregated in the vertical direction using a computationally expensive weighting scheme based on color similarity and spatial proximity, and we exploit the vector processing capability and parallelism of commodity graphics hardware to speed up this process by over two orders of magnitude. To address high-accuracy depth estimation, we present a stereo model that makes use of constraints from points with known depths, referred to in the stereo literature as Ground Control Points (GCPs). Our formulation explicitly models the influence of GCPs in a Markov Random Field, and a novel regularization prior is naturally integrated into a global inference framework in a principled way using Bayes' rule. This probabilistic framework allows GCPs to be obtained from various modalities and provides a natural way to integrate information from various sensors. To address non-Lambertian reflectance, we introduce a new invariant for stereo correspondence that allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions, BRDFs). This invariant can be used to formulate a rank constraint on stereo matching when the scene is observed under several lighting configurations in which only the lighting intensity varies.
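
    A hedged sketch of the adaptive vertical aggregation idea: raw matching costs are combined along each image column with weights based on color similarity and spatial proximity to the center pixel, followed by a simple winner-take-all step. The absolute-difference cost, weighting constants, and window radius are illustrative assumptions; the dissertation couples the aggregation with dynamic programming and a GPU implementation, neither of which is reproduced here.

```python
# Illustrative adaptive vertical cost aggregation for rectified grayscale stereo pairs.
import numpy as np

def adaptive_vertical_aggregation(left, right, max_disp=32, radius=7,
                                  gamma_color=10.0, gamma_dist=7.0):
    h, w = left.shape
    cost = np.full((max_disp, h, w), 255.0)
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])   # raw matching cost

    aggregated = np.zeros_like(cost)
    for dy in range(-radius, radius + 1):
        shifted = np.roll(left, -dy, axis=0)                      # pixel dy rows away
        weight = np.exp(-np.abs(shifted - left) / gamma_color     # color similarity
                        - abs(dy) / gamma_dist)                   # spatial proximity
        aggregated += weight[None] * np.roll(cost, -dy, axis=1)
    return np.argmin(aggregated, axis=0)                          # winner-take-all disparity

# left, right: float grayscale arrays of equal shape (np.roll wraps at the borders,
# which a real implementation would handle explicitly).
```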

    Surface-wave-enabled darkfield aperture for background suppression during weak signal detection

    Sensitive optical signal detection can often be confounded by the presence of a significant background, and, as such, predetection background suppression is substantively important for weak signal detection. In this paper, we present a novel optical structure design, termed the surface-wave-enabled darkfield aperture (SWEDA), which can be directly incorporated onto optical sensors to accomplish predetection background suppression. The SWEDA structure consists of a central hole and a set of groove patterns that channel incident light to the central hole via surface plasmon wave and surface-scattered wave coupling. We show that the surface-wave component can mutually cancel the direct transmission component, resulting in near-zero net transmission under uniform normal-incidence illumination. Here, we report the implementation of two SWEDA structures. The first, a circular-groove-based SWEDA, provides polarization-independent suppression of uniform illumination with a suppression factor of 1230. The second, a linear-groove-based SWEDA, provides a suppression factor of 5080 for transverse-magnetic waves and can serve as a highly compact (5.5 micrometer length) polarization sensor (the measured transmission ratio of two orthogonal polarizations is 6100). Because the exact destructive-interference balance is highly delicate and can easily be disrupted by nonuniformity of the localized light field or by deviation of the light field from normal incidence, the SWEDA can be used to suppress a bright background and allow sensitive darkfield sensing and imaging (an observed image contrast enhancement of 27 dB for the first SWEDA).
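
    A toy numerical illustration of the stated cancellation, with made-up field amplitudes: when the surface-wave contribution reaches the central hole with nearly equal amplitude and opposite phase to the direct transmission, the net transmitted intensity is strongly suppressed, while a small amplitude or phase imbalance (a localized signal or a tilted field) restores it. The numbers are not the measured values from the paper.

```python
# Coherent sum of two field components: near-cancellation for a uniform background,
# recovered transmission when the balance is perturbed.  All amplitudes are made up.
import numpy as np

def net_transmission(direct, surface, phase_error_rad=0.0):
    """Intensity of the direct field plus a nominally out-of-phase surface-wave field."""
    return abs(direct + surface * np.exp(1j * (np.pi + phase_error_rad))) ** 2

balanced = net_transmission(1.0, 0.995)            # uniform illumination, near cancellation
perturbed = net_transmission(1.0, 0.85, 0.05)      # imbalance from a localized signal
print("transmission ratio (perturbed / balanced):", perturbed / balanced)
```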

    Theory of Non-equilibrium Single Electron Dynamics in STM Imaging of Dangling Bonds on a Hydrogenated Silicon Surface

    During fabrication and scanning-tunneling-microscope (STM) imaging of dangling bonds (DBs) on a hydrogenated silicon surface, we consistently observed halo-like features around isolated DBs under specific imaging conditions. These halos surround individual or small groups of DBs, have abnormally sharp edges, and cannot be explained by conventional STM theory. Here we investigate the nature of these features with a comprehensive three-dimensional model of elastic and inelastic charge transfer in the vicinity of a DB. Our essential finding is that the non-equilibrium current through the localized electronic state of a DB determines the charging state of the DB. This localized charge distorts the electronic bands of the silicon sample, which in turn affects the STM current in that vicinity, causing the halo effect. The influence of various imaging conditions and sample characteristics on STM images of DBs is also investigated.

    The PanCam Instrument for the ExoMars Rover

    The scientific objectives of the ExoMars rover are designed to answer several key questions in the search for life on Mars. In particular, the unique subsurface drill will address some of these, such as the possible existence and stability of subsurface organics. PanCam will establish the surface geological and morphological context for the mission, working in collaboration with other context instruments. Here, we describe the PanCam scientific objectives in geology, atmospheric science, and 3-D vision. We discuss the design of PanCam, which includes a stereo pair of Wide Angle Cameras (WACs), each with an 11-position filter wheel, and a High Resolution Camera (HRC) for high-resolution investigations of rock texture at a distance. The cameras and electronics are housed in an optical bench that provides the mechanical interface to the rover mast and a planetary protection barrier. The electronic interface is via the PanCam Interface Unit (PIU), and power conditioning is via a DC-DC converter. PanCam also includes a calibration target mounted on the rover deck for radiometric calibration, fiducial markers for geometric calibration, and a rover inspection mirror.