
    Focusing on out-of-focus : assessing defocus estimation algorithms for the benefit of automated image masking

    Acquiring photographs as input for an image-based modelling pipeline is less trivial than often assumed. Photographs should be correctly exposed, cover the subject sufficiently from all possible angles, have the required spatial resolution, be devoid of any motion blur, exhibit accurate focus and feature an adequate depth of field. The last four characteristics all determine the "sharpness" of an image, and the photogrammetric, computer vision and hybrid photogrammetric computer vision communities all assume that the object to be modelled is depicted "acceptably" sharp throughout the whole image collection. Although none of these three fields has ever properly quantified "acceptably sharp", it is more or less standard practice to mask those image portions that appear unsharp due to the limited depth of field around the plane of focus (whether this means blurry object parts or completely out-of-focus backgrounds). This paper assesses how well- or ill-suited defocus-estimating algorithms are for automatically masking a series of photographs, since this could speed up modelling pipelines with many hundreds or thousands of photographs. To that end, the paper uses five different real-world datasets and compares the output of three state-of-the-art edge-based defocus estimators. Afterwards, critical comments and plans for the future finalise this paper.
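As a rough illustration of the kind of automated masking the paper evaluates (not one of the three edge-based estimators it actually benchmarks), a block-wise Laplacian-variance focus measure can be thresholded into a sharp/unsharp mask; the block size and threshold below are arbitrary choices for the sketch:

```python
import numpy as np

def laplacian(img):
    # 4-neighbour finite-difference Laplacian
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1) - 4 * img)
    # discard wrap-around artefacts at the image border
    lap[0, :] = lap[-1, :] = 0
    lap[:, 0] = lap[:, -1] = 0
    return lap

def sharpness_mask(img, block=8, thresh=1e-3):
    # mark a block as "sharp" when the local variance of the
    # Laplacian (a classic focus measure) exceeds the threshold
    lap = laplacian(img)
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = lap[y:y + block, x:x + block]
            mask[y:y + block, x:x + block] = tile.var() > thresh
    return mask
```

In practice the in-focus threshold would have to be tuned per dataset, which is precisely the "acceptably sharp" quantification the abstract notes is missing.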

    Compressive Holographic Video

    Compressed sensing has been discussed separately in the spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed-sensing method for high-speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast-moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super-resolution with multiple-depth recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.
    Comment: 12 pages, 6 figures
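The coded-exposure forward model (one capture as a per-pixel, temporally modulated sum of frames) can be sketched in a few lines. The checkerboard codes and neighbour-mean demultiplexing below are a simplified stand-in for the paper's DMD modulation and compressive reconstruction:

```python
import numpy as np

def code_expose(frames, codes):
    # forward model: single capture = per-pixel temporally coded sum of frames
    return sum(c * f for c, f in zip(codes, frames))

def demultiplex(capture, codes):
    # recover each frame where its code sampled the scene, then fill
    # unsampled pixels with the mean of their sampled 4-neighbours
    recovered = []
    for c in codes:
        rec = np.where(c > 0, capture, np.nan)
        shifted = np.stack([np.roll(rec, s, axis=a)
                            for a in (0, 1) for s in (1, -1)])
        valid = ~np.isnan(shifted)
        neigh = np.nansum(shifted, axis=0) / np.maximum(valid.sum(axis=0), 1)
        recovered.append(np.where(np.isnan(rec), neigh, rec))
    return recovered
```

This interpolation-based inverse only works because the toy codes are complementary; the actual compressive reconstruction solves a joint space-time inverse problem with a sparsity prior.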

    James Webb Space Telescope Optical Simulation Testbed I: Overview and First Results

    The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a tabletop workbench to study aspects of wavefront sensing and control for a segmented space telescope, including both commissioning and maintenance activities. JOST is complementary to existing optomechanical testbeds for JWST (e.g. the Ball Aerospace Testbed Telescope, TBT) given its compact scale and flexibility, ease of use, and colocation at the JWST Science & Operations Center. We have developed an optical design that reproduces the physics of JWST's three-mirror anastigmat using three aspheric lenses; it provides image quality similar to JWST's (80% Strehl ratio) over a field equivalent to a NIRCam module, but at HeNe wavelength. A segmented deformable mirror stands in for the segmented primary mirror and allows control of the 18 segments in piston, tip, and tilt, while the secondary can be controlled in tip, tilt and x, y, z position. This will be sufficient to model many commissioning activities, to investigate field dependence and multiple field point sensing & control, to evaluate alternate sensing algorithms, and to develop contingency plans. Testbed data will also be usable for cross-checking of the WFS&C Software Subsystem, and for staff training and development during JWST's five- to ten-year mission.
    Comment: Proceedings of the SPIE, 9143-150. 13 pages, 8 figures
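A minimal sketch of the kind of segment-control step such a testbed exercises, assuming a linear influence matrix that maps actuator commands to wavefront samples (the matrix and mode basis below are hypothetical, not JOST's calibration):

```python
import numpy as np

def control_commands(influence, measured_wf):
    # least-squares actuator commands that cancel the measured wavefront:
    # solve influence @ cmd ~= -measured_wf
    cmd, *_ = np.linalg.lstsq(influence, -measured_wf, rcond=None)
    return cmd
```

Real WFS&C loops add regularization, actuator limits, and iterative sensing, but the core linear-correction step has this form.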

    Multidimensional Optical Sensing and Imaging Systems (MOSIS): From Macro to Micro Scales

    Multidimensional optical imaging systems for information processing and visualization technologies have numerous applications in fields such as manufacturing, medical sciences, entertainment, robotics, surveillance, and defense. Among different three-dimensional (3-D) imaging methods, integral imaging is a promising multiperspective sensing and display technique. Compared with other 3-D imaging techniques, integral imaging can capture a scene using an incoherent light source and generate real 3-D images for observation without any special viewing devices. This review paper describes passive multidimensional imaging systems combined with different integral imaging configurations. One example is the integral-imaging-based multidimensional optical sensing and imaging systems (MOSIS), which can be used for 3-D visualization, seeing through obscurations, material inspection, and object recognition from microscale to long-range imaging. This system utilizes many degrees of freedom such as time and space multiplexing, depth information, polarimetric, temporal, photon flux and multispectral information based on integral imaging to record and reconstruct the multidimensionally integrated scene. Image fusion may be used to integrate the multidimensional images obtained by polarimetric sensors, multispectral cameras, and various multiplexing techniques. The multidimensional images contain substantially more information compared with two-dimensional (2-D) images or conventional 3-D images. In addition, we present recent progress and applications of 3-D integral imaging including human gesture recognition in the time domain, depth estimation, mid-wave-infrared photon counting, 3-D polarimetric imaging for object shape and material identification, dynamic integral imaging implemented with liquid-crystal devices, and 3-D endoscopy for healthcare applications.
    B. Javidi wishes to acknowledge support by the National Science Foundation (NSF) under Grant NSF/IIS-1422179, and DARPA and US Army under contract number W911NF-13-1-0485. The work of P. Latorre Carmona, A. Martínez-Uso, J. M. Sotoca and F. Pla was supported by the Spanish Ministry of Economy under the project ESP2013-48458-C4-3-P, by MICINN under the project MTM2013-48371-C2-2-PDGI, by Generalitat Valenciana under the project PROMETEO-II/2014/062, and by Universitat Jaume I through project P11B2014-09. The work of M. Martínez-Corral and G. Saavedra was supported by the Spanish Ministry of Economy and Competitiveness under grant DPI2015-66458-C2-1R, and by the Generalitat Valenciana, Spain, under the project PROMETEOII/2014/072.
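Computational reconstruction in integral imaging is commonly done by shift-and-sum refocusing: each elemental image is shifted by its camera's parallax at a chosen depth and the stack is averaged, so scene content at that depth adds coherently. The sketch below assumes a 1-D row of cameras with hypothetical pitch, focal length and depth units:

```python
import numpy as np

def refocus(elemental, pitch, focal, depth):
    # shift-and-sum: shift each elemental image by its camera's
    # parallax at the chosen depth, then average the stack
    K = elemental.shape[0]          # K cameras in a horizontal row
    out = np.zeros(elemental.shape[1:])
    for k in range(K):
        shift = int(round(k * pitch * focal / depth))
        out += np.roll(elemental[k], shift, axis=1)
    return out / K
```

Objects at the chosen depth reinforce to full intensity; objects at other depths smear out, which is the basis for depth estimation by refocus search.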

    Wavefront image sensor chip

    We report the implementation of an image sensor chip, termed wavefront image sensor chip (WIS), that can measure both intensity/amplitude and phase front variations of a light wave separately and quantitatively. By monitoring the tightly confined transmitted light spots through a circular aperture grid in a high Fresnel number regime, we can measure both intensity and phase front variations with a high sampling density (11 µm) and high sensitivity (the sensitivity of normalized phase gradient measurement is 0.1 mrad under the typical working condition). By using WIS in a standard microscope, we can collect both bright-field (transmitted light intensity) and normalized phase gradient images. Our experiments further demonstrate that the normalized phase gradient images of polystyrene microspheres, unstained and stained starfish embryos, and strongly birefringent potato starch granules are improved versions of their corresponding differential interference contrast (DIC) microscope images in that they are artifact-free and quantitative. Besides phase microscopy, WIS can benefit machine recognition, object ranging, and texture assessment for a variety of applications.
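The spot-tracking principle behind such a sensor can be illustrated with an intensity-weighted centroid: the displacement of each aperture's spot from its reference position, divided by the aperture-to-sensor distance, approximates the normalized transverse phase gradient. This is a simplified sketch of the geometry, not the authors' calibration procedure:

```python
import numpy as np

def spot_centroid(tile):
    # intensity-weighted centroid of the light spot behind one aperture
    ys, xs = np.indices(tile.shape)
    total = tile.sum()
    return (ys * tile).sum() / total, (xs * tile).sum() / total

def normalized_phase_gradient(tile, ref_centroid, aperture_to_sensor):
    # spot displacement / propagation distance ~ normalized phase gradient
    cy, cx = spot_centroid(tile)
    ry, rx = ref_centroid
    return (cy - ry) / aperture_to_sensor, (cx - rx) / aperture_to_sensor
```

The reference centroid would be measured once with a flat (unperturbed) wavefront; the spot sum itself gives the bright-field intensity channel.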

    Integrated 2-D Optical Flow Sensor

    I present a new focal-plane analog VLSI sensor that estimates optical flow in two visual dimensions. The chip significantly improves on previous approaches both with respect to the applied model of optical flow estimation and the actual hardware implementation. Its distributed computational architecture consists of an array of locally connected motion units that collectively solve for the unique optimal optical flow estimate. The novel gradient-based motion model assumes visual motion to be translational, smooth and biased. The model guarantees that the estimation problem is computationally well-posed regardless of the visual input. Model parameters can be globally adjusted, leading to a rich output behavior. Varying the smoothness strength, for example, can provide a continuous spectrum of motion estimates, ranging from normal to global optical flow. Unlike approaches that rely on the explicit matching of brightness edges in space or time, the applied gradient-based model assures spatiotemporal continuity on visual information. The non-linear coupling of the individual motion units improves the resulting optical flow estimate because it reduces spatial smoothing across large velocity differences. Extended measurements of a 30×30 array prototype sensor under real-world conditions demonstrate the validity of the model and the robustness and functionality of the implementation.
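The gradient-based, translational end of the model's spectrum can be illustrated with a global least-squares solve of the brightness-constancy constraint Ix·u + Iy·v + It = 0. This is a software sketch of that limiting case only; the chip solves a locally coupled, smoothness-regularized version in analog hardware:

```python
import numpy as np

def translational_flow(frame0, frame1):
    # single global motion estimate from brightness constancy,
    # Ix*u + Iy*v + It = 0, stacked over all pixels and solved
    # in least squares
    Iy, Ix = np.gradient(frame0)        # spatial derivatives
    It = frame1 - frame0                 # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    flow, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return flow                          # (u, v) in pixels per frame
```

Lowering the smoothness coupling on the chip moves the estimate away from this single global vector toward independent local (normal-flow) estimates.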