
    PhoSim-NIRCam: Photon-by-photon image simulations of the James Webb Space Telescope's Near-Infrared Camera

    Recent instrumentation projects have allocated resources to develop codes for simulating astronomical images. Novel physics-based models are essential for understanding telescope, instrument, and environmental systematics in observations. A deep understanding of these systematics is especially important in the context of weak gravitational lensing, galaxy morphology, and other sensitive measurements. In this work, we present an adaptation of a physics-based ab initio image simulator: the Photon Simulator (PhoSim). We modify PhoSim for use with the Near-Infrared Camera (NIRCam) -- the primary imaging instrument aboard the James Webb Space Telescope (JWST). This photon Monte Carlo code replicates the observational catalog, telescope and camera optics, detector physics, and readout modes/electronics. Importantly, PhoSim-NIRCam simulates both geometric aberration and diffraction across the field of view. Full field- and wavelength-dependent point spread functions and simulated images of an extragalactic field are presented. Extensive validation is planned during in-orbit commissioning.
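    The photon-by-photon approach described above can be illustrated with a minimal sketch: each simulated photon draws a wavelength, is scattered by a wavelength-dependent diffraction blur, and deposits one count on a detector grid. All parameter values here are illustrative assumptions, not PhoSim or NIRCam values, and a Gaussian stands in for the true Airy diffraction pattern purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not PhoSim/NIRCam values)
n_photons = 100_000
aperture_d = 6.5        # primary mirror diameter, metres
plate_scale = 0.03      # arcsec per pixel (assumed)
npix = 64

image = np.zeros((npix, npix))
# Sample each photon's wavelength from a flat 1-5 micron band
wavelengths = rng.uniform(1e-6, 5e-6, n_photons)
# Diffraction blur scales as lambda/D (radians -> arcsec)
sigma_arcsec = (wavelengths / aperture_d) * 206265.0
# Scatter photons about a point source at the field centre;
# a Gaussian stands in for the true Airy pattern here
x = npix / 2 + rng.normal(0.0, sigma_arcsec / plate_scale)
y = npix / 2 + rng.normal(0.0, sigma_arcsec / plate_scale)
ix = np.clip(x.astype(int), 0, npix - 1)
iy = np.clip(y.astype(int), 0, npix - 1)
np.add.at(image, (iy, ix), 1)   # each photon deposits one count
```

    Because redder photons draw a larger blur, the accumulated image naturally exhibits the wavelength-dependent PSF width that a broadband ray-trace would average away.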

    Do-It-Yourself Single Camera 3D Pointer Input Device

    We present a new algorithm for single camera 3D reconstruction, or 3D input for human-computer interfaces, based on precise tracking of an elongated object, such as a pen, having a pattern of colored bands. To configure the system, the user provides no more than one labelled image of a handmade pointer, measurements of its colored bands, and the camera's pinhole projection matrix. Other systems are of much higher cost and complexity, requiring combinations of multiple cameras, stereocameras, and pointers with sensors and lights. Instead of relying on information from multiple devices, we examine our single view more closely, integrating geometric and appearance constraints to robustly track the pointer in the presence of occlusion and distractor objects. By probing objects of known geometry with the pointer, we demonstrate acceptable accuracy of 3D localization. (Comment: 8 pages, 6 figures; 2018 15th Conference on Computer and Robot Vision)
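    The geometric core of single-view localization of a banded pointer can be sketched as follows: three collinear band boundaries with a known spacing ratio, observed as back-projected rays, determine their depths up to a scale that the known pointer length fixes. The null-space formulation below is our own illustration of this principle, not the paper's algorithm.

```python
import numpy as np

def recover_depths(r1, r2, r3, alpha, L):
    """Recover depths d1, d2, d3 of three collinear points seen along
    unit rays r1, r2, r3, where the middle point satisfies
    P2 = alpha*P1 + (1-alpha)*P3 and |P3 - P1| = L (pointer length).
    Collinearity gives alpha*d1*r1 - d2*r2 + (1-alpha)*d3*r3 = 0,
    a homogeneous linear system solved via its SVD null vector."""
    M = np.column_stack([alpha * r1, -r2, (1 - alpha) * r3])
    _, _, Vt = np.linalg.svd(M)
    d = Vt[-1]                 # null vector: depths up to scale/sign
    if d[0] < 0:
        d = -d                 # depths must be positive
    P1, P3 = d[0] * r1, d[2] * r3
    return d * (L / np.linalg.norm(P3 - P1))   # fix scale from L
```

    With more than three bands, the same constraints can be stacked into a larger system and solved in a least-squares sense, which is where robustness to occlusion of individual bands would come from.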

    Extending AMCW lidar depth-of-field using a coded aperture

    By augmenting a high resolution full-field Amplitude Modulated Continuous Wave lidar system with a coded aperture, we show that depth-of-field can be extended using explicit, albeit blurred, range data to determine PSF scale. Because complex-domain range images contain explicit range information, the aperture design is unconstrained by the necessity for range determination by depth-from-defocus. The coded aperture design is shown to improve restoration quality over a circular aperture. A proof-of-concept algorithm using dynamic PSF determination and spatially variant Landweber iterations is developed; using an empirically sampled point spread function, it is shown to work in cases without serious multipath interference or high phase complexity.
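    The Landweber iteration at the heart of the restoration step can be sketched in a few lines. This is a deliberately simplified, spatially invariant version: the paper's method varies the PSF scale per pixel using the lidar's explicit range data, whereas the sketch below keeps only the iteration itself, with the blur modeled as circular convolution in the Fourier domain.

```python
import numpy as np

def conv(x, otf):
    # Circular convolution via the FFT; otf = fft2 of the PSF
    return np.real(np.fft.ifft2(np.fft.fft2(x) * otf))

def landweber_deblur(blurred, otf, n_iter=300, tau=1.0):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k),
    where A is convolution with a spatially INVARIANT, unit-sum PSF
    (so tau = 1 satisfies the step-size bound tau < 2 / sigma_max^2).
    Illustrative only: the paper's operator is spatially variant."""
    x = blurred.copy()
    for _ in range(n_iter):
        residual = blurred - conv(x, otf)
        # conj(otf) implements the adjoint A^T (correlation)
        x = x + tau * conv(residual, np.conj(otf))
    return x
```

    In the noiseless case each frequency component of the error shrinks by the factor (1 - tau |H|^2) per iteration, so low-contrast high frequencies recover slowly; in practice the iteration count acts as the regularizer.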

    Object-based 2D-to-3D video conversion for effective stereoscopic content generation in 3D-TV applications

    Three-dimensional television (3D-TV) has gained increasing popularity in the broadcasting domain, as it enables enhanced viewing experiences in comparison to conventional two-dimensional (2D) TV. However, its application has been constrained due to the lack of essential contents, i.e., stereoscopic videos. To alleviate such content shortage, an economical and practical solution is to reuse the huge media resources that are available in monoscopic 2D and convert them to stereoscopic 3D. Although stereoscopic video can be generated from monoscopic sequences using depth measurements extracted from cues like focus blur, motion and size, the quality of the resulting video may be poor, as such measurements are usually arbitrarily defined and appear inconsistent with the real scenes. To help solve this problem, a novel method for object-based stereoscopic video generation is proposed which features i) optical-flow based occlusion reasoning in determining depth ordinal, ii) object segmentation using improved region-growing from masks of determined depth layers, and iii) a hybrid depth estimation scheme using content-based matching (inside a small library of true stereo image pairs) and depth-ordinal based regularization. Comprehensive experiments have validated the effectiveness of our proposed 2D-to-3D conversion method in generating stereoscopic videos of consistent depth measurements for 3D-TV applications.
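    Once a depth map has been estimated, the final step of any 2D-to-3D pipeline is view synthesis: each pixel is shifted horizontally by a disparity inversely proportional to its depth. The sketch below is a naive depth-image-based rendering pass, not the paper's method; the baseline and focal-length values are illustrative assumptions, and disocclusion holes are left unfilled.

```python
import numpy as np

def synthesize_right_view(image, depth, baseline=0.05, focal=500.0):
    """Naive depth-image-based rendering: shift each pixel left by
    disparity = baseline * focal / depth (values here are assumed,
    not from the paper). Pixels shifted out of frame are dropped and
    disocclusion holes remain zero -- real systems inpaint them."""
    h, w = depth.shape
    right = np.zeros_like(image)
    disparity = (baseline * focal / depth).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = image[y, x]
    return right
```

    This is exactly where inconsistent depth estimates become visible as artifacts: neighboring pixels of one object with conflicting depths land at different disparities and tear the object apart, which motivates the paper's depth-ordinal regularization.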