    Single-shot compressed ultrafast photography: a review

    Compressed ultrafast photography (CUP) is a burgeoning single-shot computational imaging technique that provides an imaging speed as high as 10 trillion frames per second and a sequence depth of up to a few hundred frames. This technique synergizes compressed sensing and the streak camera technique to capture non-repeatable ultrafast transient events with a single shot. With recent unprecedented technical developments and extensions of this methodology, it has been widely used in ultrafast optical imaging and metrology, ultrafast electron diffraction and microscopy, and information security protection. We review the basic principles of CUP, its recent advances in data acquisition and image reconstruction, its fusions with other modalities, and its unique applications in multiple research fields.
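    A minimal sketch of the single-shot forward model the review describes (encode with a pseudorandom mask, shear with the streak camera, integrate on the detector), assuming a toy scene and a plain least-squares solver. The mask, sizes, and gradient-descent recovery below are illustrative stand-ins; practical CUP reconstructions add a sparsity prior (e.g. total variation) and use solvers such as TwIST.

    # Toy CUP forward model: encode -> shear -> integrate, then recover
    # the scene by plain gradient descent on the least-squares residual.
    import numpy as np

    rng = np.random.default_rng(0)
    T, H, W = 8, 32, 32                 # frames, height, width of the toy scene
    mask = rng.integers(0, 2, (H, W)).astype(float)   # static pseudorandom code

    def forward(scene):
        """E = integrate(shear(encode(I))): one 2D snapshot of a 3D scene."""
        out = np.zeros((H + T - 1, W))
        for t in range(T):
            out[t:t + H] += mask * scene[t]   # shear = shift frame t by t rows
        return out

    def adjoint(meas):
        """Adjoint of the forward operator, used for gradient steps."""
        scene = np.empty((T, H, W))
        for t in range(T):
            scene[t] = mask * meas[t:t + H]
        return scene

    # Toy dynamic scene: a bright pixel sweeping across the field of view.
    truth = np.zeros((T, H, W))
    for t in range(T):
        truth[t, H // 2, 4 + 3 * t] = 1.0

    snapshot = forward(truth)               # the single-shot 2D measurement

    est = np.zeros_like(truth)              # least-squares recovery
    for _ in range(200):
        est -= 0.05 * adjoint(forward(est) - snapshot)

    print("residual:", np.linalg.norm(forward(est) - snapshot))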

    Methods of visualisation

    High-speed imaging in fluids

    High-speed imaging is in popular demand for a broad range of experiments in fluids. It allows for a detailed visualization of the event under study by acquiring a series of image frames captured at high temporal and spatial resolution. This review covers high-speed imaging basics, defining criteria for high-speed imaging experiments in fluids and giving rules of thumb for a series of cases. It also considers stroboscopic imaging, triggering and illumination, and scaling issues, and provides guidelines for testing and calibration. Ultra-high-speed imaging at frame rates exceeding 1 million frames per second is reviewed, and the combination of conventional experimental techniques in fluids with high-speed imaging is discussed. The review concludes with a high-speed imaging chart, which summarizes the criteria for temporal and spatial scales and facilitates the selection of a high-speed imaging system for the application.
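    As a worked example of the kind of rule of thumb the review formalizes, the sketch below derives a frame rate from requiring the object to advance only a small fraction of the field of view per frame, and an exposure time from requiring motion blur below one pixel. The specific thresholds and the example numbers are illustrative assumptions, not the review's chart.

    # Rule-of-thumb frame-rate and exposure estimates for fluid imaging.
    def required_frame_rate(velocity_m_s, field_of_view_m, frames_across_fov=10):
        """Frame rate so the object moves FOV/frames_across_fov per frame."""
        displacement_per_frame = field_of_view_m / frames_across_fov
        return velocity_m_s / displacement_per_frame

    def max_exposure_time(velocity_m_s, pixel_size_m):
        """Exposure time keeping motion blur below one pixel
        (pixel size projected into the object plane)."""
        return pixel_size_m / velocity_m_s

    # Example: a 10 m/s jet over a 5 mm field of view on 10 um object-plane pixels.
    fps = required_frame_rate(10.0, 5e-3)
    t_exp = max_exposure_time(10.0, 10e-6)
    print(f"frame rate >= {fps:,.0f} fps")        # 20,000 fps
    print(f"exposure   <= {t_exp * 1e9:.0f} ns")  # 1000 ns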

    Aperture Supervision for Monocular Depth Estimation

    We present a novel method to train machine learning algorithms to estimate scene depths from a single image, using the information provided by a camera's aperture as supervision. Prior works use a depth sensor's outputs or images of the same scene from alternate viewpoints as supervision; our method instead uses images from the same viewpoint taken with a varying camera aperture. To enable learning algorithms to use aperture effects as supervision, we introduce two differentiable aperture rendering functions that use the input image and predicted depths to simulate the depth-of-field effects caused by real camera apertures. We train a monocular depth estimation network end-to-end to predict the scene depths that best explain these finite-aperture images as defocus-blurred renderings of the input all-in-focus image.
    Comment: To appear at CVPR 2018 (updated to camera-ready version).
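    The paper's two rendering functions are not reproduced here; below is a minimal layered-compositing sketch of the general idea, assuming a simple Gaussian blur per depth layer. The function name, layer count, and blur model are illustrative choices, not the paper's. In the paper the renderers are differentiable so the depth network can be trained through them end-to-end; this numpy/scipy version only simulates the forward rendering, and an autodiff framework would supply the gradients.

    # Layered defocus rendering: discretize depth into layers, blur each layer
    # by an amount that grows with its distance from the focal plane, and
    # composite the blurred layers with normalized blurred masks.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def render_defocus(image, depth, focus_depth, blur_gain=4.0, n_layers=8):
        """Simulate a finite-aperture photo from an all-in-focus image + depth."""
        h, w, _ = image.shape
        edges = np.linspace(depth.min(), depth.max() + 1e-6, n_layers + 1)
        out = np.zeros_like(image)
        weight = np.zeros((h, w, 1))
        for i in range(n_layers):
            layer_depth = 0.5 * (edges[i] + edges[i + 1])
            sigma = blur_gain * abs(layer_depth - focus_depth)  # defocus blur
            mask = ((depth >= edges[i]) & (depth < edges[i + 1])).astype(float)
            mask = mask[..., None]
            out += gaussian_filter(image * mask, sigma=(sigma, sigma, 0))
            weight += gaussian_filter(mask, sigma=(sigma, sigma, 0))
        return out / np.maximum(weight, 1e-6)   # normalize overlapping layers

    # Toy check: a ramp of depths under a mid-scene focal plane.
    img = np.random.default_rng(0).random((64, 64, 3))
    dep = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
    shallow_dof = render_defocus(img, dep, focus_depth=0.5)
    print(shallow_dof.shape)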