13,889 research outputs found

    Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths

    We report the first computational super-resolved, multi-camera integral imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR Lepton cameras was assembled, and computational super-resolution and integral-imaging reconstruction were employed to generate video with light-field imaging capabilities, such as 3D imaging and recognition of partially obscured objects, while also providing a four-fold increase in effective pixel count. This approach to high-resolution imaging enables a fundamental reduction in the track length and volume of an imaging system, while also enabling the use of low-cost lens materials.
    Comment: Supplementary multimedia material at http://dx.doi.org/10.6084/m9.figshare.530302
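    The integral-imaging side of such a camera-array pipeline can be illustrated with a synthetic-aperture refocus: each view is shifted in proportion to its camera's position in the array and a chosen disparity slope, then the views are averaged so that objects at the matching depth align while occluders blur away. This is a minimal sketch, not the authors' implementation; the function name, the integer-pixel shifts, and the circular `np.roll` boundary handling are all simplifying assumptions.

```python
import numpy as np

def synthetic_aperture_refocus(views, positions, slope):
    """Average array views after shifting each by (slope * camera position),
    so scene points at the matching depth align across views (a basic
    integral-imaging refocus; shifts are rounded to whole pixels here)."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (py, px) in zip(views, positions):
        dy = int(round(slope * py))
        dx = int(round(slope * px))
        # np.roll wraps at the borders -- acceptable for a small sketch.
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(views)
```

    Sweeping `slope` refocuses the stack at different depths, which is what enables recognition of partially obscured objects in the reconstructed video.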

    Light field super resolution through controlled micro-shifts of light field sensor

    Light field cameras enable new capabilities, such as post-capture refocusing and aperture control, by capturing the directional and spatial distribution of light rays in space. Micro-lens-array-based light field camera designs are often preferred for their light-transmission efficiency, cost-effectiveness, and compactness. One drawback of micro-lens-array-based light field cameras is low spatial resolution, because a single sensor is shared to capture both spatial and angular information. To address the low-spatial-resolution issue, we present a light field imaging approach in which multiple light fields are captured and fused to improve the spatial resolution. For each capture, the light field sensor is shifted by a pre-determined fraction of a micro-lens size using an XY translation stage for optimal performance.
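    The fusion step described above is, at its core, classic shift-and-add super-resolution: with the sensor shifted by known fractions of the micro-lens pitch, each capture samples a different phase of a finer grid, and the captures interleave into a higher-resolution image. The following is a minimal sketch under idealized assumptions (exact shifts, nearest-grid placement, no blur or noise model); the function name is hypothetical.

```python
import numpy as np

def shift_and_add_sr(lr_images, shifts, scale):
    """Fuse low-resolution captures taken at known sub-pixel shifts onto a
    grid `scale` times finer (classic shift-and-add super-resolution)."""
    h, w = lr_images[0].shape
    hr_sum = np.zeros((h * scale, w * scale))
    hr_cnt = np.zeros_like(hr_sum)
    for img, (dy, dx) in zip(lr_images, shifts):
        # Map each low-res pixel to its nearest high-res grid position.
        ys = (np.arange(h) * scale + int(round(dy * scale))) % (h * scale)
        xs = (np.arange(w) * scale + int(round(dx * scale))) % (w * scale)
        hr_sum[np.ix_(ys, xs)] += img
        hr_cnt[np.ix_(ys, xs)] += 1
    # Average where several captures land on the same cell; zeros elsewhere.
    return np.where(hr_cnt > 0, hr_sum / np.maximum(hr_cnt, 1), 0.0)
```

    With scale s and the s² shifts {(i/s, j/s)}, the captures tile the fine grid exactly, which is why a pre-determined shift pattern matters.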

    Light Field Super-Resolution Via Graph-Based Regularization

    Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications, from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution, which should therefore be augmented by computational methods. On the one hand, off-the-shelf single-frame and multi-frame super-resolution algorithms are not ideal for light field data, as they do not consider its particular structure. On the other hand, the few super-resolution algorithms explicitly tailored to light field data exhibit significant limitations, such as the need to estimate an explicit disparity map at each view. In this work we propose a new light field super-resolution algorithm meant to address these limitations. We adopt a multi-frame-like super-resolution approach, where the complementary information in the different light field views is used to augment the spatial resolution of the whole light field. We show that coupling the multi-frame approach with a graph regularizer, which enforces the light field structure via nonlocal self-similarities, makes it possible to avoid the costly and challenging disparity estimation step for all the views. Extensive experiments show that the new algorithm compares favorably to other state-of-the-art methods for light field super-resolution, both in terms of PSNR and visual quality.
    Comment: This new version includes more material. In particular, we added: a new section on the computational complexity of the proposed algorithm, experimental comparisons with a CNN-based super-resolution algorithm, and new experiments on a third dataset.
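    The optimization described above can be caricatured as a least-squares data term plus a graph-Laplacian quadratic regularizer, minimized by gradient descent. The sketch below stands in for the paper's nonlocal-similarity graph with a caller-supplied Laplacian `L`; the downsampling operator `D`, the step size, and the initialisation are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def graph_regularized_sr(y, D, L, lam, iters=500, step=0.1):
    """Gradient descent on  ||D x - y||^2 + lam * x^T L x,  where D is a
    known downsampling operator and L a graph Laplacian that encodes the
    (here, caller-supplied) similarity structure of the light field."""
    x = D.T @ y  # crude upsampled initialisation
    for _ in range(iters):
        grad = 2.0 * D.T @ (D @ x - y) + 2.0 * lam * (L @ x)
        x = x - step * grad
    return x
```

    The quadratic term x^T L x penalizes differences across graph edges, so pixels linked by the similarity graph stay consistent without an explicit disparity map.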

    Deep Autoencoder for Combined Human Pose Estimation and Body Model Upscaling

    We present a method for simultaneously estimating 3D human pose and body shape from a sparse set of wide-baseline camera views. We train a symmetric convolutional autoencoder with a dual loss that enforces learning of a latent representation that encodes skeletal joint positions, and at the same time learns a deep representation of volumetric body shape. We harness the latter to upscale input volumetric data by a factor of 4×, whilst recovering a 3D estimate of joint positions with equal or greater accuracy than the state of the art. Inference runs in real time (25 fps) and has the potential for passive human behaviour monitoring where high-fidelity estimation of human body shape and pose is required.
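    The dual loss can be pictured as a weighted sum of a volumetric reconstruction term and a joint-position term. The weighting `w` and the mean-squared / mean-distance forms below are assumptions for illustration, not the paper's exact losses.

```python
import numpy as np

def dual_loss(recon_volume, target_volume, pred_joints, gt_joints, w=0.5):
    """Combined training objective: volumetric reconstruction error plus the
    mean Euclidean error of predicted skeletal joints (weight w is assumed)."""
    recon_term = np.mean((recon_volume - target_volume) ** 2)
    pose_term = np.mean(np.linalg.norm(pred_joints - gt_joints, axis=-1))
    return recon_term + w * pose_term
```

    Training one latent representation against both terms is what lets a single encoder drive both the 4× upscaling and the pose estimate.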

    Multi-Object 3D Reconstruction from Dynamic Scenes and Learning-Based Depth Super-Resolution

    Doctoral dissertation, Department of Electrical and Computer Engineering, Graduate School of Seoul National University, February 2014 (advisor: Kyoung Mu Lee).
    In this dissertation, a framework for reconstructing the 3-dimensional shape of multiple objects and a method for enhancing the resolution of 3-dimensional models, especially human faces, are proposed. Conventional 3D reconstruction from multiple views applies to static scenes, in which the configuration of objects is fixed while the images are taken. In the proposed framework, the main goal is to reconstruct 3D models of multiple objects in a more general setting, where the configuration of the objects varies among views. This problem is solved by object-centered decomposition of the dynamic scenes using an unsupervised co-recognition approach. Unlike conventional motion segmentation algorithms that require a small-motion assumption between consecutive views, the co-recognition method provides reliable, accurate correspondences of the same object among unordered, wide-baseline views. In order to segment each object region, the 3D sparse points obtained from structure-from-motion are utilized. These points are relatively reliable, since both their geometric relations and their photometric consistency are considered simultaneously when they are generated. The sparse points serve as automatic seeds for a seeded-segmentation algorithm, which makes interactive segmentation work in a non-interactive way. Experiments on various challenging real image sequences demonstrate the effectiveness of the proposed approach, especially in the presence of abrupt, independent motions of objects. Obtaining a high-density 3D model is also an important issue. Since the multi-view images used to reconstruct a 3D model, as well as 3D imaging hardware such as time-of-flight cameras and laser scanners, have their own natural upper limits of resolution, a super-resolution method is required to increase the resolution of 3D data.
    This dissertation presents an algorithm to super-resolve a single human face model represented as a 3D point cloud. Point cloud data is an object-centered 3D representation, in contrast to camera-centered depth images. While much research has been done on the super-resolution of intensity images, and some prior work exists on depth image data, this is the first attempt to super-resolve a single set of 3D point cloud data without additional intensity or depth image observations of the object. The problem is solved by querying a previously learned database that associates high-resolution 3D data with the corresponding low-resolution data. A Markov Random Field (MRF) model is constructed on the 3D points, and a suitable energy function is formulated as a multi-class labeling problem on the MRF. Experimental results show that the proposed method solves the super-resolution problem with high accuracy.
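    The multi-class labeling step on the MRF can be sketched with iterated conditional modes (ICM): each 3D point repeatedly picks the label (a candidate high-resolution patch from the learned database) that minimizes its unary cost plus a smoothness penalty with its neighbors. ICM and the Potts pairwise term below are simplified stand-ins; the dissertation's actual likelihood and prior terms are richer.

```python
import numpy as np

def icm_labeling(unary, edges, lam, iters=10):
    """Iterated conditional modes for a multi-class MRF: minimise the sum of
    unary costs plus lam for every pair of neighbours with different labels
    (Potts prior). `unary` is (num_nodes, num_labels); `edges` joins nodes."""
    n, k = unary.shape
    labels = np.argmin(unary, axis=1)           # independent initialisation
    nbrs = [[] for _ in range(n)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(iters):
        changed = False
        for i in range(n):
            costs = unary[i].astype(float)
            for j in nbrs[i]:
                costs += lam * (np.arange(k) != labels[j])
            best = int(np.argmin(costs))
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:                         # local minimum reached
            break
    return labels
```

    In the dissertation's setting the unary term would come from database lookups of low-/high-resolution patch pairs, and the pairwise term enforces consistency between neighboring points.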