66 research outputs found

    Viewpoint-Free Photography for Virtual Reality

    Viewpoint-free photography, i.e., interactively controlling the viewpoint of a photograph after capture, is a long-standing challenge. In this thesis, we investigate algorithms that enable viewpoint-free photography for virtual reality (VR) from casual capture, i.e., from footage easily captured with consumer cameras. We build on an extensive body of work in image-based rendering (IBR). Given images of an object or scene, IBR methods aim to predict the appearance of an image taken from a novel perspective. Most IBR methods focus on full or near-interpolation, where the output viewpoints lie directly between the captured images or close to them. These methods are not suitable for VR, where the user has a significant range of motion and can look in all directions. It is therefore essential to create viewpoint-free photos with a wide field of view and sufficient positional freedom to cover the range of motion a user might experience in VR. We focus on two VR experiences: 1) seated VR experiences, where the user can lean in different directions. This simplifies the problem, as the scene is only observed from a small range of viewpoints. Here we focus on easy capture, showing how to turn panorama-style capture into 3D photos, a simple representation for viewpoint-free photos, and how to speed up processing so users can see the final result on-site. 2) Room-scale VR experiences, where the user can explore vastly different perspectives. This is challenging: more input footage is needed, maintaining real-time display rates becomes difficult, and view-dependent appearance and object backsides need to be modelled, all while preventing noticeable mistakes. We address these challenges by: (1) creating refined geometry for each input photograph, (2) using a fast tiled rendering algorithm to achieve real-time display rates, and (3) using a convolutional neural network to hide visual mistakes during compositing.
Overall, we provide evidence that viewpoint-free photography is feasible from casual capture. We compare thoroughly with the state of the art, showing that our methods achieve both a numerical improvement and a clear increase in visual quality for both seated and room-scale VR experiences.
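At the core of IBR compositing is a per-pixel weighted blend of colors reprojected from the source views. The sketch below is a minimal illustration of that blending step only, not the thesis's tiled renderer or neural compositor; the function name and the use of per-view confidence maps (e.g. derived from depth agreement) as blend weights are assumptions for illustration.

```python
import numpy as np

def blend_reprojected_views(colors, confidences):
    """Blend candidate colors reprojected from several source views.

    colors:      (V, H, W, 3) candidate RGB values from V source views
    confidences: (V, H, W) per-pixel blend weights (illustrative; e.g.
                 how well each view's depth agrees at that pixel)
    """
    # Normalize weights across views; epsilon guards empty pixels.
    w = confidences / (confidences.sum(axis=0, keepdims=True) + 1e-8)
    return (w[..., None] * colors).sum(axis=0)

# Two 2x2 source views: only the second view is trusted, so its
# (all-white) color should dominate the blend everywhere.
colors = np.stack([np.zeros((2, 2, 3)), np.ones((2, 2, 3))])
conf = np.stack([np.zeros((2, 2)), np.ones((2, 2))])
out = blend_reprojected_views(colors, conf)
```

In a full pipeline, these weights would come from the refined per-view geometry, and a network would then clean up residual seams in the blended result.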

    Image-Based Rendering Of Real Environments For Virtual Reality


    Foreground Removal in a Multi-Camera System

    Traditionally, whiteboards have been used to brainstorm, teach, and convey ideas to others. However, distributing whiteboard content remotely can be challenging. To solve this problem, a multi-camera system was developed that can be scaled to broadcast an arbitrarily large writing surface while removing objects not related to the whiteboard content. Prior research has addressed combining multiple images, identifying and removing unrelated objects (also referred to as foreground) in a single image, and correcting for warping differences between camera frames; however, this is the first attempt to solve the problem with a multi-camera system. The problem can be subdivided into two main components: fusing multiple images into one cohesive frame, and detecting and removing foreground objects while replacing them with the most recent background (desired) information. For the first component, homographic transformations are used to create a mathematical mapping from each input image to the desired reference frame; blending techniques are then applied to remove artifacts that remain after the perspective transform. For the second, statistical tests and modeling are used in conjunction with additional classification algorithms.
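The two components described above can be sketched in a few lines: mapping points through a 3x3 homography into the reference frame, and maintaining a running background estimate that is frozen wherever foreground is detected, so the last clean whiteboard content survives occlusion. This is a hedged sketch under simple assumptions (an exponential-average background model); the system's actual blending and statistical foreground tests are not reproduced, and all names are illustrative.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 homography H; pts has shape (N, 2)."""
    p = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]                    # perspective divide

def update_background(bg, frame, fg_mask, alpha=0.05):
    """Slowly track the background, but freeze pixels under foreground."""
    out = (1 - alpha) * bg + alpha * frame         # exponential average
    out[fg_mask] = bg[fg_mask]                     # keep last clean content
    return out

# A pure-translation homography: shifts every point by (5, 3).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [10.0, 20.0]])
warped = apply_homography(H, pts)

# Background update: the masked pixel keeps its old value.
bg = np.zeros((2, 2))
frame = np.ones((2, 2))
mask = np.array([[True, False], [False, False]])
new_bg = update_background(bg, frame, mask, alpha=0.5)
```

In practice the homographies would be estimated from point correspondences between each camera and the reference frame, and the warp applied densely to whole frames rather than to individual points.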

    Exploiting Structural Constraints in Image Pairs

    Ph.D. (Doctor of Philosophy)

    3D-SEM Metrology for Coordinate Measurements at the Nanometer Scale


    Towards practical deep learning based image restoration model

    Image Restoration (IR) is the task of reconstructing the latent image from its degraded observations. It has become an important research area in computer vision and image processing, and has wide applications in the imaging industry. Conventional methods apply inverse filtering or optimization-based approaches to restore images corrupted in idealized ways. Their limited restoration performance on ill-posed problems and their inefficient iterative optimization prevent such algorithms from being deployed in more complicated industrial applications. Recently, deep Convolutional Neural Networks (CNNs) have begun to model image restoration as learning and inferring the posterior probability in a regression model, and have achieved remarkable performance. However, due to their data-driven nature, models trained with simple synthetic paired data (e.g., bicubic interpolation or Gaussian noise) cannot be well adapted to more complicated inputs from real data domains. Besides, acquiring real paired data for training such models is also very challenging. In this dissertation, we discuss data manipulation and model adaptability for deep learning based image restoration tasks. Specifically, we study improving model adaptability by understanding the domain difference between the training data and the expected testing data. We argue that the causes of image degradation vary across imaging and transmission pipelines. Though complicated to analyze, for some specific imaging problems we can still improve the performance of deep restoration models on unseen testing data by resolving the data domain differences implied by the image acquisition and formation pipeline. Our analysis focuses on digital image denoising, image restoration under degradation types more complicated than denoising, and multi-image inpainting.
For all tasks, the proposed training or adaptation strategies, based either on the physical principle of the degradation formation or on geometric assumptions about the image, achieve a reasonable improvement in restoration performance. For image denoising, we discuss the influence of the Bayer pattern of the Color Filter Array (CFA) and the image demosaicing process on the adaptability of deep denoising models. Specifically, for denoising RAW sensor observations, we find that unifying and augmenting the Bayer pattern of the data during training and testing is an efficient strategy to make a well-trained denoising model Bayer-invariant. Additionally, for RGB image denoising, demosaicing noisy RAW images with Bayer patterns results in spatially correlated pixel noise. We therefore propose a pixel-shuffle down-sampling approach to break up this spatial correlation and make a Gaussian-trained denoiser more adaptive to real noisy RGB images. Beyond denoising, we examine a more complicated degradation process involving diffraction caused by occlusions on the imaging lens. One example is a novel imaging model called the Under-Display Camera (UDC). From the perspective of optical analysis, we study a physics-based image processing method by deriving the forward model of the degradation, and synthesize paired data for both conventional and deep denoising pipelines. Experiments demonstrate the effectiveness of the forward model, and the deep restoration model trained with synthetic data achieves visually similar performance to one trained with real paired images. Last, we discuss reference-based image inpainting, which restores the missing regions in a target image by reusing content from a source image.
Due to the color and spatial misalignment between the two images, we first initialize the warping using multi-homography registration, and then propose a content-preserving Color and Spatial Transformer (CST) to refine the residual misalignment and color difference. We design the CST to be scale-robust, so it mitigates warping problems when the model is applied to testing images of different resolutions. We synthesize realistic data while training the CST, and experiments suggest that the inpainting pipeline achieves more robust restoration performance with the proposed CST.
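The pixel-shuffle down-sampling idea mentioned above for RGB denoising can be illustrated compactly: the image is split into sub-images of spatially distant pixels, so noise that was correlated between neighbouring pixels after demosaicing becomes approximately independent within each sub-image, and a denoiser trained on i.i.d. noise can be applied per sub-image. This is a minimal sketch of the general idea, not the dissertation's exact implementation; the stride and function name are illustrative.

```python
import numpy as np

def pixel_shuffle_downsample(img, s=2):
    """Split an (H, W, C) image into s*s mosaics of spatially distant pixels.

    Each output sub-image samples every s-th pixel, so spatially
    correlated noise between neighbours is broken up within a sub-image.
    Requires H and W divisible by s; returns shape (s*s, H//s, W//s, C).
    """
    h, w, c = img.shape
    subs = img.reshape(h // s, s, w // s, s, c)
    # Bring the (row-offset, col-offset) axes to the front, then merge them.
    subs = subs.transpose(1, 3, 0, 2, 4).reshape(s * s, h // s, w // s, c)
    return subs

# A 4x4 single-channel ramp: sub-image 0 collects pixels at even
# rows and even columns, i.e. values 0, 2, 8, 10.
img = np.arange(16, dtype=float).reshape(4, 4, 1)
subs = pixel_shuffle_downsample(img, s=2)
```

After denoising each sub-image, the inverse reshape reassembles the full-resolution result; choosing the stride s just large enough to decorrelate the noise preserves the most spatial detail.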

    Advances in Computer Recognition, Image Processing and Communications, Selected Papers from CORES 2021 and IP&C 2021

    As almost all human activities moved online during the pandemic, novel, robust, and efficient approaches have been in ever higher demand in computer science and telecommunications. This reprint book therefore contains 13 high-quality papers presenting advances in theoretical and practical aspects of computer recognition, pattern recognition, image processing, and machine learning (shallow and deep), including novel applications of these techniques in modern telecommunications and cybersecurity.

    MediaSync: Handbook on Multimedia Synchronization

    This book provides an approachable overview of the most recent advances in the fascinating field of media synchronization (mediasync), gathering contributions from the most representative and influential experts. Understanding the challenges of this field in the current multi-sensory, multi-device, and multi-protocol world is not an easy task. The book revisits the foundations of mediasync, including theoretical frameworks and models, highlights ongoing research efforts, such as hybrid broadband broadcast (HBB) delivery and the modeling of users' perception (i.e., Quality of Experience, or QoE), and paves the way for the future (e.g., the deployment of multi-sensory and ultra-realistic experiences). Although many advances around mediasync have been devised and deployed, this area of research is receiving renewed attention to overcome the remaining challenges in the next-generation (heterogeneous and ubiquitous) media ecosystem. Given the significant advances in this research area, its current relevance, and the multiple disciplines it involves, a reference book on mediasync has become necessary, and this book fills that gap. In particular, it addresses key aspects of, and reviews the most relevant contributions within, the mediasync research space from different perspectives. MediaSync: Handbook on Multimedia Synchronization is the perfect companion for scholars and practitioners who want to acquire strong knowledge of this research area, and who want to approach the challenge of ensuring the best mediated experiences by providing adequate synchronization between the media elements that constitute those experiences.