83 research outputs found

    High-Quality Seamless Panoramic Images


    Integrated computational system for portable retinal imaging

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 119-121). This thesis introduces a system to improve the image quality obtained from a low-light CMOS camera specifically designed to image the surface of the retina. The retinal tissue, as well as having various diseases of its own, is unique in being the only internal tissue in the human body that can be imaged non-invasively. This allows for the diagnosis of diseases that are not limited to eye conditions, such as diabetes and hypertension. Current portable solutions for retinal imaging, such as the Panoptic and indirect ophthalmoscopes, require expensive, complex optics and must be operated by a trained professional due to the challenging task of aligning the pupillary axis with the optical axis of the device. Our team has developed a simple hardware/software solution for inexpensive and portable retinal imaging that consists of an LED light source, a CMOS camera, and a simple LCD display built into a pair of sunglasses. This thesis presents a multistage solution that registers the retinal tissue on the sensor; identifies its shape and size; performs local integration with phase correlation; and performs global integration with panoramic mosaicing. Through this process we can increase the signal-to-noise ratio, increase image contrast, create super-resolution images, and obtain a large field-of-view image of the retina. We also lay the groundwork for possible 3D reconstruction to increase the amount of information present. The main contributions of this thesis are in computational methods for improving overall image quality, while other team members focused on the illumination source, optics, and other hardware components. By Jason Boggess. S.M.
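    As a concrete illustration of the local integration step named in this abstract, the following is a minimal sketch of phase correlation between two overlapping frames. It is not the thesis code; it assumes same-sized grayscale frames stored as NumPy arrays and recovers only an integer translation.

    import numpy as np

    def phase_correlation(a: np.ndarray, b: np.ndarray) -> tuple[int, int]:
        """Locate the peak of the phase-correlation surface between two
        same-sized grayscale frames; the peak corresponds to their relative
        shift (up to the usual sign convention)."""
        Fa = np.fft.fft2(a)
        Fb = np.fft.fft2(b)
        # Normalized cross-power spectrum; the epsilon guards against division by zero.
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap peaks beyond half the frame size to negative offsets.
        if dy > a.shape[0] // 2:
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return int(dy), int(dx)

    Subpixel accuracy, as typically needed for super-resolution, would require interpolating around the correlation peak rather than taking the integer argmax.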

    Capture4VR: From VR Photography to VR Video

    Virtual reality (VR) enables the display of dynamic visual content with unparalleled realism and immersion. However, VR is also still a relatively young medium that requires new ways to author content, particularly for visual content that is captured from the real world. This course, therefore, provides a comprehensive overview of the latest progress in bringing photographs and video into VR. Ultimately, the techniques, approaches, and systems we discuss aim to faithfully capture the visual appearance and dynamics of the real world, and to bring it into virtual reality to create unparalleled realism and immersion by providing freedom of head motion and motion parallax, which is a vital depth cue for the human visual system. In this half-day course, we take the audience on a journey from VR photography to VR video that began more than a century ago but has accelerated tremendously in the last five years. We discuss commercial state-of-the-art systems by Facebook, Google, and Microsoft, as well as the latest research techniques and prototypes.

    Image Stitching

    Final degree project carried out in collaboration with the University of Limerick, Department of Electronic and Computer Engineering. Image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output may be either an image or a set of characteristics or parameters related to the image. Most image processing techniques involve treating the image as a two-dimensional signal and applying standard signal processing techniques to it. Specifically, image stitching comprises several stages that render two or more overlapping images into a seamless stitched image, from the detection of features to blending into a final composite. In this process, the Scale Invariant Feature Transform (SIFT) algorithm can be applied to perform the feature detection and control-point matching step, owing to its good properties. Creating an automatic and effective end-to-end stitching process requires analyzing different methods for each of the stitching stages. Several commercial and online software tools are available to perform the stitching process, offering diverse options in different situations. This analysis involves the creation of a script that handles images and project data files. Once the whole script is generated, the stitching process can run automatically while producing good-quality results in the final composite image.
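    Since the abstract singles out SIFT for feature detection and matching, here is a minimal pairwise stitching sketch using OpenCV's SIFT, ratio-test matching, and RANSAC homography estimation. It illustrates the general pipeline described above, not the project's own script; the fixed canvas size and the plain overlay stand in for proper compositing and blending.

    import cv2
    import numpy as np

    def stitch_pair(img1, img2):
        # Detect SIFT keypoints and descriptors on grayscale versions of both images.
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
        k2, d2 = sift.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)

        # Match descriptors and keep only matches passing Lowe's ratio test.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                if m.distance < 0.75 * n.distance]

        # Estimate the homography mapping img2's control points onto img1's.
        src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp img2 into img1's frame and paste img1 on top (no seam blending).
        h, w = img1.shape[:2]
        pano = cv2.warpPerspective(img2, H, (2 * w, h))
        pano[:h, :w] = img1
        return pano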


    Piecewise planar underwater mosaicing

    A commonly ignored problem in planar mosaics, yet one often present in practice, is the selection of the reference homography reprojection frame onto which to attach the successive image frames of the mosaic. A bad choice of reference frame can lead to severe distortions in the mosaic and can degenerate into incorrect configurations after some sequential frame concatenations. This problem is accentuated in uncontrolled underwater acquisition setups such as those provided by AUVs or ROVs, due both to the noisy trajectory of the acquisition vehicle (with roll and pitch shakes) and to the non-flat nature of the seabed, which tends to break the planarity assumption implicit in the mosaic construction. These scenarios can also introduce other undesired effects, such as light variations between successive frames, scattering and attenuation, vignetting, flickering, and noise. This paper proposes a novel mosaicing pipeline, including a strategy to select the best reference homography for planar mosaics built from video sequences, one which minimizes the distortions induced on each image by the mosaic homography itself. Moreover, a new non-linear color correction scheme is incorporated to handle strong color and luminosity variations among the mosaic frames. Experimental evaluation of the proposed method on real, challenging underwater video sequences shows the validity of the approach, providing clear and visually appealing mosaics.
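    The following sketch illustrates the kind of reference-frame selection the abstract describes, under stated assumptions: absolute homographies into an arbitrary common frame are already known, and distortion is scored by how far each re-referenced image's warped unit square departs from right angles. This score is an illustrative stand-in, not the paper's actual criterion.

    import numpy as np

    def distortion(H: np.ndarray) -> float:
        """Score how strongly H deforms a unit square (0 for a similarity-like warp)."""
        corners = np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
        warped = (H @ corners.T).T
        warped = warped[:, :2] / warped[:, 2:3]
        score = 0.0
        for i in range(4):
            a, b, c = warped[i - 1], warped[i], warped[(i + 1) % 4]
            v1, v2 = a - b, c - b
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
            score += abs(cosang)  # non-zero when a corner angle deviates from 90 degrees
        return score

    def best_reference(abs_H: list) -> int:
        """abs_H[i] maps image i into an arbitrary common frame; return the index
        of the frame that, used as reference, distorts the whole set the least."""
        costs = [sum(distortion(np.linalg.inv(Hr) @ Hi) for Hi in abs_H) for Hr in abs_H]
        return int(np.argmin(costs))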

    Mapping colour in image stitching applications

    Digitally, panoramic pictures can be assembled from several individual, overlapping photographs. While the geometric alignment of these photographs has received a lot of attention from the computer vision community, the mapping of colour, i.e. the correction of colour mismatches, has not been studied extensively. In this article, we analyze the colour rendering of today's digital photographic systems and propose a method to correct for colour differences. The colour correction consists of retrieving linearized, relative scene-referred data from uncalibrated images by estimating the Opto-Electronic Conversion Function (OECF) and correcting for exposure, white-point, and vignetting variations between the individual pictures. Different OECF estimation methods are presented and evaluated in conjunction with motion estimation. The resulting panoramas, shown on examples using slides and digital photographs, yield much-improved visual quality compared to stitching using only motion estimation. Additionally, we show that colour correction can also improve the geometrical alignment.
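    The correction chain can be pictured with the sketch below, under simplifying assumptions that are not the article's estimation procedure: the OECF is modeled as a single gamma curve, vignetting as a quadratic radial falloff, and exposure as one scalar gain per image estimated from the overlap region.

    import numpy as np

    def linearize(img: np.ndarray, gamma: float = 2.2) -> np.ndarray:
        """Invert a gamma-style OECF to obtain relative scene-referred values."""
        return np.clip(img / 255.0, 0.0, 1.0) ** gamma

    def devignette(img: np.ndarray, strength: float = 0.3) -> np.ndarray:
        """Divide a colour image (H x W x 3) by a simple radial falloff centred on the image."""
        h, w = img.shape[:2]
        yy, xx = np.mgrid[0:h, 0:w]
        r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((w / 2) ** 2 + (h / 2) ** 2)
        falloff = 1.0 - strength * r2  # brightest at the centre, darker towards the corners
        return img / falloff[..., None]

    def match_exposure(ref: np.ndarray, src: np.ndarray, overlap: np.ndarray) -> np.ndarray:
        """Scale src so that its mean matches ref inside the boolean overlap mask."""
        gain = ref[overlap].mean() / (src[overlap].mean() + 1e-12)
        return src * gain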

    Real-Time Computational Gigapixel Multi-Camera Systems

    Standard cameras are designed to faithfully mimic the human eye and visual system. In recent years, commercially available cameras have become more complex and offer higher image resolutions than ever before. However, the quality of conventional imaging methods is limited by several parameters, such as pixel size, the lens system, and the diffraction limit. Rapid technological advancement, the increase in available computing power, and the introduction of Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs) open new possibilities in the computer vision and computer graphics communities. Researchers are now focusing on utilizing the immense computational power offered by modern processing platforms to create imaging systems with novel or significantly enhanced capabilities compared to standard ones. One popular type of computational imaging system offering new possibilities is the multi-camera system. This thesis focuses on FPGA-based multi-camera systems that operate in real time. The aim of the multi-camera systems presented in this thesis is to offer wide field-of-view (FOV) video coverage at high frame rates. The wide FOV is achieved by constructing a panoramic image from the images acquired by the multi-camera system. Two new real-time computational imaging systems that provide new functionalities and better performance compared to conventional cameras are presented in this thesis. Each camera system's design and implementation are analyzed in detail, and each system is built and tested in real-time conditions. Panoptic is a miniaturized low-cost multi-camera system that reconstructs a 360-degree view in real time. Since it is an easily portable system, it provides a means to capture the complete surrounding light field in dynamic environments, such as when mounted on a vehicle or a flying drone. The second presented system, GigaEye II, is a modular high-resolution imaging system that introduces the concept of distributed image processing in real-time camera systems. This thesis explains in detail how such a concept can be efficiently used in real-time computational imaging systems. The purpose of computational imaging systems in the form of multi-camera systems does not end with real-time panoramas. The application scope of these cameras is vast. They can be used in 3D cinematography, for broadcasting live events, or for immersive telepresence experiences. The final chapter of this thesis presents three potential applications of these systems: object detection and tracking, high dynamic range (HDR) imaging, and observation of multiple regions of interest. Object detection and tracking, and observation of multiple regions of interest, are extremely useful and desired capabilities of surveillance systems, in the security and defense industry, or in the fast-growing industry of autonomous vehicles. On the other hand, high dynamic range imaging is becoming a common option in consumer-market cameras, and the presented method allows instantaneous capture of HDR videos. Finally, this thesis concludes with a discussion of real-time multi-camera systems, their advantages, their limitations, and predictions for the future.
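    As a small illustration of the HDR application mentioned above, the sketch below merges differently exposed frames with OpenCV's Debevec calibration and merge operators. It is not the thesis's real-time FPGA pipeline; the file names and exposure times are made up for the example.

    import cv2
    import numpy as np

    # Differently exposed frames of the same scene (hypothetical file names).
    paths = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
    times = np.array([1 / 500.0, 1 / 60.0, 1 / 8.0], dtype=np.float32)
    frames = [cv2.imread(p) for p in paths]

    # Recover the camera response curve, merge the frames into a radiance map,
    # then tonemap back to a displayable 8-bit image.
    response = cv2.createCalibrateDebevec().process(frames, times)
    hdr = cv2.createMergeDebevec().process(frames, times, response)
    ldr = cv2.createTonemap(gamma=2.2).process(hdr)
    cv2.imwrite("hdr_result.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))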