    Revealing the Invisible: On the Extraction of Latent Information from Generalized Image Data

    Get PDF
    The desire to reveal the invisible in order to explain the world around us has been a source of impetus for technological and scientific progress throughout human history. Many of the phenomena that directly affect us cannot be sufficiently explained from observations made with our primary senses alone, often because their originating cause is too small, too far away, or otherwise obstructed: in other words, it is invisible to us. Without careful observation and experimentation, our models of the world remain inaccurate, and research must be conducted to improve our understanding of even the most basic effects. In this thesis, we present our solutions to three challenging problems in visual computing in which a surprising amount of information is hidden in generalized image data and cannot easily be extracted by human observation or existing methods. We extract this latent information using non-linear and discrete optimization methods based on physically motivated models and computer graphics methodology, such as ray tracing, real-time transient rendering, and image-based rendering.
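
    As a generic illustration of the analysis-by-synthesis idea described above (recovering hidden parameters by fitting a physically motivated forward model to observations with non-linear optimization), the following sketch uses SciPy's non-linear least squares on a toy image-formation model; the model, parameters, and values are hypothetical and not taken from the thesis.

        import numpy as np
        from scipy.optimize import least_squares

        # Toy forward model: image of a point light source, parameterised by
        # its 2D position and brightness (hypothetical, for illustration only).
        def render(params, xx, yy):
            px, py, brightness = params
            r2 = (xx - px) ** 2 + (yy - py) ** 2
            return brightness / (1.0 + r2)

        def residuals(params, xx, yy, observed):
            return (render(params, xx, yy) - observed).ravel()

        # Synthetic "observation" with noise.
        xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
        true_params = np.array([0.3, -0.2, 2.0])
        observed = render(true_params, xx, yy) + 0.01 * np.random.randn(64, 64)

        # Recover the latent parameters by non-linear least squares.
        fit = least_squares(residuals, x0=[0.0, 0.0, 1.0], args=(xx, yy, observed))
        print(fit.x)  # should be close to true_params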

    Progress in industrial photogrammetry by means of markerless solutions

    Get PDF
    This thesis focuses on the development and advanced use of targetless (markerless) photogrammetric methodologies in industrial applications. Photogrammetry is an optical 3D measurement technique that encompasses multiple configurations and approaches. In this study, measurement procedures, models, and image-processing strategies have been developed that go beyond conventional photogrammetry and seek to apply solutions from other fields of computer vision to industrial applications. Whereas industrial photogrammetry requires artificial targets to define the points or features of interest, this thesis considers the reduction, and even the elimination, of both passive and active targets as practical alternatives. Most measurement systems use targets to define control points, relate the different viewpoints, achieve accuracy, and automate the measurements. Although in many situations the use of targets is not restrictive, there are industrial applications where it considerably conditions and constrains the measurement procedures used for inspection. Clear examples are the verification and quality control of serially produced parts, or the measurement and tracking of prismatic elements relative to a given reference system. It is here that targetless photogrammetry can be combined with, or complement, traditional solutions in an attempt to improve current performance.
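
    To illustrate the markerless idea (natural image features standing in for artificial targets), here is a minimal sketch using OpenCV's ORB detector and a ratio-test match between two overlapping views; the file names and parameter values are placeholders, not part of the thesis.

        import cv2

        # Load two overlapping views of the part (hypothetical file names).
        img1 = cv2.imread("view_left.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("view_right.png", cv2.IMREAD_GRAYSCALE)

        # Detect and describe natural features; these play the role of the
        # artificial targets used in conventional industrial photogrammetry.
        orb = cv2.ORB_create(nfeatures=4000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Brute-force Hamming matching with a ratio test to reject ambiguous matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        knn = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in knn if m.distance < 0.75 * n.distance]

        # The matched points can then feed relative orientation / bundle adjustment.
        pts1 = [kp1[m.queryIdx].pt for m in good]
        pts2 = [kp2[m.trainIdx].pt for m in good]
        print(f"{len(good)} tentative correspondences")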

    Learning geometric and lighting priors from natural images

    Get PDF
    Understanding images is crucial for a plethora of tasks, from digital compositing to image relighting to 3D object reconstruction. These tasks allow visual artists to create masterpieces or help operators make safe decisions based on visual stimuli. For many of these tasks, the physical and geometric models that the scientific community has developed give rise to ill-posed problems with several solutions, of which generally only one is reasonable. To resolve these indeterminations, reasoning about the visual and semantic context of a scene is usually delegated to an artist or an expert who draws on experience to carry out the work, because it is generally necessary to reason about the scene globally in order to obtain plausible and appealing results. Would it be possible to model this experience from visual data and partly or fully automate these tasks? This is the topic of this thesis: modeling priors with deep learning to solve typically ill-posed problems. More specifically, we cover three research axes: 1) surface reconstruction from photometric cues, 2) outdoor illumination estimation from a single image, and 3) camera calibration estimation from a single image with generic content. These three topics are addressed from a data-driven perspective. Each axis includes in-depth performance analyses and, despite the reputation for opacity of deep learning algorithms, we present studies of the visual cues captured by our methods.
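
    For the first axis, surface reconstruction from photometric cues, the classical Lambertian photometric-stereo formulation can be written as a small least-squares sketch: per pixel the intensity under light direction l_k is i_k = albedo * (l_k . n), so stacking three or more known lights recovers the scaled normal. The synthetic data and implementation details below are illustrative assumptions, not the method proposed in the thesis.

        import numpy as np

        def photometric_stereo(images, light_dirs):
            """images: (K, H, W) grayscale stack, light_dirs: (K, 3) unit vectors."""
            K, H, W = images.shape
            I = images.reshape(K, -1)                            # (K, H*W)
            G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W) scaled normals
            albedo = np.linalg.norm(G, axis=0)
            normals = G / np.clip(albedo, 1e-8, None)
            return normals.reshape(3, H, W), albedo.reshape(H, W)

        # Example with stand-in data: three images lit from known directions.
        light_dirs = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.866], [0.0, 0.5, 0.866]])
        images = np.random.rand(3, 32, 32)  # placeholder for real captures
        normals, albedo = photometric_stereo(images, light_dirs)
        print(normals.shape, albedo.shape)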

    Quantifying Depth of Field and Sharpness for Image-Based 3D Reconstruction of Heritage Objects

    Get PDF
    Image-based 3D reconstruction processing tools assume sharp focus across the entire object being imaged, but depth of field (DOF) can be a limitation when imaging small to medium sized objects, causing image sharpness to vary with distance from the camera. While DOF is well understood in the context of photographic imaging and is considered during acquisition for image-based 3D reconstruction, an "acceptable" level of sharpness and the associated "circle of confusion" have not yet been quantified for the 3D case. The work described in this paper contributes to the understanding and quantification of acceptable sharpness by providing evidence of the influence of DOF on the 3D reconstruction of small to medium sized museum objects. Spatial frequency analysis, using established collections-photography imaging guidelines and targets, is used to connect input image quality with 3D reconstruction output quality. Combining quantitative spatial frequency analysis with metrics from a series of comparative 3D reconstructions provides insights into the connection between DOF and output model quality. Lab-based quantification of DOF is used to investigate the influence of sharpness on the output 3D reconstruction and to better understand the effects of lens aperture, camera-to-object surface angle, and taking distance. The outcome provides evidence of the role of DOF in image-based 3D reconstruction, and it is briefly presented how masks derived from image content and depth maps can be used to remove unsharp image content and optimise structure from motion (SfM) and multi-view stereo (MVS) workflows.
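
    For reference, the standard thin-lens relations connect a chosen circle-of-confusion diameter to the near and far limits of acceptable sharpness; the sketch below applies them to an illustrative macro setup (the numbers are hypothetical, not the paper's experimental values).

        # Minimal sketch of thin-lens depth-of-field limits from a chosen
        # circle-of-confusion diameter; all values are in millimetres.
        def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm):
            """Return (near_limit_mm, far_limit_mm) of acceptable sharpness."""
            hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
            near = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_dist_mm - 2 * focal_mm)
            if focus_dist_mm >= hyperfocal:
                far = float("inf")
            else:
                far = focus_dist_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_dist_mm)
            return near, far

        # Example: 100 mm macro lens at f/8, focused at 0.5 m, 0.02 mm circle of confusion.
        near, far = depth_of_field(focal_mm=100, f_number=8, focus_dist_mm=500, coc_mm=0.02)
        print(f"DOF: {far - near:.1f} mm  ({near:.1f} mm to {far:.1f} mm)")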

    Learning Lens Blur Fields

    Full text link
    Optical blur is an inherent property of any lens system and is challenging to model in modern cameras because of their complex optical elements. To tackle this challenge, we introduce a high-dimensional neural representation of blur, the lens blur field, and a practical method for acquiring it. The lens blur field is a multilayer perceptron (MLP) designed to (1) accurately capture variations of the lens 2D point spread function over image plane location, focus setting and, optionally, depth and (2) represent these variations parametrically as a single, sensor-specific function. The representation models the combined effects of defocus, diffraction, and aberration, and accounts for sensor features such as pixel color filters and pixel-specific micro-lenses. To learn the real-world blur field of a given device, we formulate a generalized non-blind deconvolution problem that directly optimizes the MLP weights using a small set of focal stacks as the only input. We also provide a first-of-its-kind dataset of 5D blur fields for smartphone cameras, camera bodies equipped with a variety of lenses, etc. Lastly, we show that acquired 5D blur fields are expressive and accurate enough to reveal, for the first time, differences in optical behavior of smartphone devices of the same make and model.
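
    A minimal sketch of what such a neural blur-field representation could look like is given below, assuming PyTorch; the network width, input encoding, and softmax normalisation are assumptions made for illustration and are not the authors' architecture.

        import torch
        import torch.nn as nn

        # Sketch of a "blur field" style MLP: it maps image-plane position (u, v),
        # focus setting f and depth d to a small K x K point spread function.
        class BlurFieldMLP(nn.Module):
            def __init__(self, psf_size=11, hidden=256):
                super().__init__()
                self.psf_size = psf_size
                self.net = nn.Sequential(
                    nn.Linear(4, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, psf_size * psf_size),
                )

            def forward(self, coords):  # coords: (N, 4) = (u, v, focus, depth)
                logits = self.net(coords)
                psf = torch.softmax(logits, dim=-1)  # non-negative, sums to 1
                return psf.view(-1, self.psf_size, self.psf_size)

        model = BlurFieldMLP()
        coords = torch.tensor([[0.25, 0.75, 0.3, 1.5]])  # arbitrary query point
        psf = model(coords)
        print(psf.shape, psf.sum().item())  # torch.Size([1, 11, 11]), ~1.0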

    Earth resources: A continuing bibliography with indexes (issue 55)

    Get PDF
    This bibliography lists 368 reports, articles and other documents introduced into the NASA scientific and technical information system between July 1 and September 30, 1987. Emphasis is placed on the use of remote sensing and geographical instrumentation in spacecraft and aircraft to survey and inventory natural resources and urban areas. Subject matter is grouped according to agriculture and forestry, environmental changes and cultural resources, geodesy and cartography, geology and mineral resources, hydrology and water management, data processing and distribution systems, instrumentation and sensors, and economic analysis.

    PERISCOPE: PERIapsis Subsurface Cave Optical Explorer

    Get PDF
    The PERISCOPE study focuses primarily on lunar caves, due to their potential to be imaged in orbital scenarios. In the intervening years, from 2012 to 2015, scientists developed further rationales for and interest in the scientific value of lunar caves. The caves do not appear likely to be sinks for water ice, because their relatively warm temperatures (about -20 degrees Celsius) lead to geologically rapid migration of unbound water through sublimation and its inevitable loss through any skylights. However, the skylights themselves reveal apparent complex layering, which may point to a more complex, multi-stage evolution of mare flood basalts than previously considered, so their examination may provide even more insight into the lunar mare, which in turn provide a primary record of early solar system crustal formation and evolution processes. These insights are also of interest to the exoplanet research community, who find the information useful for calibrating star formation and planetary evolution models. In addition, catalogues of lunar and martian skylights, "caves", or "atypical pit craters" have been developed, with counts for both bodies now in the low hundreds thanks to additional high-resolution surveys and revisiting of existing image databases.

    3D Recording and Interpretation for Maritime Archaeology

    Get PDF
    This open access peer-reviewed volume was inspired by the UNESCO UNITWIN Network for Underwater Archaeology International Workshop held at Flinders University, Adelaide, Australia in November 2016. Content is based on, but not limited to, the work presented at the workshop, which was dedicated to 3D recording and interpretation for maritime archaeology. The volume consists of contributions from leading international experts as well as up-and-coming early career researchers from around the globe. The content of the book includes recording and analysis of maritime archaeology through emerging technologies, including both practical and theoretical contributions. Topics include photogrammetric recording, laser scanning, marine geophysical 3D survey techniques, virtual reality, 3D modelling and reconstruction, data integration and Geographic Information Systems. The principal incentive for this publication is the ongoing rapid shift in the methodologies of maritime archaeology in recent years and a marked increase in the use of 3D and digital approaches. This convergence of digital technologies such as underwater photography and photogrammetry, 3D sonar, 3D virtual reality, and 3D printing has highlighted a pressing need for these new methodologies to be considered together, both in terms of defining the state-of-the-art and for consideration of future directions. As a scholarly publication, the audience for the book includes students and researchers, as well as professionals working in various aspects of archaeology, heritage management, education, museums, and public policy. It will be of special interest to those working in the field of coastal cultural resource management and underwater archaeology but will also be of broader interest to anyone interested in archaeology and to those in other disciplines who are now engaging with 3D recording and visualization.

    Efficient and Accurate Disparity Estimation from MLA-Based Plenoptic Cameras

    Get PDF
    This manuscript focuses on processing images from microlens-array based plenoptic cameras. These cameras capture the light field in a single shot, recording a greater amount of information than conventional cameras and enabling a whole new set of applications. However, the enhanced information introduces additional challenges and results in higher computational effort. First, the image is composed of thousands of micro-lens images, making it an unusual case for standard image processing algorithms. Second, disparity information has to be estimated from those micro-images to create a conventional image and a three-dimensional representation. The work in this thesis is therefore devoted to analysing plenoptic images and proposing methodologies to deal with them. A full framework for plenoptic cameras has been built, including the contributions described in this thesis: a blur-aware calibration method to model a plenoptic camera, an optimization method to accurately select the best microlens combination, and an overview of the different types of plenoptic cameras and their representations. Datasets consisting of both real and synthetic images have been used to create a benchmark for different disparity estimation algorithms and to inspect the behaviour of disparity under different compression rates. A robust depth estimation approach has also been developed for light field microscopy and images of biological samples.
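
    As a generic illustration of estimating disparity between micro-images (or any rectified image pair), the following sketch performs simple 1D block matching with a sum-of-absolute-differences cost; it illustrates the basic idea only and is not the estimation pipeline developed in the thesis.

        import numpy as np

        def block_match(left, right, max_disp=8, win=3):
            """Brute-force SAD block matching along rows of a rectified pair."""
            H, W = left.shape
            disparity = np.zeros((H, W), dtype=np.int32)
            pad = win // 2
            for y in range(pad, H - pad):
                for x in range(pad + max_disp, W - pad):
                    ref = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
                    costs = [np.abs(ref - right[y - pad:y + pad + 1,
                                                x - d - pad:x - d + pad + 1]).sum()
                             for d in range(max_disp + 1)]
                    disparity[y, x] = int(np.argmin(costs))
            return disparity

        # Synthetic pair: the right view is the left view shifted by 2 pixels.
        left = np.random.rand(32, 32)
        right = np.roll(left, -2, axis=1)
        print(np.median(block_match(left, right)[4:-4, 12:-4]))  # ~2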