
    3D Modelling from Real Data

    The genesis of a 3D model follows two fundamentally different paths. On one hand are CAD-generated models, where the shape is defined by a user's drawing actions, either operating with mathematical “bricks” such as B-Splines, NURBS or subdivision surfaces (mathematical CAD modelling), or directly drawing small polygonal planar facets in space to approximate complex free-form shapes (polygonal CAD modelling). This approach can be used both for ideal elements (a project, a fantasy shape in the mind of a designer, a 3D cartoon, etc.) and for real objects. In the latter case the object must first be surveyed in order to generate a drawing coherent with the real artefact. If the surveying process is more than a rough acquisition of simple distances followed by a substantial amount of manual drawing, a scene can be modelled in 3D by capturing many points of its geometrical features with a digital instrument and connecting them by polygons; this produces a 3D result similar to a polygonal CAD model, with the difference that the generated shape is here an accurate 3D acquisition of a real object (reality-based polygonal modelling). Considering only devices operating on the ground, 3D capturing techniques for the generation of reality-based 3D models span passive sensors and image data (Remondino and El-Hakim, 2006), optical active sensors and range data (Blais, 2004; Shan and Toth, 2008; Vosselman and Maas, 2010), classical surveying (e.g. total stations or Global Navigation Satellite Systems, GNSS), 2D maps (Yin et al., 2009), or an integration of the aforementioned methods (Stumpfel et al., 2003; Guidi et al., 2003; Beraldin, 2004; Stamos et al., 2008; Guidi et al., 2009a; Remondino et al., 2009; Callieri et al., 2011).
The choice depends on the required resolution and accuracy, the object's dimensions, location constraints, the instrument's portability and usability, surface characteristics, the working team's experience, the project's budget, the final goal, etc. Although the image-based approach has clear potential, and recent developments in automated and dense image matching have made it accessible to non-experts, the ease of use and reliability of optical active sensors in acquiring 3D data is generally a good motivation to decline image-based approaches. Moreover, the great advantage of active sensors is that they immediately deliver dense and detailed 3D point clouds whose coordinates are metrically defined. Image data, on the other hand, require some processing and a mathematical formulation to transform the two-dimensional image measurements into metric three-dimensional coordinates. Image-based modelling techniques (mainly photogrammetry and computer vision) are generally preferred for monuments or architectures with regular geometric shapes, low-budget projects, experienced working teams, or time and location constraints on data acquisition and processing. This chapter is intended as an updated review of reality-based 3D modelling in terrestrial applications, covering the different categories of 3D sensing devices and the related data processing pipelines.

    Image-based Material Editing

    Photo editing software allows digital images to be blurred, warped or re-colored at the touch of a button. However, it is not currently possible to change the material appearance of an object except by painstakingly painting over the appropriate pixels. Here we present a set of methods for automatically replacing one material with another, completely different material, starting with only a single high dynamic range image and an alpha matte specifying the object. Our approach exploits the fact that human vision is surprisingly tolerant of certain (sometimes enormous) physical inaccuracies. Thus, it may be possible to produce a visually compelling illusion of material transformations without fully reconstructing the lighting or geometry. We employ a range of algorithms depending on the target material. First, an approximate depth map is derived from the image intensities using bilateral filters. The resulting surface normals are then used to map data onto the surface of the object to specify its material appearance. To create transparent or translucent materials, the mapped data are derived from the object's background. To create textured materials, the mapped data are a texture map. The surface normals can also be used to apply arbitrary bidirectional reflectance distribution functions to the surface, allowing us to simulate a wide range of materials. To facilitate the process of material editing, we generate the HDR image with a novel algorithm that is robust against noise in individual exposures. This ensures the removal of any noise that could adversely affect the shape recovery of the objects. We also present an algorithm to automatically generate alpha mattes. This algorithm requires as input two images, one where the object is in focus and one where the background is in focus, and then automatically produces an approximate matte indicating which pixels belong to the object. The result is then refined by a second algorithm to generate an accurate alpha matte, which can be given as input to our material editing techniques.
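The two-image matting idea can be sketched with a simple per-pixel focus measure: whichever exposure is locally sharper at a pixel decides its label. This is only an illustrative reconstruction, not the authors' algorithm; the gradient-energy measure and window size are assumptions.

```python
import numpy as np

def local_sharpness(img, k=5):
    # Per-pixel focus measure: squared gradient magnitude, box-summed
    # over a k x k window.
    gy, gx = np.gradient(img.astype(float))
    energy = gx ** 2 + gy ** 2
    pad = k // 2
    padded = np.pad(energy, pad, mode="edge")
    out = np.zeros_like(energy)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + energy.shape[0], dx:dx + energy.shape[1]]
    return out

def approximate_matte(obj_focused, bg_focused, k=5):
    # A pixel is labelled "object" where the object-focused exposure
    # is locally sharper than the background-focused one.
    return (local_sharpness(obj_focused, k) >
            local_sharpness(bg_focused, k)).astype(float)
```

The hard 0/1 output corresponds to the approximate matte; the paper's second refinement stage, which produces fractional alpha values, is not modelled here.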

    Target detection using oblique hyperspectral imagery: A Domain trade study

    Hyperspectral imagery (HSI) has proven to be a useful tool for the task of target detection. Various processes have been developed that manipulate HSI data in different ways in order to render the data usable for target detection activities. A fundamental initial step in each of these processes is ensuring that the HSI data set obtained is in the same domain as the target's spectral signature. In general, remotely sensed HSI is collected in terms of digital counts which are calibrated to units of radiance, whereas spectral target signatures are normally available in units of reflectance. This work investigates target detection using simulated hyperspectral imagery captured from highly oblique angles. Specifically, this thesis seeks to determine which domain, radiance or reflectance, is more appropriate for the off-nadir case. An oblique atmospheric compensation technique based on the empirical line method (ELM) is presented and used to compensate the simulated data used in this study. The resulting reflectance cubes are subjected to a variety of standard target detection processes. A forward modeling technique appropriate for oblique hyperspectral data is also presented; it allows standard target detection techniques to be applied in the radiance domain. Results obtained from the radiance and reflectance domains are comparable. Under ideal circumstances, however, the radiance-domain results tend to be superior to those observed in the reflectance domain. These somewhat favorable radiance-domain results, considered together with the practicality and potential operational applicability of the forward modeling technique presented, suggest that the radiance domain is an attractive option for oblique hyperspectral target detection.
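The empirical line method underlying the compensation step fits, in every band, a linear mapping from reflectance to at-sensor radiance using in-scene panels of known reflectance, then inverts that line to recover reflectance. A minimal sketch follows; the panel values and two-band layout are made up for illustration and the oblique-geometry extensions are not modelled.

```python
import numpy as np

def elm_fit(panel_radiance, panel_reflectance):
    # panel_radiance: (n_panels, n_bands) at-sensor radiance of the panels.
    # panel_reflectance: (n_panels,) known (band-flat) panel reflectances.
    # Fit L = gain * rho + offset independently in every band.
    A = np.column_stack([panel_reflectance, np.ones_like(panel_reflectance)])
    coeffs, *_ = np.linalg.lstsq(A, panel_radiance, rcond=None)
    return coeffs[0], coeffs[1]   # gains and offsets, each (n_bands,)

def elm_invert(radiance, gains, offsets):
    # Map measured radiance back to reflectance with the fitted lines.
    return (radiance - offsets) / gains
```

With more than two panels the least-squares fit also averages out measurement noise, which is why bright and dark panels are normally both included.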

    3D URBAN GEOVISUALIZATION: IN SITU AUGMENTED AND MIXED REALITY EXPERIMENTS

    In this paper, we assume that augmented reality (AR) and mixed reality (MR) are relevant contexts for 3D urban geovisualization, especially for supporting the design of urban spaces. We propose to design an in situ MR application that could be helpful for urban designers, providing tools to interactively remove or replace buildings in situ. This use case requires advances over existing geovisualization methods. We highlight the need to adapt and extend existing 3D geovisualization pipelines in order to meet the specific requirements of AR/MR applications, in particular for data rendering and interaction. To reach this goal, we focus on and implement four elementary in situ and ex situ AR/MR experiments: each type of experiment helps to frame and specify a particular subproblem, i.e. scale modification, pose estimation, matching between scene and urban project realism, and the mixing of real and virtual elements through portals, while proposing occlusion handling, rendering and interaction techniques to solve them.

    Simulation of 3D Model, Shape, and Appearance Aging by Physical, Chemical, Biological, Environmental, and Weathering Effects

    Physical, chemical, biological, environmental, and weathering effects produce a range of changes to a 3D model's shape and appearance. Time introduces an assortment of aging, weathering, and decay processes such as dust, mold, patina, and fractures. These time-varying imperfections provide the viewer with important visual cues for realism and age. Existing approaches that create realistic aging effects still require an excessive amount of time and effort by extremely skilled artists, who must tediously hand-fashion blemishes or simulate simple procedural rules. Most techniques do not scale well to large virtual environments. These limitations have prevented widespread adoption of many aging and weathering algorithms. We introduce a novel method for geometrically and visually simulating these processes in order to create visually realistic scenes. This work proposes the mu-ton system, a framework for scattering numerous mu-ton particles throughout an environment to mutate and age the world. We take a point-based representation to discretize both the decay effects and the underlying geometry. The mu-ton particles simulate interactions between multiple phenomena. This mutation process changes both the physical properties of the external surface layer and the internal volume substrate. The mutation may add or subtract imperfections in the environment as objects age. First we review related work in aging and weathering, and illustrate the limitations of current data-driven and physically based approaches. We provide a taxonomy of aging processes. We then describe the structure of our mu-ton framework and give the user a short tutorial on how to set up different effects. The first application of the mu-ton system focuses on inorganic aging and decay. We demonstrate changing material properties on a variety of objects and simulate their transformation. We show our system aging a simple city alley with different materials.
The second application of the mu-ton system focuses on organic aging. We provide details on simulating a variety of growth processes. We then evaluate and analyze the mu-ton framework and compare our results with gamma-ton tracing. Finally, we outline the contributions this thesis makes to computer-based aging and weathering simulation.
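The particle-scattering idea can be illustrated with a toy version on a point-based surface. This is only a schematic analogue, assuming uniform random deposition and a single scalar decay attribute, and stands in for, rather than reproduces, the mu-ton transport and mutation rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def age_surface(points, decay, n_particles=1000, radius=0.1, rate=0.05):
    # points: (N, 3) samples of a point-based surface.
    # decay: (N,) per-sample decay attribute, clamped to [0, 1].
    # Each particle lands on a random surface sample and deposits decay
    # on every sample within `radius` (a stand-in for transport physics).
    for h in rng.integers(0, len(points), n_particles):
        dist = np.linalg.norm(points - points[h], axis=1)
        decay = np.minimum(1.0, decay + rate * (dist < radius))
    return decay
```

A renderer would then modulate surface color, roughness, or displacement by the accumulated decay value to make the aging visible.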

    Sampling the Multiple Facets of Light

    The theme of this thesis revolves around three important manifestations of light, namely its corpuscular, wave and electromagnetic nature. Our goal is to exploit these principles to analyze, design and build imaging modalities by developing new signal processing and algorithmic tools, based in particular on sampling and sparsity concepts. First, we introduce a new sampling scheme called variable pulse width, based on the finite rate of innovation (FRI) sampling paradigm. This new framework enables the sampling and perfect reconstruction of weighted sums of Lorentzians; perfect reconstruction from the sampled signals is guaranteed by a set of theorems. Second, we turn to the context of light and study its reflection, based on the corpuscular model of light. More precisely, we propose to use our FRI-based model to represent bidirectional reflectance distribution functions. We develop dedicated light domes to acquire reflectance functions and use the measurements obtained to demonstrate the usefulness and versatility of our model. In particular, we concentrate on the representation of specularities, the sharp and bright components generated by the direct reflection of light on surfaces. Third, we explore the wave nature of light through Lippmann photography, a century-old photographic technique that acquires the entire spectrum of visible light. This fascinating process captures the interference patterns created by the exposed scene inside the depth of a photosensitive plate. When the developed plate is illuminated with a neutral light source, the reflected spectrum corresponds to that of the exposed scene. We propose a mathematical model which precisely explains the technique and demonstrate that the spectrum reproduction suffers from a number of distortions due to the finite depth of the plate and the choice of reflector.
In addition to describing these artifacts, we present an algorithm to invert them, essentially recovering the original spectrum of the exposed scene. Next, the wave nature of light is further generalized to the electromagnetic theory, which we invoke to leverage the concept of polarization of light. We also return to the topic of representing reflectance functions, focusing this time on separating the specular component from the other reflections. We exploit the fact that the polarization of light is preserved in specular reflections and investigate camera designs with polarizing micro-filters of different orientations placed just in front of the camera sensor; the different polarizations of the filters create a mosaic image, from which we propose to extract the specular component. We apply our demosaicing method to several scenes and additionally demonstrate that our approach improves photometric stereo. Finally, we delve into the problem of retrieving the phase information of a sparse signal from the magnitude of its Fourier transform. We propose an algorithm that resolves the phase retrieval problem for sparse signals in three stages. Unlike traditional approaches that recover a discrete approximation of the underlying signal, our algorithm estimates the signal on a continuous domain, which makes it the first of its kind. The concluding chapter outlines several avenues for future research, such as new optical devices (displays and digital cameras) inspired by the topic of Lippmann photography.
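The micro-filter separation rests on a standard observation: diffuse reflection is unpolarized while specular reflection retains polarization, so the intensity behind a linear polarizer at angle theta follows I(theta) = A + B cos(2 theta - 2 phi), where only the specular part drives the oscillation B. A minimal per-pixel sketch from four filter orientations follows; the 0/45/90/135-degree layout is an assumption, and the thesis's demosaicing of the mosaic image is not modelled.

```python
import numpy as np

def split_specular(i0, i45, i90, i135):
    # Intensities behind linear polarizers at 0, 45, 90 and 135 degrees.
    # Model: I(theta) = A + B*cos(2*theta - 2*phi). The unpolarized diffuse
    # part contributes only to the mean A; the polarized specular part
    # produces the oscillation of amplitude B.
    a = (i0 + i45 + i90 + i135) / 4.0
    b = 0.5 * np.sqrt((i0 - i90) ** 2 + (i45 - i135) ** 2)
    diffuse = 2.0 * (a - b)   # = 2 * I_min
    specular = 2.0 * b        # = I_max - I_min
    return diffuse, specular
```

The same arithmetic applies elementwise to whole images, so the four arguments can be the four demosaiced polarization channels.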

    Synthetic simulation and modeling of image intensified CCDs (IICCD)

    Image intensifying cameras have been found to be extremely useful in low-light-level (LLL) scenarios, including military night-vision and civilian rescue operations. These sensors utilize the available visible-region photons and an amplification process to produce high-contrast imagery. Today's image intensifiers are usually attached to a CCD and incorporate a microchannel plate (MCP) for amplification purposes. These devices are commonly referred to as image intensified CCDs (IICCD). To date, there has not been much work in the area of still-frame, low-light-level simulation with radiometric accuracy in mind; most work has been geared toward real-time simulations where the emphasis is on situational awareness. This research proposes that a high-fidelity simulation environment capable of producing radiometrically correct multi-band imagery for low-light-level conditions can be an extremely useful tool for sensor design engineers and image analysts. The Digital Imaging and Remote Sensing (DIRS) laboratory's Image Generation (DIRSIG) model has evolved to respond to such modeling requirements. The presented work demonstrates a low-light-level simulation environment (DIRSIG) which incorporates man-made secondary sources and exoatmospheric sources such as the moon and starlight. Similarly, a user-defined IICCD camera model has been developed that takes into account parameters such as MTF and noise.
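A toy version of such a sensor chain, with Poisson shot noise, a mean MCP gain, a Gaussian stand-in for the system MTF, and additive read noise, might look like the following. The parameter values and the Gaussian blur are illustrative assumptions, not DIRSIG's actual camera model, which also handles gain fluctuation and multi-band radiometry.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_kernel(sigma, radius=3):
    # Normalized 1-D Gaussian used as a separable blur kernel.
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def convolve_sep(img, k):
    # Separable convolution along rows, then columns (edge-padded).
    pad = len(k) // 2
    rows = np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, pad, mode="edge"), k, mode="valid"),
        1, img)
    return np.apply_along_axis(
        lambda c: np.convolve(np.pad(c, pad, mode="edge"), k, mode="valid"),
        0, rows)

def simulate_iiccd(photon_image, gain=100.0, sigma=1.0, read_noise=2.0):
    # photon_image: expected photon count per pixel at the photocathode.
    photons = rng.poisson(photon_image).astype(float)          # shot noise
    amplified = photons * gain                                 # mean MCP gain
    blurred = convolve_sep(amplified, gaussian_kernel(sigma))  # MTF stand-in
    return blurred + rng.normal(0.0, read_noise, blurred.shape)  # read noise
```

At low photon counts the Poisson term dominates, which is exactly why radiometrically correct photon-level inputs matter for LLL simulation.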

    Realistic visualisation of cultural heritage objects

    This research investigation used digital photography in a hemispherical dome, enabling a set of 64 photographic images of an object to be captured in perfect pixel register, with each image illuminated from a different direction. This representation turns out to be much richer than a single 2D image, because it contains information at each point about both the 3D shape of the surface (gradient and local curvature) and the directionality of reflectance (gloss and specularity). It thereby enables not only interactive visualisation through viewer software, giving the illusion of 3D, but also the reconstruction of an actual 3D surface and highly realistic rendering of a wide range of materials. The following seven outcomes of the research are claimed as novel and therefore as representing contributions to knowledge in the field: a method for determining the geometry of an illumination dome; an adaptive method for finding surface normals by bounded regression; generating 3D surfaces from photometric stereo; the relationship between surface normals and specular angles; modelling surface specularity by a modified Lorentzian function; determining the optimal wavelengths of colour laser scanners; and characterising colour devices by synthetic reflectance spectra.
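Recovering per-pixel surface normals from pixel-registered images lit from known directions reduces, under a Lambertian assumption, to a per-pixel least-squares solve. The sketch below shows that classical formulation; it is generic photometric stereo, not the thesis's adaptive bounded-regression method, and it ignores shadows and the specular lobes the thesis models separately.

```python
import numpy as np

def photometric_stereo(L, I):
    # L: (m, 3) unit light directions; I: (m, n_pixels) observed intensities.
    # Lambertian model: I = L @ (albedo * n). Solve all pixels in one
    # least-squares call, then split the result into albedo and unit normal.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # G is (3, n_pixels)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-12)
    return normals, albedo
```

With 64 dome images the system is heavily overdetermined, which is what makes the robust, outlier-rejecting variants described in the thesis possible.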

    Proceedings of the 2011 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    This book is a collection of 15 reviewed technical reports summarizing the presentations at the 2011 Joint Workshop of Fraunhofer IOSB and the Institute for Anthropomatics, Vision and Fusion Laboratory. The covered topics include image processing, optical signal processing, visual inspection, pattern recognition and classification, human-machine interaction, world and situation modeling, autonomous system localization and mapping, information fusion, and trust propagation in sensor networks.