
    Self-correction of 3D reconstruction from multi-view stereo images

    We present a self-correction approach to improving the 3D reconstruction of a multi-view 3D photogrammetry system. The approach repairs reconstructed 3D surfaces damaged by depth discontinuities. Due to self-occlusion, multi-view range images have to be acquired and integrated into a watertight, non-redundant mesh model in order to cover the extended surface of an imaged object. The integrated surface often suffers from “dent” artifacts produced by depth discontinuities in the multi-view range images. In this paper we propose a novel approach to correcting the integrated 3D surface so that the dent artifacts can be repaired automatically. We show examples of 3D reconstruction to demonstrate the improvement achieved by the self-correction approach, which can also be extended to integrate range images obtained from alternative range capture devices.
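    The abstract does not detail the detection step, but the depth discontinuities it refers to can be illustrated with a minimal sketch: flag range-image pixels whose depth jumps sharply to a neighbour, so those samples can be down-weighted or re-estimated during integration. The function name and threshold are illustrative, not taken from the paper.

```python
import numpy as np

def discontinuity_mask(depth, threshold=0.05):
    """Flag pixels of a range image that sit on a depth discontinuity.

    A pixel is marked when the depth jump to any 4-neighbour exceeds
    `threshold` (same units as `depth`). Such samples are the ones that
    typically produce "dent" artifacts when multi-view range images are
    merged into one mesh.
    """
    mask = np.zeros(depth.shape, dtype=bool)
    # Depth jumps between horizontally and vertically adjacent pixels.
    dx = np.abs(np.diff(depth, axis=1)) > threshold
    dy = np.abs(np.diff(depth, axis=0)) > threshold
    # Mark both pixels on either side of each large jump.
    mask[:, :-1] |= dx
    mask[:, 1:] |= dx
    mask[:-1, :] |= dy
    mask[1:, :] |= dy
    return mask

# A flat 4x4 range image with one step edge down the middle:
# only the two columns straddling the step are flagged.
depth = np.array([[1.0, 1.0, 2.0, 2.0]] * 4)
print(discontinuity_mask(depth, threshold=0.5))
```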

    Saliency-guided integration of multiple scans

    We present a novel method…

    The Iray Light Transport Simulation and Rendering System

    While ray tracing has become increasingly common and path tracing is well understood by now, a major challenge lies in crafting an easy-to-use and efficient system implementing these technologies. Following a purely physically-based paradigm while still allowing for artistic workflows, the Iray light transport simulation and rendering system allows complex scenes to be rendered at the push of a button and thus makes accurate light transport simulation widely available. In this document we discuss the challenges and implementation choices that follow from our primary design decisions, demonstrating that such a rendering system can be made a practical, scalable, and efficient real-world application that has been adopted by various companies across many fields and is in use by many industry professionals today.

    Multimodal enhancement-fusion technique for natural images

    Master's degree. University of KwaZulu-Natal, Durban. This dissertation presents a multimodal enhancement-fusion (MEF) technique for natural images. The MEF is expected to add value to machine vision applications and to personal image collections for the human user. Image enhancement techniques, and the metrics used to assess their performance, are prolific, and each is usually optimised for a specific objective. The MEF proposes a framework that adaptively fuses multiple enhancement objectives into a seamless pipeline. Given a segmented input image and a set of enhancement methods, the MEF applies all the enhancers to the image in parallel, identifies the most appropriate enhancement in each image segment, and finally fuses the differentially enhanced segments seamlessly. To begin with, this dissertation studies targeted contrast enhancement methods and the performance metrics that can be utilised in the proposed MEF. It addresses a selection of objective assessment metrics for contrast-enhanced images and determines their relationship with the subjective assessment of human visual systems, in order to identify which objective metrics best approximate human assessment and may therefore replace tedious human assessment surveys. A human visual assessment survey is then conducted on the same dataset to ascertain image quality as perceived by a human observer. The interrelated concepts of naturalness and detail were found to be key motivators of human visual assessment. Findings show that no single quantitative metric correlates well with human perception of naturalness and detail; however, a combination of two or more metrics may be used to approximate the complex human visual response. Thereafter, this dissertation proposes the multimodal enhancer that adaptively selects the optimal enhancer for each image segment.
MEF focusses on improving chromatic irregularities such as poor contrast distribution. It deploys a concurrent enhancement pathway that subjects an image to multiple image enhancers in parallel, followed by a fusion algorithm that creates a composite image combining the strengths of each enhancement path. The study develops a framework for parallel image enhancement, followed by parallel image assessment and selection, leading to a final merging of selected regions from the enhanced set. The output combines desirable attributes from each enhancement pathway to produce a result superior to any single path taken alone. The study showed that the proposed MEF technique performs well for most image types: MEF is subjectively favourable to a human panel and achieves better objective image quality assessment scores than other enhancement methods.
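    The enhance-in-parallel, assess, select, and merge pipeline described above can be sketched as follows. The two toy enhancers, the RMS-contrast metric, and the hard per-segment copy (rather than seamless blending) are simplifying assumptions for illustration, not the dissertation's actual components.

```python
import numpy as np

def rms_contrast(values):
    """Simple objective metric: RMS contrast (std of intensities)."""
    return float(np.std(values))

def fuse_enhancements(image, segments, enhancers, metric=rms_contrast):
    """Hypothetical MEF-style pipeline.

    Every enhancer is applied to the whole image in parallel; for each
    labelled segment the enhancement scoring best under `metric` is
    kept, and the winning segments are pasted into one output image.
    """
    candidates = [f(image) for f in enhancers]   # concurrent pathway
    out = np.empty_like(image)
    for label in np.unique(segments):
        region = segments == label
        scores = [metric(c[region]) for c in candidates]
        out[region] = candidates[int(np.argmax(scores))][region]
    return out

# Two toy "enhancers": a gamma stretch and a clipped linear gain.
img = np.array([[0.1, 0.3], [0.7, 0.9]])
seg = np.array([[0, 0], [1, 1]])          # two segments: top row, bottom row
result = fuse_enhancements(img, seg, [lambda x: x ** 0.5,
                                      lambda x: np.clip(1.5 * x, 0.0, 1.0)])
```

Each segment can end up taking a different path: the gain wins where it raises contrast without clipping, while the gamma stretch wins where the gain saturates.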

    The application of neural network data mining algorithm into mixed pixel classification in geographic information system environment

    With the rapid growth of satellite technology and the increase in spatial resolution, hyperspectral imaging sensors are frequently used in research and development as well as in some semi-operational scenarios. Hyperspectral imagery also offers unique applications such as terrain delimitation, object detection, material identification, and atmospheric characterization. However, hyperspectral imaging systems produce large data sets that are not easily interpretable by visual analysis and therefore require automated processing algorithms. The pattern recognition task associated with hyperspectral images is particularly complex due to the presence of a considerable number of mixed pixels. This paper discusses the development of a data mining and pattern recognition algorithm to handle the complexity of hyperspectral remote sensing images in a Geographical Information Systems environment. Region growing segmentation and radial basis function algorithms are considered powerful tools for minimizing the mixed-pixel classification error.
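    A minimal sketch of the radial-basis-function idea applied to mixed pixels: score each pixel spectrum against class prototype spectra with a Gaussian RBF and normalise the activations into fractional memberships, so a mixed pixel is not forced into a single land-cover class. The prototype spectra and `sigma` are illustrative placeholders, not values from the paper.

```python
import numpy as np

def rbf_memberships(pixels, prototypes, sigma=0.1):
    """Soft classification of (possibly mixed) hyperspectral pixels.

    `pixels` is (n, bands), `prototypes` is (classes, bands). Each pixel
    is scored against every prototype with a Gaussian radial basis
    function; the normalised activations act as fractional class
    memberships.
    """
    # Squared spectral distance between every pixel and every prototype.
    d2 = ((pixels[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    act = np.exp(-d2 / (2.0 * sigma ** 2))
    return act / act.sum(axis=1, keepdims=True)

# Two 3-band prototypes (e.g. water vs vegetation) and one mixed pixel
# lying exactly between them: memberships come out 50/50.
protos = np.array([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]])
mixed = np.array([[0.5, 0.5, 0.5]])
print(rbf_memberships(mixed, protos, sigma=0.5))
```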

    Range Tracing

    In this report, we tackle the problem of merging an arbitrary number of range scans (depth images) into a single surface mesh. The mesh-based representation is superior to point-based approaches since it contains important connectivity information. Most previous mesh-based merge methods, however, lose surface details by using simplifying intermediate surface representations (e.g. implicit functions). Such details are essential for further processing steps, especially for feature-preserving reconstruction methods. Our method preserves all information (connectivity and the original measurement positions) as edges and vertices of a merged surface mesh. It avoids aliasing and smoothing artifacts, adapts to the local scanner sampling and is independent of the overlap size of the input range scans. The algorithm consists of only two basic operations and is therefore simple to implement. We evaluate the performance of our approach on highly detailed real-world scans acquired with different devices.

    Graphene-enabled adaptive infrared textiles

    Interactive clothing requires sensing and display functionalities to be embedded on textiles. Despite the significant progress of electronic textiles, the integration of optoelectronic materials on fabrics remains an outstanding challenge. In this Letter, using the electro-optical tunability of graphene, we report adaptive optical textiles with electrically controlled reflectivity and emissivity covering the infrared and near-infrared wavelengths. We achieve electro-optical modulation by reversible intercalation of ions into graphene layers laminated on fabrics. We demonstrate a new class of infrared textile devices including display, yarn, and stretchable devices using natural and synthetic textiles. To show the promise of our approach, we fabricated an active device directly onto a t-shirt, which enables long-wavelength infrared communication via modulation of the thermal radiation from the human body. The results presented here provide complementary technologies which could leverage the ubiquitous use of functional textiles.

    A Survey of Ocean Simulation and Rendering Techniques in Computer Graphics

    This paper presents a survey of ocean simulation and rendering methods in computer graphics. To model and animate the ocean's surface, these methods rely on two main approaches. On the one hand, some approximate ocean dynamics with parametric, spectral or hybrid models and use empirical laws from oceanographic research; we will see that this type of method essentially allows the simulation of ocean scenes in the deep-water domain, without breaking waves. On the other hand, physically based methods use the Navier-Stokes equations (NSE) to represent breaking waves and, more generally, the ocean surface near the shore. We also describe ocean rendering methods in computer graphics, with special interest in the simulation of phenomena such as foam and spray, and the interaction of light with the ocean surface.
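    The parametric/spectral family of methods the survey covers can be illustrated with the simplest possible model: a deep-water surface built as a sum of sinusoids, each advected by the deep-water dispersion relation omega = sqrt(g * k). The wave parameters below are made up for illustration and no breaking is modelled.

```python
import math

def ocean_height(x, t, waves, g=9.81):
    """Deep-water ocean surface height as a sum of sinusoids.

    `waves` is a list of (amplitude, wavenumber, phase) triples; each
    component travels with angular frequency omega = sqrt(g * k), the
    deep-water dispersion relation, so longer waves move faster.
    """
    h = 0.0
    for amp, k, phase in waves:
        omega = math.sqrt(g * k)
        h += amp * math.cos(k * x - omega * t + phase)
    return h

# A long swell plus a shorter ripple (illustrative parameters).
waves = [(0.5, 0.10, 0.0), (0.2, 0.35, 1.3)]
print(ocean_height(0.0, 0.0, waves))
```

Production spectral methods replace this handful of components with an FFT over a full oceanographic spectrum (e.g. Phillips), but the per-component dispersion step is the same.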