34 research outputs found

    Factored axis-aligned filtering for rendering multiple distribution effects

    Monte Carlo (MC) ray-tracing for photo-realistic rendering often requires hours to render a single image due to the large sampling rates needed for convergence. Previous methods have attempted to filter sparsely sampled MC renders, but these methods have high reconstruction overheads. Recent work has shown fast performance for individual effects, such as soft shadows and indirect illumination, using axis-aligned filtering. While some components of light transport, such as indirect or area illumination, are smooth, they are often multiplied by high-frequency components such as texture, which prevents their sparse sampling and reconstruction. We propose an approach to adaptively sample and filter for simultaneously rendering primary (defocus blur) and secondary (soft shadows and indirect illumination) distribution effects, based on a multi-dimensional frequency analysis of the direct and indirect illumination light fields. We describe a novel approach for factoring texture and irradiance in the presence of defocus blur, which allows pre-filtering of noisy irradiance when the texture is not noisy. Our approach naturally allows different sampling rates for primary and secondary effects, further reducing the overall ray count. While the theory considers only Lambertian surfaces, we obtain promising results for moderately glossy surfaces. We demonstrate a 30x reduction in sampling rate compared to equal-quality noise-free MC. Combined with a GPU implementation and low filtering overhead, we can render scenes with complex geometry and diffuse and glossy BRDFs in a few seconds.
    National Science Foundation (U.S.) (Grant CGV 1115242); National Science Foundation (U.S.) (Grant CGV 1116303); Intel Corporation (Science and Technology Center for Visual Computing)
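    A minimal illustration of the factoring idea, written as a Python sketch rather than the paper's GPU implementation: if outgoing radiance is assumed to separate into a noise-free texture term and a noisy irradiance term, only the irradiance needs to be filtered before recombination. The fixed Gaussian bandwidth and the buffer names below are illustrative assumptions; the actual method derives per-pixel, axis-aligned filter widths from its frequency analysis.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def factored_filter(texture, irradiance, sigma):
            """Smooth only the noisy 2-D irradiance buffer, then re-multiply
            by the noise-free texture.  A constant Gaussian width stands in
            for the per-pixel axis-aligned filter sizes of the actual method."""
            return texture * gaussian_filter(irradiance, sigma=sigma)

        # Hypothetical usage with sparsely sampled Monte Carlo buffers:
        # texture    = albedo rendered at 1 sample per pixel (noise-free)
        # irradiance = noisy irradiance from a few secondary rays per pixel
        # image      = factored_filter(texture, irradiance, sigma=4.0)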

    3D Object Recognition Based On Constrained 2D Views

    The aim of the present work was to build a novel 3D object recognition system capable of classifying man-made and natural objects based on single 2D views. The approach to this problem has been motivated by recent theories on biological vision and multiresolution analysis. The project's objectives were the implementation of a system that is able to deal with simple 3D scenes and constitutes an engineering solution to the problem of 3D object recognition, allowing the proposed recognition system to operate in a practically acceptable time frame. The developed system takes further the work on automatic classification of marine phytoplankton carried out at the Centre for Intelligent Systems, University of Plymouth. The thesis discusses the main theoretical issues that prompted the fundamental system design options. The principles and the implementation of the coarse data channels used in the system are described. A new multiresolution representation of 2D views is presented, which provides the classifier module of the system with coarse-coded descriptions of the scale-space distribution of potentially interesting features. A multiresolution analysis-based mechanism is proposed, which directs the system's attention towards potentially salient features. Unsupervised similarity-based feature grouping is introduced, which is used in the coarse data channels to yield feature signatures that are not spatially coherent and provide the classifier module with salient descriptions of object views. A simple texture descriptor is described, which is based on properties of a special wavelet transform. The system has been tested on computer-generated and natural image data sets, in conditions where inter-object similarity was monitored and quantitatively assessed by human subjects, or where the analysed objects were very similar and their discrimination constituted a difficult task even for human experts. The validity of the approaches described above has been proven. Studies conducted with various statistical and artificial neural network-based classifiers have shown that the system is able to perform well in all of the above-mentioned situations. These investigations also made it possible to take further and generalise a number of important conclusions drawn during previous work in the field of 2D shape (plankton) recognition, regarding the behaviour of pattern recognition systems based on multiple coarse data channels and of various classifier architectures. The system is able to deal with difficult field-collected images of objects, and the techniques employed by its component modules make possible its extension to the domain of complex multiple-object 3D scene recognition. The system is expected to find immediate applicability in the field of marine biota classification.
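    As a rough sketch of what a coarse data channel might compute (in Python here, not the thesis' actual implementation), the snippet below builds a simple Gaussian scale-space of a single 2D view and pools band-pass energy onto a coarse grid at each level, yielding a coarse-coded signature for a classifier. The grid size, number of levels, and band-pass measure are illustrative assumptions; the thesis itself uses wavelet-based descriptors and similarity-based feature grouping.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def coarse_channel_signature(view, n_levels=4, grid=(8, 8)):
            """Toy coarse-coding of one 2D view (assumed at least 64x64):
            at each scale-space level, pool band-pass (detail) energy onto
            a coarse grid and concatenate the pooled maps into one vector."""
            signature = []
            current = view.astype(float)
            for _ in range(n_levels):
                blurred = gaussian_filter(current, sigma=2.0)
                band = np.abs(current - blurred)      # detail energy at this scale
                gh, gw = grid
                h, w = band.shape
                pooled = band[:h - h % gh, :w - w % gw] \
                    .reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
                signature.append(pooled.ravel())
                current = blurred[::2, ::2]           # move to the next coarser scale
            return np.concatenate(signature)          # input vector for a classifier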

    New algorithms for the analysis of live-cell images acquired in phase contrast microscopy

    Automated cell detection and characterization is important in many research fields such as wound healing, embryo development, immune system studies, cancer research, parasite spreading, tissue engineering, stem cell research, and drug discovery and testing. Studying in vitro cellular behaviour via live-cell imaging and high-throughput screening involves thousands of images and vast amounts of data, so automated analysis tools relying on machine vision and on non-intrusive methods such as phase contrast microscopy (PCM) are a necessity. However, challenges remain, since PCM images are difficult to analyze because of the bright halo surrounding the cells and the blurry cell-cell boundaries where cells touch. The goal of this project was to develop image processing algorithms to analyze PCM images in an automated fashion, capable of processing large image datasets to extract information related to cellular viability and morphology. To develop these algorithms, a large dataset of myoblast images acquired by live-cell imaging (in PCM) was created, growing the cells in either a serum-supplemented (SSM) or a serum-free (SFM) medium over several passages.
    As a result, algorithms capable of computing the cell-covered surface and cellular morphological features were programmed in Matlab®. The cell-covered surface was estimated using a range filter, a threshold and a minimum cut size in order to study cellular growth kinetics. Results showed that the cells grew at similar paces in both media, but that their growth rate decreased linearly with passage number. The undecimated wavelet transform multivariate image analysis (UWT-MIA) method was developed and used to estimate distributions of cellular morphological features (major axis, minor axis, orientation and roundness) on a very large PCM image dataset using the Gabor continuous wavelet transform. Multivariate data analysis performed on the whole database (around 1 million PCM images) showed in a quantitative manner that myoblasts grown in SFM were more elongated and smaller than cells grown in SSM. The algorithms developed through this project could be used in the future on other cellular phenotypes for high-throughput screening and cell culture control applications.
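    The cell-covered surface step described above lends itself to a short sketch. The Python version below (the thesis' code is in Matlab®) applies a local range filter, a global threshold and a minimum object size; the window size, threshold and minimum size used here are placeholder values, not the calibrated parameters from the thesis.

        import numpy as np
        from scipy.ndimage import maximum_filter, minimum_filter, label

        def cell_covered_fraction(pcm_image, window=5, threshold=10.0, min_size=200):
            """Estimate the fraction of the field of view covered by cells:
            a local range filter (max - min over a small window) highlights
            textured cell regions against the flatter background, a threshold
            binarises the map, and connected regions smaller than min_size
            pixels are discarded."""
            img = pcm_image.astype(float)
            rng = maximum_filter(img, size=window) - minimum_filter(img, size=window)
            mask = rng > threshold
            labels, n_regions = label(mask)
            sizes = np.bincount(labels.ravel())                 # pixel count per region
            mask[np.isin(labels, np.flatnonzero(sizes < min_size))] = False
            return mask.mean()                                  # covered fraction in [0, 1]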

    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    This thesis work is motivated by the potential and promise of image fusion technologies in multi-sensor image fusion systems and applications. With a specific focus on pixel-level image fusion, the stage that follows image registration, we developed a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low-spatial-resolution data sources from different sensors to create a high-resolution, large fused image. This research includes developing a segment-based step built upon a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. Implementation of these image fusion algorithms is completed on top of the graphical user interface we developed. Multi-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative measures such as mutual information, we obtain quantifiable experimental results. We also use the image morphing technique to generate fused image sequences to simulate the results of image fusion. While developing our pixel-level image fusion approaches, we observed several challenges with popular image fusion methods: while their high computational cost and complex processing steps provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time feedback, high flexibility and low computational cost.
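    As a concrete illustration of the kind of low-cost, pixel-level operations involved, the Python sketch below fuses co-registered grayscale source images with a plain weighted average and scores the result with mutual information. The fixed global weights are an assumption made to keep the sketch short; the segment weighted average method described above derives its weights per segment from spatial mixture analysis, and the thesis' software itself is built with Microsoft Visual Studio and MFC.

        import numpy as np

        def weighted_average_fusion(images, weights):
            """Pixel-level fusion of co-registered grayscale images as a
            weighted average.  Fixed global weights stand in for the
            per-segment weights of the segment weighted average method."""
            stack = np.stack([img.astype(float) for img in images])
            w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
            return (w * stack).sum(axis=0) / w.sum()

        def mutual_information(a, b, bins=64):
            """Mutual information between a source image and the fused image,
            one of the quantitative quality measures mentioned above."""
            joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))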