    Performance and quality analysis of convolution-based volume illumination

    Convolution-based techniques for volume rendering are among the fastest in the on-the-fly volumetric illumination category. Such methods, however, are still considerably slower than conventional local illumination techniques. In this paper we describe how to adapt two strategies commonly used to reduce aliasing artifacts, namely pre-integration and supersampling, to such techniques. These strategies can reduce the sampling rate of the lighting information (and thus the number of convolutions), bringing considerable performance benefits. We present a comparative analysis of the performance improvements they offer, and we analyze the (negligible) differences their output exhibits relative to the reference method. These strategies are highly beneficial in setups where direct volume rendering of continuously streaming data is desired and continuous recomputation of full lighting information is too expensive, or where memory constraints make it preferable not to keep additional precomputed volumetric data in memory. In such situations these strategies make single-pass, convolution-based volumetric illumination models viable for a broader range of applications, and this paper provides practical guidelines for using and tuning them for specific use cases.
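
    As an illustration of the lighting-subsampling idea above (not the paper's exact algorithm), the following is a minimal ray-marching sketch in which the expensive convolution-based lighting term is evaluated only every few steps and linearly interpolated in between, while density is still sampled at the full rate. Here `density`, `transfer`, and `light_conv` are hypothetical callables standing in for the volume sampler, the transfer function (returning RGBA), and the convolution-based lighting evaluation.

    import numpy as np

    def march_ray(density, transfer, light_conv, n_samples, light_stride=4):
        color = np.zeros(3)
        transmittance = 1.0
        # Evaluate the expensive lighting term sparsely along the ray...
        coarse_t = np.arange(0, n_samples + light_stride, light_stride)
        coarse_light = np.array([light_conv(t) for t in coarse_t])
        for i in range(n_samples):
            d = density(i)          # full-rate density sample
            rgba = transfer(d)      # classify: rgba[:3] color, rgba[3] opacity
            # ...then interpolate the sparse lighting samples to this step.
            j, frac = divmod(i, light_stride)
            w = frac / light_stride
            light = (1 - w) * coarse_light[j] + w * coarse_light[j + 1]
            # Standard front-to-back compositing with the interpolated light.
            color += transmittance * rgba[3] * rgba[:3] * light
            transmittance *= (1.0 - rgba[3])
        return color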

    Efficient Bayesian-based Multi-View Deconvolution

    Light sheet fluorescence microscopy is able to image large specimens at high resolution by imaging the samples from multiple angles. Multi-view deconvolution can significantly improve the resolution and contrast of the images, but its application has been limited by the large size of the datasets. Here we present a Bayesian-based derivation of multi-view deconvolution that drastically improves the convergence time, and we provide a fast implementation utilizing graphics hardware.
    Comment: 48 pages, 20 figures, 1 table; under review at Nature Methods
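
    For intuition, here is a minimal NumPy sketch of the classical multi-view Richardson-Lucy update that Bayesian multi-view deconvolution derivations build on; it is not the authors' accelerated GPU implementation, and the slow convergence of this plain scheme is precisely what such derivations improve. `views` and `psfs` are assumed to be lists of equally sized 2-D arrays, one per acquisition angle.

    import numpy as np
    from scipy.signal import fftconvolve

    def multiview_richardson_lucy(views, psfs, n_iter=20):
        # Start from a flat estimate at the mean intensity of the first view.
        estimate = np.full(views[0].shape, views[0].mean(), dtype=np.float64)
        for _ in range(n_iter):
            for view, psf in zip(views, psfs):
                # Forward-project the estimate through this view's PSF.
                blurred = fftconvolve(estimate, psf, mode='same')
                ratio = view / np.maximum(blurred, 1e-12)
                # Multiplicative update: correlate ratio with the mirrored PSF.
                estimate *= fftconvolve(ratio, psf[::-1, ::-1], mode='same')
        return estimate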

    Color Constancy Using CNNs

    In this work we describe a Convolutional Neural Network (CNN) that accurately predicts the scene illumination. Taking image patches as input, the CNN works in the spatial domain without using the hand-crafted features employed by most previous methods. The network consists of one convolutional layer with max pooling, one fully connected layer, and three output nodes. Within the network structure, feature learning and regression are integrated into one optimization process, which leads to a more effective model for estimating scene illumination. This approach achieves state-of-the-art performance on a standard dataset of RAW images. Preliminary experiments on images with spatially varying illumination demonstrate the stability of our CNN's local illuminant estimation.
    Comment: Accepted at DeepVision: Deep Learning in Computer Vision 2015 (CVPR 2015 workshop)
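
    The architecture as described maps directly onto a few lines of code. The PyTorch sketch below shows the stated shape: one convolutional layer with max pooling, one fully connected layer, and three output nodes regressing an RGB illuminant per patch. The patch size, channel count, and kernel sizes are assumptions, since the abstract does not fix them.

    import torch
    import torch.nn as nn

    class IlluminantCNN(nn.Module):
        def __init__(self, patch=32, channels=64):
            super().__init__()
            # One conv layer (kernel size assumed), then max pooling.
            self.conv = nn.Conv2d(3, channels, kernel_size=1)
            self.pool = nn.MaxPool2d(kernel_size=8)
            # One fully connected layer with three output nodes (RGB).
            self.fc = nn.Linear(channels * (patch // 8) ** 2, 3)

        def forward(self, x):             # x: (N, 3, patch, patch) RAW patches
            x = self.pool(torch.relu(self.conv(x)))
            return self.fc(x.flatten(1))  # per-patch illuminant estimate

    Per-patch estimates can then be combined, e.g. averaged, into a scene-level illuminant estimate, which is one natural way to handle the spatially varying case mentioned in the abstract.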

    Unconstrained Face Verification using Deep CNN Features

    In this paper, we present an algorithm for unconstrained face verification based on deep convolutional features and evaluate it on the newly released IARPA Janus Benchmark A (IJB-A) dataset. The IJB-A dataset includes real-world unconstrained faces from 500 subjects with full pose and illumination variations, making it much harder than the traditional Labeled Faces in the Wild (LFW) and YouTube Faces (YTF) datasets. The deep convolutional neural network (DCNN) is trained using the CASIA-WebFace dataset. Extensive experiments on the IJB-A dataset are provided.
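
    The verification step itself is simple once the DCNN features are extracted. The sketch below decides same/different identity by thresholding cosine similarity between two feature vectors; the similarity measure and threshold value are illustrative assumptions, not necessarily the paper's exact choices.

    import numpy as np

    def verify(feat_a, feat_b, threshold=0.5):
        # L2-normalize the deep features, then compare by cosine similarity.
        a = feat_a / np.linalg.norm(feat_a)
        b = feat_b / np.linalg.norm(feat_b)
        similarity = float(np.dot(a, b))
        # In practice the threshold is tuned on a validation split.
        return similarity >= threshold, similarity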