
    Generative Image Modeling Using Spatial LSTMs

    Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies that can extend over hundreds of pixels. Recurrent neural networks have been successful at capturing long-range dependencies in a number of problems but have only recently found their way into generative image models. Here we introduce a recurrent image model based on multi-dimensional long short-term memory units, which are particularly suited to image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.
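
    As a rough illustration of the core idea, here is a minimal numpy sketch of a two-dimensional LSTM pass over an image. The gating and parameterization are simplified stand-ins, not the paper's exact multi-dimensional LSTM; the point is that each pixel's state depends on its left and top neighbors, which is what lets dependencies propagate over long distances:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_lstm_pass(image, W, H=8):
    """Run a 2D LSTM over an image, row by row and left to right.

    Each position sees its pixel value plus the hidden states of its
    left and top neighbors, so information can propagate across the
    whole image -- the property that suits such models to long-range
    pixel dependencies.
    """
    rows, cols = image.shape
    h = np.zeros((rows + 1, cols + 1, H))   # zero-padded hidden states
    c = np.zeros((rows + 1, cols + 1, H))   # zero-padded cell states
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            x = np.concatenate(([image[i - 1, j - 1]],
                                h[i - 1, j], h[i, j - 1]))
            gates = W @ x                             # (5H,) pre-activations
            ig, f_top, f_left, o, g = np.split(gates, 5)
            c[i, j] = (sigmoid(f_top) * c[i - 1, j] +
                       sigmoid(f_left) * c[i, j - 1] +
                       sigmoid(ig) * np.tanh(g))
            h[i, j] = sigmoid(o) * np.tanh(c[i, j])
    return h[1:, 1:]   # per-pixel features for predicting p(x_ij | x_<ij)

H = 8
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(5 * H, 1 + 2 * H))   # toy random weights
features = spatial_lstm_pass(rng.random((16, 16)), W, H)
print(features.shape)   # (16, 16, 8)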

    In All Likelihood, Deep Belief Is Not Enough

    Statistical models of natural stimuli provide an important tool for researchers in the fields of machine learning and computational neuroscience. A canonical way to quantitatively assess and compare the performance of statistical models is given by the likelihood. One class of statistical models that has recently gained popularity and has been applied to a variety of complex data is the deep belief network. Analyses of these models, however, have typically been limited to qualitative inspection of samples, because the model likelihood is computationally intractable. Motivated by these circumstances, the present article provides a consistent estimator of the likelihood that is both computationally tractable and simple to apply in practice. Using this estimator, a deep belief network which has been suggested for the modeling of natural image patches is quantitatively investigated and compared to other models of natural image patches. Contrary to earlier claims based on qualitative results, the results presented in this article provide evidence that the model under investigation is not a particularly good model for natural images.
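
    The paper's concrete estimator is not reproduced here, but the underlying principle, a consistent importance-sampling estimate of an intractable marginal likelihood, can be sketched on a toy latent-variable model where the exact answer is known (the model, proposal, and sample counts below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy latent-variable model: h ~ N(0, 1), x | h ~ N(h, 0.5^2).
# The exact marginal is x ~ N(0, 1.25), which lets us check the estimate.
def log_joint(x, h):
    log_prior = -0.5 * (h ** 2 + np.log(2 * np.pi))
    log_lik = -0.5 * ((x - h) ** 2 / 0.25 + np.log(2 * np.pi * 0.25))
    return log_prior + log_lik

def estimate_log_marginal(x, n_samples=100_000):
    """Consistent importance-sampling estimate of log p(x).

    Proposal q(h) = N(x, 1); the weights p(x, h) / q(h) average to p(x),
    so the estimate converges to the true marginal as n grows.
    """
    h = rng.normal(loc=x, scale=1.0, size=n_samples)
    log_q = -0.5 * ((h - x) ** 2 + np.log(2 * np.pi))
    log_w = log_joint(x, h) - log_q
    m = log_w.max()                      # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))

x = 0.7
exact = -0.5 * (x ** 2 / 1.25 + np.log(2 * np.pi * 1.25))
print(estimate_log_marginal(x), exact)  # the two values should be close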

    A Generative Model of Natural Texture Surrogates

    Natural images can be viewed as patchworks of different textures, where the local image statistics are roughly stationary within a small neighborhood but otherwise vary from region to region. To model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to 64×64-pixel image patches in a large database of natural images, so that each image patch is described by 655 texture parameters specifying statistics such as variances and covariances of wavelet coefficients or coefficient magnitudes within that patch. To model the statistics of these texture parameters, we then developed suitable nonlinear transformations of the parameters that allowed us to fit their joint statistics with a multivariate Gaussian distribution. We find that the first 200 principal components contain more than 99% of the variance and are sufficient to generate textures that are perceptually extremely close to those generated with all 655 components. We demonstrate the usefulness of the model in several ways: (1) we sample ensembles of texture patches that can be directly compared to patches from the natural image database and that reproduce their perceptual appearance to a high degree; (2) we further developed an image compression algorithm which generates surprisingly accurate images at bit rates as low as 0.14 bits/pixel; and (3) we demonstrate how our approach can be used for an efficient and objective evaluation of samples generated with probabilistic models of natural images.
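
    A hedged numpy sketch of the modeling pipeline described above: fit a multivariate Gaussian to already-transformed texture parameters, keep the principal components explaining 99% of the variance, and sample new parameter vectors. The `params` matrix is a random stand-in for the actual transformed Portilla-Simoncelli statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the transformed texture parameters: one 655-dimensional
# vector per image patch (random data here; in the paper these come from
# nonlinearly transformed Portilla-Simoncelli statistics).
params = rng.normal(size=(5000, 655))

# Fit a multivariate Gaussian and keep the leading principal components.
mean = params.mean(axis=0)
centered = params - mean
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var = s ** 2 / (len(params) - 1)                 # per-component variance
explained = np.cumsum(var) / var.sum()
k = int(np.searchsorted(explained, 0.99)) + 1    # components for 99% variance
print(f"{k} components explain 99% of the variance")

# Sample new texture-parameter vectors from the truncated Gaussian:
# draw independent normals in PCA space and map back to parameter space.
z = rng.normal(size=(10, k)) * np.sqrt(var[:k])
samples = mean + z @ Vt[:k]
print(samples.shape)  # (10, 655) -- each row parameterizes one new texture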

    A note on the evaluation of generative models

    Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models, with a focus on image models. In particular, we show that three of the most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the others. Our results show that extrapolation from one criterion to another is not warranted, and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.
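
    To make the Parzen window criterion concrete, here is a minimal sketch of the estimator, with the kernel width, sample sizes, and toy Gaussian "model" all chosen for illustration; comparing the estimate to the true log-likelihood at different dimensionalities hints at why the article recommends avoiding it:

```python
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(test, samples, sigma):
    """Parzen window (kernel density) estimate of test log-likelihoods.

    The density is a mixture of isotropic Gaussians centered on model
    samples -- the criterion the article argues against, since in high
    dimensions the estimate is dominated by kernel width and sample count.
    """
    d = samples.shape[1]
    sq = ((test ** 2).sum(1)[:, None] + (samples ** 2).sum(1)[None, :]
          - 2.0 * test @ samples.T)     # pairwise squared distances
    log_kernel = -0.5 * (sq / sigma ** 2 + d * np.log(2 * np.pi * sigma ** 2))
    return logsumexp(log_kernel, axis=1) - np.log(len(samples))

rng = np.random.default_rng(0)
for d in (2, 100):   # watch the estimate degrade as dimension grows
    samples = rng.normal(size=(10_000, d))   # "model" samples, here N(0, I)
    test = rng.normal(size=(500, d))
    ll = parzen_log_likelihood(test, samples, sigma=0.5).mean()
    true_ll = -0.5 * d * (np.log(2 * np.pi) + 1)   # E[log N(x; 0, I)]
    print(f"d={d}: Parzen {ll:8.1f}  vs  true {true_ll:8.1f}")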

    Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet

    Recent results suggest that state-of-the-art saliency models perform far from optimally in predicting fixations. This shortfall has been attributed to an inability to model the influence of high-level image features such as objects. Recent advances in applying deep neural networks to tasks like object recognition suggest that these networks are able to capture this kind of structure. However, the enormous amount of training data necessary to train them makes them difficult to apply directly to saliency prediction. We present a novel way of reusing existing neural networks pretrained on object recognition in models of fixation prediction. Using the well-known network of Krizhevsky et al. (2012), we obtain a new saliency model that significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark. We show that the structure of this network allows new insights into the psychophysics of fixation selection and potentially its neural implementation. To train our network, we build on recent work on modeling saliency as a point process.
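
    A minimal sketch of the readout idea, assuming precomputed feature maps (random placeholders below rather than actual activations of Krizhevsky et al.'s network): a learned linear combination of deep features is normalized over the image with a softmax, so fixations can be scored under the resulting point-process density:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for feature maps from a pretrained object-recognition network
# (in the paper, activations of Krizhevsky et al.'s network); here random.
K, H, W = 256, 32, 32
feature_maps = rng.random((K, H, W))
weights = rng.normal(scale=0.01, size=K)   # the only learned parameters

def saliency_log_density(feature_maps, weights):
    """Linear readout of deep features, normalized over the image.

    A softmax across pixels turns the saliency map into the intensity
    of a point process, so recorded fixations can be scored directly
    by their log-likelihood -- a natural training signal.
    """
    s = np.tensordot(weights, feature_maps, axes=1)   # (H, W) saliency map
    s = s - s.max()                                   # numerical stability
    return s - np.log(np.exp(s).sum())                # log p(fixation at pixel)

log_p = saliency_log_density(feature_maps, weights)
fixations = [(4, 10), (17, 23)]                # hypothetical (row, col) fixations
print(sum(log_p[r, c] for r, c in fixations))  # fixation data log-likelihood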

    Inducing an optical Feshbach resonance via stimulated Raman coupling

    We demonstrate a novel method of inducing an optical Feshbach resonance based on a coherent free-bound stimulated Raman transition. In our experiment, atoms in a 87Rb Bose-Einstein condensate are exposed to two phase-locked Raman laser beams which couple pairs of colliding atoms to a molecular ground state. By controlling the power and relative detuning of the two laser beams, we can change the atomic scattering length considerably. The dependence of the scattering length on these parameters is studied experimentally and modeled theoretically.

    Inference and Mixture Modeling with the Elliptical Gamma Distribution

    We study modeling and inference with the Elliptical Gamma Distribution (EGD). We consider maximum-likelihood (ML) estimation for EGD scatter matrices, a task for which we develop new fixed-point algorithms. Our algorithms are efficient and converge to global optima despite nonconvexity. Moreover, they turn out to be much faster than both a well-known iterative algorithm of Kent & Tyler (1991) and sophisticated manifold optimization algorithms. Subsequently, we invoke our ML algorithms as subroutines for estimating parameters of a mixture of EGDs. We illustrate our methods by applying them to model natural image statistics: the proposed EGD mixture model yields the most parsimonious model among several competing approaches.
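
    The paper's new EGD-specific updates are not reproduced here, but the classical Tyler/Kent & Tyler-style iteration it is compared against gives the flavor of such fixed-point algorithms: repeatedly reweight samples by their Mahalanobis norm under the current scatter estimate (the data and iteration counts below are illustrative):

```python
import numpy as np

def tyler_scatter(X, n_iter=100, tol=1e-8):
    """Fixed-point scatter estimation in the style of Tyler / Kent & Tyler
    (1991). Each step reweights samples by their current Mahalanobis norm,
    averages the weighted outer products, and renormalizes the scale.
    """
    n, d = X.shape
    sigma = np.eye(d)
    for _ in range(n_iter):
        inv = np.linalg.inv(sigma)
        w = d / np.einsum('ij,jk,ik->i', X, inv, X)   # per-sample weights
        new = (X * w[:, None]).T @ X / n
        new /= np.trace(new) / d                      # fix the scale
        if np.abs(new - sigma).max() < tol:
            return new
        sigma = new
    return sigma

rng = np.random.default_rng(0)
true = np.array([[2.0, 0.8], [0.8, 1.0]])
X = rng.multivariate_normal(np.zeros(2), true, size=5000)
est = tyler_scatter(X)
print(est / est[0, 0] * true[0, 0])   # matches `true` up to overall scale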

    Mixtures of conditional Gaussian scale mixtures applied to multiscale image representations

    We present a probabilistic model for natural images based on Gaussian scale mixtures and a simple multiscale representation. In contrast to the dominant Markov random field approach to modeling whole images, we formulate our model as a directed graphical model. We show that it is able to generate images with interesting higher-order correlations when trained on natural images or on samples from an occlusion-based model. More importantly, the directed formulation enables a principled evaluation: while it is easy to generate visually appealing images, we demonstrate that our model also yields the best performance reported to date when evaluated with respect to the cross-entropy rate, a measure tightly linked to the average log-likelihood.
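
    The tractability claim can be illustrated with the model's basic building block, a finite Gaussian scale mixture, whose exact log-density is cheap to evaluate; the covariance, scales, and mixture weights below are illustrative, not the paper's fitted values:

```python
import numpy as np
from scipy.special import logsumexp

def gsm_log_likelihood(X, C, scales, log_pi):
    """Exact log-density under a finite Gaussian scale mixture.

    p(x) = sum_k pi_k N(x; 0, s_k * C).  Because the density is tractable,
    models built from such components can be evaluated directly via the
    average log-likelihood (or cross-entropy rate).
    """
    d = X.shape[1]
    Cinv = np.linalg.inv(C)
    _, logdetC = np.linalg.slogdet(C)
    maha = np.einsum('ij,jk,ik->i', X, Cinv, X)       # x' C^{-1} x per sample
    comp = (log_pi[None, :]
            - 0.5 * (maha[:, None] / scales[None, :]
                     + d * np.log(scales)[None, :]
                     + logdetC + d * np.log(2 * np.pi)))
    return logsumexp(comp, axis=1)

rng = np.random.default_rng(0)
C = np.array([[1.0, 0.5], [0.5, 1.0]])
scales = np.array([0.25, 1.0, 4.0])                   # mixture of scales
log_pi = np.log(np.full(3, 1 / 3))
X = rng.multivariate_normal(np.zeros(2), C, size=1000)
print(gsm_log_likelihood(X, C, scales, log_pi).mean())  # avg log-likelihood (nats)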

    PSMA-PET/CT in Patients with Recurrent Clear Cell Renal Cell Carcinoma: Histopathological Correlations of Imaging Findings

    PET/CT with prostate-specific membrane antigen (PSMA)-targeted tracers has been used in the diagnosis and staging of patients with clear cell renal cell carcinoma (ccRCC). For ccRCC primary tumors, PET parameters have been shown to predict histologic grade and features. The aim of this study was to correlate PSMA PET/CT with histopathological findings in patients with metastatic recurrence of ccRCC. Patients with ccRCC who underwent PSMA-targeted PET/CT and subsequent histopathological evaluation of suspicious lesions were included. Specimens underwent immunohistochemical marking. Lesion diameter, volume, and tracer uptake were correlated with the extent and intensity of molecular PSMA expression and with clinical findings. Twelve PET-positive lesions from nine patients were evaluated. Eleven ccRCC metastases and one prostate carcinoma were detected histopathologically. Molecular PSMA expression was detected in all lesions, but its intensity and distribution did not correlate with PET parameters. PSMA-targeted PET/CT is a feasible tool for the evaluation of patients with ccRCC but cannot reliably predict histologic features of metastases. PSMA may also be expressed in malignant lesions other than ccRCC, leading to incidental detection of these tumors.