
    On the generation of high dynamic range images: theory and practice from a statistical perspective

    This dissertation studies the problem of high dynamic range (HDR) image generation from a statistical perspective. A thorough analysis of the camera acquisition process leads to a simplified yet realistic statistical model describing raw pixel values; the analyses and methods proposed here are based on this model. First, the theoretical performance bound of the problem is computed for the static case, where the acquisition conditions are controlled. Furthermore, a new method is proposed that, unlike previous methods, improves the reconstructed HDR image by taking into account the information carried by saturated samples. From a more practical perspective, two methods are proposed to generate HDR images in the more realistic and complex case where both objects and camera may move. The first is a multi-image, patch-based method that simultaneously estimates and denoises the HDR image. The second is a single-image approach that uses a general restoration method to generate the HDR image. This general restoration method, applicable to a wide range of problems, constitutes the last contribution of this dissertation.

    A study of the image formation model and noise characterization in SPECT imaging. Applications to denoising and epileptic foci localization

    Epilepsy is a neurological disease that spontaneously produces repeated disturbances of the brain's normal functioning. Refractory epilepsy is a type of epilepsy that cannot be controlled with medication. Such patients are unable to lead a normal life because of the high frequency of their seizures; pediatric patients in particular may suffer severe consequences for their neurodevelopment. In these cases, surgery is considered to remove the abnormal cells that cause the seizures. This technique requires a precise prior localization of the brain region where the seizures originate. SPECT images of brain activity, during and between seizures, are obtained using radiotracers that accumulate and remain fixed in proportion to the local cerebral blood flow at the moment of their administration. The most widely used technique for detecting epileptogenic foci is to threshold the difference of these images, after co-registration and normalization. This method has proved very useful, but it presents some drawbacks: the results depend strongly on the chosen threshold, and it produces a high number of false detections. Moreover, the choice of the threshold has no solid statistical basis. This thesis presents a mathematical model of SPECT image formation and a statistical characterization of the resulting images. The statistical model and the hypotheses made are validated by means of non-parametric statistical tests. The model is then applied to the localization of epileptogenic foci, using a method based on a-contrario theory, and to the improvement of SPECT image quality through denoising. Both techniques, the proposed detection method and the denoising method, are evaluated on phantoms and real cases and validated by an expert physician with deep knowledge of the patients' clinical histories. The results are promising: the localization of epileptogenic foci outperforms the classical thresholding technique, and the denoising method appears to globally improve the quality of SPECT images.
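The a-contrario decision rule mentioned in this abstract can be illustrated generically. The sketch below is a minimal version of the principle only, not the thesis's actual detection functional: a pixel of the (co-registered, normalized) difference image with z-score z is declared meaningful when its expected number of false alarms under a standard normal background model falls below a threshold epsilon (epsilon = 1 is the usual choice).

```python
import math

def nfa(z, n_tests):
    """Number of False Alarms for a z-score z under a standard normal
    background model: n_tests * P(Z >= z). A detection is declared
    'epsilon-meaningful' when NFA < epsilon (typically epsilon = 1)."""
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # survival function of N(0, 1)
    return n_tests * tail
```

For example, a z-score of 3 in a 64x64 difference image gives an NFA of about 5.5, so it would not be declared meaningful at epsilon = 1; unlike a fixed threshold on the difference image, the decision automatically accounts for the number of tests performed.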

    A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging

    Recently, impressive denoising results have been achieved by Bayesian approaches that assume Gaussian models for the image patches. This improvement in performance can be attributed to the use of per-patch models. Unfortunately, such an approach is particularly unstable for most inverse problems beyond denoising. In this work, we propose the use of a hyperprior to model image patches, in order to stabilize the estimation procedure. There are two main advantages to the proposed restoration scheme. First, it is adapted to diagonal degradation matrices, and in particular to missing-data problems (e.g. inpainting of missing pixels or zooming). Second, it can deal with signal-dependent noise models, which are particularly suited to digital cameras. As such, the scheme is especially adapted to computational photography. To illustrate this point, we provide an application to high dynamic range imaging from a single image taken with a modified sensor, which shows the effectiveness of the proposed scheme.
    Comment: Some figures are reduced to comply with arXiv's size constraints. Full-size images are available as HAL technical report hal-01107519v5. IEEE Transactions on Computational Imaging, 201
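As a rough sketch of the kind of restoration the abstract describes, assuming a single fixed Gaussian patch prior N(mu, Sigma) (the paper's hyperprior machinery is omitted), a diagonal degradation matrix A, and per-pixel noise variances, the MAP patch estimate has a closed Wiener-like form:

```python
import numpy as np

def map_patch_estimate(y, A_diag, noise_var, mu, Sigma):
    """MAP estimate of a patch x under a Gaussian prior N(mu, Sigma),
    observed as y = A x + n, with A diagonal (e.g. a 0/1 mask for
    missing pixels) and per-pixel noise variances noise_var."""
    A = np.diag(A_diag)
    S = np.diag(noise_var)                                    # noise covariance
    gain = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T + S)   # Wiener-like gain
    return mu + gain @ (y - A @ mu)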

    Sur la génération d'images à grande gamme dynamique. Théorie et pratique : une perspective statistique

    This dissertation studies the problem of high dynamic range (HDR) image generation from a statistical perspective. A thorough analysis of the camera acquisition process leads to a simplified yet realistic statistical model describing raw pixel values; the analyses and methods proposed here are based on this model. The estimation of irradiance is posed as a statistical estimation problem, and its performance bound is computed for the static case, where the acquisition conditions are controlled. The performance of classical irradiance estimators is compared against this bound, and the results justify the introduction of a new estimator that, unlike the methods in the literature, takes saturated samples into account. From a more practical perspective, two methods are proposed to generate HDR images in the more realistic and complex case of dynamic scenes. The first is a multi-image, patch-based method that simultaneously estimates and denoises the HDR image. The second relies on a single acquisition and uses a generic patch-based method for solving inverse problems to generate the HDR image. This general restoration method, applicable to a wide range of problems, constitutes the last contribution of this thesis.

    Single Shot High Dynamic Range Imaging Using Piecewise Linear Estimators

    Building high dynamic range (HDR) images by combining photographs captured with different exposure times presents several drawbacks, such as the need for global alignment and motion estimation in order to avoid ghosting artifacts. The concept of spatially varying pixel exposures (SVE) proposed by Nayar et al. makes it possible to capture a very large range of exposures in a single shot while avoiding these limitations. In this paper, we propose a novel approach to generate HDR images from a single shot acquired with spatially varying pixel exposures. The proposed method relies on the assumption that the distribution of patches in an image is well represented by a Gaussian Mixture Model. Drawing on a precise modeling of the camera acquisition noise, we extend the piecewise linear estimation strategy developed by Yu et al. for image restoration. The proposed method reconstructs an irradiance image by simultaneously estimating saturated and under-exposed pixels and denoising existing ones, showing significant improvements over existing approaches.
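A toy sketch of the SVE idea follows: a tiled pattern of per-pixel exposure factors captures several exposures in one shot, saturated samples are flagged, and the remaining samples are inverted to irradiance. All numbers are illustrative, and the paper's GMM-based joint reconstruction of saturated and under-exposed pixels is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical irradiance map and a 2x2 tile of exposure factors (SVE pattern).
irradiance = rng.uniform(0.1, 4.0, size=(8, 8))
exposures = np.tile(np.array([[1.0, 0.25], [4.0, 0.0625]]), (4, 4))

raw = irradiance * exposures             # noiseless sensor response
saturation = 2.0                         # illustrative well capacity
observed = np.minimum(raw, saturation)   # bright pixels clip at saturation
valid = raw < saturation                 # usable (non-saturated) samples

# Naive per-pixel inversion; saturated pixels are left for the patch model.
estimate = np.full_like(irradiance, np.nan)
estimate[valid] = observed[valid] / exposures[valid]
```

In this noiseless toy, every non-saturated sample recovers the irradiance exactly; the contribution of the paper is precisely the reconstruction of the remaining (saturated or noisy) samples via the Gaussian mixture patch model.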

    Best algorithms for HDR image generation. A study of performance bounds

    Since the seminal work of Mann and Picard in 1995, the standard way to build high dynamic range (HDR) images from regular cameras has been to combine a reduced number of photographs captured with different exposure times. The algorithms proposed in the literature differ in the strategy used to combine these frames. Several experimental studies comparing their performance have been reported, showing in particular that maximum likelihood estimation yields the best results in terms of mean squared error. However, no theoretical study aiming at establishing the performance limits of the HDR estimation problem has been conducted. Another aspect common to all HDR estimation approaches is that they discard saturated values. In this paper, we address these two issues. More precisely, we derive theoretical bounds for the performance of unbiased estimators for the HDR estimation problem. The unbiasedness hypothesis is motivated by the fact that most of the existing estimators, among them the best performing and best known, are nearly unbiased. Moreover, we show that, even with a small number of photographs, the maximum likelihood estimator performs extremely close to these bounds. As a second contribution, we propose a general strategy for integrating the information provided by saturated pixels into the estimation process, hence improving the estimation results. Finally, we analyze the sensitivity of the HDR estimation process to camera parameters, and we show that small errors in the camera calibration process may severely degrade the estimation result.
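Under a pure Poisson photon-counting model y_i ~ Poisson(t_i * x), which is a simplification of the full camera model discussed in this line of work, the maximum likelihood combination of exposures has a simple closed form, x_hat = (sum of counts) / (sum of exposure times), with saturated samples discarded as the classical estimators do:

```python
import numpy as np

def ml_irradiance(counts, exposure_times, saturation):
    """ML irradiance estimate from counts y_i ~ Poisson(t_i * x).
    Setting d/dx of the log-likelihood to zero gives
    x_hat = sum(y_i) / sum(t_i) over non-saturated frames."""
    counts = np.asarray(counts, dtype=float)
    t = np.asarray(exposure_times, dtype=float)
    keep = counts < saturation        # discard clipped samples
    return counts[keep].sum() / t[keep].sum()
```

For noiseless counts exactly proportional to the exposure times, the estimator returns the true irradiance, and a clipped long exposure is simply excluded from both sums.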

    Similarity search in the blink of an eye with compressed indices

    Nowadays, data is represented by vectors. Retrieving the vectors, among millions or billions, that are similar to a given query is a ubiquitous problem relevant to a wide range of applications. In this work, we present new techniques for creating faster and smaller indices to run these searches. To this end, we introduce a novel vector compression method, Locally-adaptive Vector Quantization (LVQ), that simultaneously reduces memory footprint and improves search performance, with minimal impact on search accuracy. LVQ is designed to work optimally in conjunction with graph-based indices, reducing their effective bandwidth while enabling random-access-friendly fast similarity computations. Our experimental results show that LVQ, combined with key optimizations for graph-based indices in modern datacenter systems, establishes the new state of the art in terms of performance and memory footprint. For billions of vectors, LVQ outcompetes the second-best alternatives: (1) in the low-memory regime, by up to 20.7x in throughput with up to a 3x reduction in memory footprint, and (2) in the high-throughput regime, by 5.8x with 1.4x less memory.
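The core idea of locally adaptive quantization can be sketched as per-vector scalar quantization: each vector is quantized with its own min/max range, so only the byte codes plus two scalars per vector need to be stored. This is a minimal illustration of the principle, with illustrative names; the actual LVQ design (including its two-level residual scheme and graph-index integration) is more involved.

```python
import numpy as np

def lvq_encode(x, bits=8):
    """Quantize one vector with its own scalar range (locally adaptive):
    store uint8 codes plus the per-vector offset and step."""
    levels = 2 ** bits - 1
    lo, hi = float(x.min()), float(x.max())
    step = max((hi - lo) / levels, 1e-12)   # guard against constant vectors
    codes = np.round((x - lo) / step).astype(np.uint8)
    return codes, lo, step

def lvq_decode(codes, lo, step):
    """Reconstruct an approximate vector from its codes and range."""
    return lo + codes.astype(np.float32) * step
```

Because the range adapts to each vector, the worst-case reconstruction error is bounded by half a quantization step of that vector's own range, which is what keeps similarity computations accurate at low bit rates.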

    Beyond Failure: The 2nd LAK Failathon Poster

    This poster will be a chance for a wider LAK audience to engage with the 2nd LAK Failathon workshop. Both will build on the successful Failathon event in 2016 and extend beyond discussing individual experiences of failure to exploring how the field can improve, particularly regarding the creation and use of evidence. Failure in research is an increasingly hot topic, with high-profile crises of confidence in the published research literature in medicine and psychology. Among the major factors in this research crisis are the many incentives to report and publish only positive findings. These incentives prevent the field in general from learning from negative findings, and almost entirely preclude the publication of mistakes and errors. Providing an alternative forum for practitioners and researchers to learn from each other's failures can therefore be very productive. The first LAK Failathon, held in 2016, provided just such an opportunity for researchers and practitioners to share their failures and negative findings in a lower-stakes environment, helping participants learn from each other's mistakes. It was very successful, and there was strong support for running it as an annual event. The 2nd LAK Failathon workshop will build on that success, with the twin objectives of providing an environment for individuals to learn from each other's failures and co-developing plans for how we as a field can better build and deploy our evidence base. This poster is an opportunity for wider feedback on the plans developed in the workshop, with interactive use of sticky notes to add new ideas and coloured dots to indicate prioritisation. This broadens the participant base in this important work, which should improve both the quality of the plans and the commitment of the community to delivering them.

    Probabilistic Fluorescence-Based Synapse Detection

    Brain function results from communication between neurons connected by complex synaptic networks. Synapses are themselves highly complex and diverse signaling machines, containing protein products of hundreds of different genes, some in hundreds of copies, arranged in a precise lattice at each individual synapse. Synapses are fundamental not only to synaptic network function but also to network development, adaptation, and memory. In addition, abnormalities of synapse numbers or molecular components are implicated in most mental and neurological disorders. Despite their obvious importance, mammalian synapse populations have so far resisted detailed quantitative study. In human brains and most animal nervous systems, synapses are very small and very densely packed: there are approximately 1 billion synapses per cubic millimeter of human cortex. This volumetric density poses very substantial challenges to proteometric analysis at the critical level of the individual synapse. The present work describes new probabilistic image analysis methods for single-synapse analysis of synapse populations in both animal and human brains.
    Comment: Currently awaiting peer review