
    Progressive Transient Photon Beams

    In this work, we introduce a novel algorithm for transient rendering in participating media. Our method is consistent, robust, and able to generate animations of time-resolved light transport featuring complex caustic light paths in media. We base our method on the observation that the spatial continuity of photon beams provides increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend steady-state photon beam radiance estimates to include the temporal domain. We then develop a progressive variant of our approach that provably converges to the correct solution using finite memory, by averaging independent realizations of the estimates with progressively reduced kernel bandwidths. We derive the optimal convergence rates accounting for both space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media.
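The progressive variant described above can be sketched compactly: each pass contributes an independent estimate, the kernel radius shrinks following the usual progressive photon-mapping recurrence, and only a running mean is stored. This is a minimal illustrative sketch, not the paper's implementation; the schedule parameter `alpha` and all names are assumptions:

```python
# Sketch of progressive bandwidth reduction and finite-memory averaging
# (illustrative; radius schedule follows r_{i+1}^2 = r_i^2 * (i+alpha)/(i+1)).
def progressive_radii(r0, alpha, n_passes):
    """Return the shrinking kernel-radius schedule for n_passes passes."""
    radii = [r0]
    r2 = r0 * r0
    for i in range(1, n_passes):
        r2 *= (i + alpha) / (i + 1)  # shrink factor < 1 for alpha in (0, 1)
        radii.append(r2 ** 0.5)
    return radii

def progressive_average(estimates):
    """Average independent per-pass estimates with finite memory."""
    mean = 0.0
    for i, e in enumerate(estimates, start=1):
        mean += (e - mean) / i  # running mean: only one scalar is stored
    return mean
```

With `alpha` in (0, 1) the radius tends to zero while the averaged variance still vanishes, which is what makes this kind of estimator consistent under finite memory.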

    Photon Parameterisation for Robust Relaxation Constraints

    This paper presents a novel approach to detecting and preserving fine illumination structure within photon maps. Data derived from each photon's primal trajectory is encoded and used to build a high-dimensional kd-tree. Incorporating these new parameters allows precise differentiation between intersecting ray envelopes, thus minimizing detail degradation when combined with photon relaxation. We demonstrate how parameter-aware querying is beneficial in both detecting and removing noise. We also propose a more robust structure descriptor based on principal component analysis that better identifies anisotropic detail at the sub-kernel level. We illustrate the effectiveness of our approach in several example scenes and show significant improvements when rendering complex caustics compared to previous methods.
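As a rough illustration of parameter-aware querying: each photon carries trajectory-derived parameters (here just the incident direction) alongside its position, and neighbours are found in that augmented space. This brute-force sketch stands in for the paper's high-dimensional kd-tree; the direction weight and the choice of parameters are assumptions:

```python
import math

# Illustrative parameter-aware photon lookup (2-D positions for brevity).
def augmented_key(photon, w_dir=0.5):
    """Concatenate position with weighted trajectory parameters."""
    (x, y), (dx, dy) = photon["pos"], photon["dir"]
    return (x, y, w_dir * dx, w_dir * dy)

def parameter_aware_knn(photons, query, k=1, w_dir=0.5):
    """Nearest neighbours in the augmented (position + direction) space."""
    q = augmented_key(query, w_dir)
    return sorted(photons, key=lambda p: math.dist(augmented_key(p, w_dir), q))[:k]
```

Two photons at the same position but with different incident directions are now far apart in the augmented space, which is what lets relaxation avoid blurring detail across intersecting ray envelopes.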

    Efficient Storage and Importance Sampling for Fluorescent Reflectance

    We propose a technique for efficient storage and importance sampling of fluorescent spectral data. Fluorescence is fully described by a re-radiation matrix, which for a given input wavelength indicates how much energy is re-emitted at other wavelengths. However, such a representation has a considerable memory footprint. To significantly reduce memory requirements, we propose using Gaussian mixture models to represent re-radiation matrices. Instead of the full-resolution matrix, we work with a set of Gaussian parameters that also allow direct importance sampling. Furthermore, if accuracy is of concern, a re-radiation matrix can be used jointly with the efficient importance sampling provided by the Gaussian mixture. In this paper, we present our pipeline for efficient storage of bispectral data and provide an extensive evaluation on a large set of bispectral measurements. We show that our method is robust and colour-accurate even with its comparably minor memory requirements, and that it can be seamlessly integrated into a standard Monte Carlo path tracer.
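A minimal sketch of drawing a re-emission wavelength from such a Gaussian mixture: pick a component by its weight, then sample that Gaussian; the same mixture serves as the importance-sampling pdf. The mixture parameters below are invented for illustration, and the paper's fitting pipeline is not shown:

```python
import math
import random

# Sketch: one Gaussian mixture stands in for one input-wavelength slice
# of a re-radiation matrix.
def sample_emission(mixture, rng):
    """mixture: list of (weight, mean_nm, std_nm) with weights summing to 1."""
    u, acc = rng.random(), 0.0
    for w, mu, sigma in mixture:
        acc += w
        if u <= acc:
            return rng.gauss(mu, sigma)
    return rng.gauss(mixture[-1][1], mixture[-1][2])  # guard against round-off

def mixture_pdf(mixture, x):
    """Mixture density, usable directly as the importance-sampling pdf."""
    return sum(w * math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
               for w, mu, s in mixture)
```

Storing a handful of `(weight, mean, std)` triples per input wavelength replaces a full matrix row, which is where the memory saving comes from.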

    Rapid Segmentation Techniques for Cardiac and Neuroimage Analysis

    Recent technological advances in medical imaging have allowed for the quick acquisition of highly resolved data to aid in the diagnosis and characterization of diseases or to guide interventions. In order to be integrated into a clinical workflow, accurate and robust methods of analysis must be developed to manage this increase in data. Recent improvements in inexpensive, commercially available graphics hardware and General-Purpose Programming on Graphics Processing Units (GPGPU) have allowed many large-scale data analysis problems to be addressed in meaningful time, and will continue to do so as parallel computing technology improves. In this thesis we propose methods to tackle two clinically relevant image segmentation problems: a user-guided segmentation of myocardial scar from Late-Enhancement Magnetic Resonance Images (LE-MRI) and a multi-atlas segmentation pipeline to automatically segment and partition brain tissue from multi-channel MRI. Both methods are based on recent advances in computer vision, in particular max-flow optimization, which solves the segmentation problem in continuous space. This allows (approximately) globally optimal solvers to be employed in multi-region segmentation problems without the particular drawbacks of their discrete counterparts, graph cuts, which typically present metrication artefacts. Max-flow solvers generally produce robust results but are known to be computationally expensive, especially with large datasets such as volume images. Additionally, we propose two new deformable registration methods based on Gauss-Newton optimization, smoothing the resulting deformation fields via total-variation regularization to guarantee that the problem is mathematically well-posed.
We compare the performance of these two methods against four highly ranked and well-known deformable registration methods on four publicly available databases and demonstrate highly accurate performance with low run times. The best-performing variant is subsequently used in a multi-atlas segmentation pipeline for the segmentation of brain tissue, facilitating fast run times for this computationally expensive approach. All proposed methods are implemented using GPGPU for a substantial increase in computational performance, facilitating deployment into clinical workflows. We evaluate all proposed algorithms in terms of run times, accuracy, repeatability and errors arising from user interactions, and demonstrate that they outperform established methods in accuracy and repeatability while largely reducing run times through the use of GPU hardware.
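As a toy illustration of the Gauss-Newton idea used for registration: in one dimension, with a pure translation instead of a dense deformation field, each iteration solves the linearised least-squares problem for the shift. This is a sketch under strong simplifying assumptions; the thesis methods estimate dense deformations and add total-variation regularization, neither of which is shown, and all names are invented:

```python
# Toy 1-D Gauss-Newton registration: minimise sum_x (M(x + t) - F(x))^2
# over the shift t, given the fixed image F, moving image M and its
# derivative dM (all as callables).
def gauss_newton_shift(F, M, dM, xs, t=0.0, iters=60, damping=0.5):
    for _ in range(iters):
        J = [dM(x + t) for x in xs]        # Jacobian entries d/dt M(x + t)
        r = [M(x + t) - F(x) for x in xs]  # residuals at the current shift
        num = sum(j * ri for j, ri in zip(J, r))
        den = sum(j * j for j in J) + 1e-12
        t -= damping * num / den           # damped Gauss-Newton update
    return t
```

The damping factor trades convergence speed for robustness far from the optimum; a full method would update one displacement per voxel and then smooth the field.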

    Computational Light Transport for Forward and Inverse Problems

    Computational light transport comprises all the techniques used to compute the flow of light in a virtual scene. Its use is ubiquitous across applications, from entertainment and advertising to product design, engineering and architecture, including the generation of validated data for computational imaging techniques. However, accurately simulating light transport is a costly process, so a balance must be struck between the fidelity of the physical simulation and its computational cost. For example, it is common to assume geometric optics or an infinite speed of light, or to simplify reflectance models by ignoring certain phenomena. In this thesis we introduce several contributions to light transport simulation, aimed both at improving the efficiency of its computation and at expanding the range of its practical applications. We pay particular attention to removing the assumption of an infinite speed of light, generalizing light transport to its transient state. Regarding efficiency, we present a method for computing the light arriving directly from luminaires in a Monte Carlo image-synthesis system, significantly reducing the variance of the resulting images for the same execution time. We also introduce a density-estimation technique in the transient state that allows temporal samples to be reused more effectively in participating media. On the applications side, we introduce two new uses of light transport: a model for simulating a special class of goniochromatic pigments that exhibit pearlescent appearance, aimed at providing an intuitive editing workflow for manufacturing, and a non-line-of-sight imaging technique that uses time-of-flight information about the light, built on a wave-based model of light propagation.
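Removing the infinite-speed-of-light assumption changes the bookkeeping of a renderer: each Monte Carlo path contribution must land in the temporal bin given by its time of flight. A minimal sketch of that accumulation step, under a deliberately simplified setup with invented names (not the thesis code):

```python
C = 299_792_458.0  # speed of light, m/s

def accumulate_transient(samples, t0, dt, n_bins):
    """samples: iterable of (optical_path_length_m, radiance) pairs.
    Each contribution is binned by its time of flight = path length / c."""
    hist = [0.0] * n_bins
    for length, value in samples:
        b = int((length / C - t0) / dt)
        if 0 <= b < n_bins:
            hist[b] += value
    return hist
```

Transient density-estimation techniques improve on this by letting one sampled path contribute across a range of temporal bins instead of a single one, which is where the better reuse of temporal samples comes from.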

    Computerized tools: a substitute or a supplement when diagnosing Alzheimer's disease?

    Alzheimer's disease (AD) is the most common form of dementia in the elderly, characterized by difficulties in memory, disturbances in language, changes in behavior, and impairments in daily life activities. By the time cognitive impairment manifests, substantial synaptic and neuronal degeneration has already occurred. Patients therefore need to be diagnosed as early as possible, at a preclinical or presymptomatic stage; this will become important once disease-modifying treatments exist. The main focus of this thesis is the study of structural neuroimaging in AD and in prodromal stages of the disease. We emphasize the use of statistical learning for the analysis of structural neuroimaging data to achieve individual prediction of disease status and of conversion from prodromal stages. The main aims of the thesis were to develop and validate computerized tools that identify patterns of atrophy with the potential of becoming markers of AD pathology using structural magnetic resonance imaging (sMRI) data, and to develop a segmentation tool for Computed Tomography (CT). Using automated neuroanatomical software, we measured multiple brain structures, which were fed to statistical learning techniques to create discriminative models for predicting the presence of disease and conversion from prodromal stages. Building statistical models based on sMRI data, we investigated optimal normalization strategies for combining structural measures such as cortical thickness and cortical and subcortical volumes (Study I). A baseline model was created based on the optimal normalization strategy and combination of structural measures. This model was used to compare the discrimination ability of different statistical learning algorithms (decision trees, artificial neural networks, support vector machines and orthogonal partial least squares (OPLS)).
Additionally, age, years of education and APOE phenotype were added to the baseline model to assess their impact on discrimination ability (Study II). The OPLS classification algorithm was trained on the baseline model to produce a structural index reflecting AD-like patterns of atrophy in each individual's sMRI data. Additional longitudinal information at one-year follow-up was used to characterize the temporal evolution of the derived index (Study III). Since total intracranial volume (ICV) remains a morphological measure of interest and CT is today widely used in routine clinical investigations, we developed and validated an automated segmentation algorithm to estimate ICV from CT scans (Study IV). We believe computerized tools (automated neuroimaging software and statistical discriminative algorithms) have significantly enriched our knowledge and understanding of the associated neurodegenerative pathology, its effects on cognition, and its interaction with age. These tools were developed mainly for research purposes, but we believe the accumulated knowledge and insights could be translated into clinical settings; that, however, is a challenge that remains open for future studies.
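The normalize-then-project pattern behind such a structural index can be sketched in a few lines. Here a plain difference-of-class-means axis stands in for OPLS, and all numbers and names are illustrative assumptions, not the thesis pipeline:

```python
# Sketch: ICV-normalise regional volumes, then project each subject onto
# a discriminant axis to obtain a scalar "AD-likeness" index.
def normalize_by_icv(volumes, icv):
    """Divide each regional volume by intracranial volume."""
    return [v / icv for v in volumes]

def discriminant_axis(ad_group, ctl_group):
    """Difference of per-feature class means (a crude stand-in for OPLS)."""
    n_feat = len(ad_group[0])
    mean = lambda grp, j: sum(row[j] for row in grp) / len(grp)
    return [mean(ad_group, j) - mean(ctl_group, j) for j in range(n_feat)]

def structural_index(subject, axis):
    """Projection of one subject's normalised measures onto the axis."""
    return sum(a * b for a, b in zip(subject, axis))
```

A real pipeline would also cross-validate the axis and calibrate the index before interpreting longitudinal change.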

    Quantitation in MRI : application to ageing and epilepsy

    Multi-atlas propagation and label fusion techniques have recently been developed for segmenting the human brain into multiple anatomical regions. In this thesis, I investigate possible adaptations of these current state-of-the-art methods. The aim is to study ageing on the one hand, and on the other temporal lobe epilepsy (TLE) as an example of a neurological disease. Global effects are a confounding factor in such anatomical analyses. Intracranial volume (ICV) is often preferred for normalizing global effects, as it normalizes for estimated maximum brain size and is hence independent of global brain volume loss, as seen in ageing and disease. I describe systematic differences in ICV measures obtained at 1.5T versus 3T, and present an automated method of measuring intracranial volume, Reverse MNI Brain Masking (RBM), based on tissue probability maps in MNI standard space. I show that it is comparable to manual measurements and robust against field strength differences. Correct and robust segmentation of target brains that show gross abnormalities, such as ventriculomegaly, is important for the study of ageing and disease. We achieved this by incorporating tissue classification information into the image registration process. The best results in elderly subjects, patients with TLE and healthy controls were achieved with a new approach using multi-atlas propagation with enhanced registration (MAPER). I then applied MAPER to the problem of automatically distinguishing patients with TLE with (TLE-HA) and without (TLE-N) hippocampal atrophy on MRI from controls, and of determining the side of seizure onset. MAPER-derived structural volumes were used for a classification step consisting of selecting a set of discriminatory structures and applying a support vector machine to the structural volumes as well as to morphological similarity information, such as volume differences obtained with spectral analysis.
Accuracies were 91-100%, indicating that the method might be clinically useful. Finally, I used the methods developed in the previous chapters to investigate brain regional volume changes across the human lifespan in over 500 healthy subjects between 20 and 90 years of age, using the IXI database with data from three different scanners (two 1.5T, one 3T). We were able to confirm several known changes, indicating the veracity of the method. In addition, we describe the first multi-region, whole-brain database of normal ageing.
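The fusion step of a multi-atlas pipeline can be sketched as a per-voxel majority vote over the propagated atlas labels. MAPER's contribution is the enhanced registration that produces those propagated labels, which is not shown here; this is the plain vote only:

```python
from collections import Counter

# Per-voxel majority vote across atlases: propagated[a][v] is the label
# that atlas a assigns to voxel v after registration to the target image.
def fuse_labels(propagated):
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*propagated)]
```

More sophisticated fusion schemes weight each atlas's vote by local image similarity instead of counting all atlases equally.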