    Generalized Wishart processes for interpolation over diffusion tensor fields

    Diffusion Magnetic Resonance Imaging (dMRI) is a non-invasive tool for examining the microstructure of fibrous nerve and muscle tissue. From dMRI it is possible to estimate rank-2 diffusion tensor imaging (DTI) fields, which are widely used in clinical applications such as tissue segmentation, fiber tractography, brain atlas construction, and brain conductivity modeling. Due to hardware limitations of MRI scanners, DTI acquisition faces a difficult compromise between spatial resolution and signal-to-noise ratio (SNR), so data are often acquired at very low resolution. Interpolation offers an attractive software route to enhancing DTI resolution. The aim of this work is to develop a methodology for DTI interpolation that enhances the spatial resolution of DTI fields. We assume that a DTI field follows a recently introduced stochastic process known as a generalized Wishart process (GWP), which we use as a prior over the diffusion tensor field. For posterior inference, we use Markov chain Monte Carlo methods. We perform experiments on synthetic and real data; across several validation protocols, the GWP outperforms other methods in the literature.
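    The abstract does not reproduce the model, but the usual construction of a generalized Wishart process (following Wilson and Ghahramani's formulation) can be sketched briefly: a field of positive-definite matrices is built from outer products of independent Gaussian-process draws, so positive definiteness holds at every location by construction. The minimal Python sketch below samples such a prior on a 1D grid; the squared-exponential kernel, lengthscale, and identity scale matrix L are illustrative choices, not the paper's settings.

        import numpy as np

        def sample_gwp(xs, nu=3, d=3, lengthscale=1.0, seed=0):
            """Draw one sample path of a generalized Wishart process on xs.

            Sigma(x) = sum_i L u_i(x) u_i(x)^T L^T, where the entries of each
            u_i are independent zero-mean GP draws, so every Sigma(x) is
            positive semi-definite (and positive definite a.s. when nu >= d).
            """
            rng = np.random.default_rng(seed)
            n = len(xs)
            # squared-exponential kernel over the spatial locations
            K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2 / lengthscale ** 2)
            C = np.linalg.cholesky(K + 1e-8 * np.eye(n))
            # nu * d independent GP draws, one scalar function per (i, component)
            U = (C @ rng.standard_normal((n, nu * d))).reshape(n, nu, d)
            L = np.eye(d)  # scale matrix; identity for illustration
            V = U @ L.T
            # Sigma(x_j) = sum_i v_ij v_ij^T, one d x d tensor per grid point
            return np.einsum('nia,nib->nab', V, V)

        field = sample_gwp(np.linspace(0.0, 5.0, 50))
        print(np.linalg.eigvalsh(field[0]))  # eigenvalues are non-negative

    Interpolation then amounts to conditioning the underlying Gaussian processes on the observed tensors, which the paper carries out with MCMC.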

    Anisotropy Across Fields and Scales

    This open access book focuses on the processing, modeling, and visualization of anisotropy information, which is often addressed by employing sophisticated mathematical constructs such as tensors and other higher-order descriptors. It also discusses adaptations of such constructs to problems encountered in seemingly dissimilar areas of medical imaging, physical sciences, and engineering. Featuring original research contributions as well as insightful reviews for scientists interested in handling anisotropy information, it covers topics such as pertinent geometric and algebraic properties of tensors and tensor fields, challenges faced in processing and visualizing different types of data, statistical techniques for data processing, and specific applications like mapping white-matter fiber tracts in the brain. The book helps readers grasp the current challenges in the field and provides information on the techniques devised to address them. Further, it facilitates the transfer of knowledge between different disciplines in order to advance the research frontiers in these areas. This multidisciplinary book presents, in part, the outcomes of the seventh in a series of Dagstuhl seminars devoted to the visualization and processing of tensor fields and higher-order descriptors, held in Dagstuhl, Germany, on October 28–November 2, 2018.

    Probabilistic modeling of tensorial data for enhancing spatial resolution in magnetic resonance imaging.

    Medical imaging uses the principles of magnetic resonance (MRI) to non-invasively measure the properties of the diffusive motion of water molecules. Applied to the human brain, it provides unique information about tissue connectivity, which makes MRI one of the key technologies in a large-scale, ongoing scientific effort to map the human brain connectome. Consequently, building mathematical models that infer biologically meaningful parameters from such data is a timely and important research topic. MRI and diffusion MRI (dMRI) have been used in applications spanning signal processing, computer vision, and the neurosciences. Although current clinical protocols allow fast acquisition of a number of slices in several planes, the spatial resolution is in many cases not high enough for clinical diagnosis. The main problem arises from the hardware limitations of the acquisition scanners: MRI and dMRI face a difficult compromise between good spatial resolution and signal-to-noise ratio (SNR), which leads to acquisitions with low spatial resolution. This is a serious problem for clinical analysis for two main reasons. First, low spatial resolution in visual data reduces the quality of important medical procedures such as disease diagnosis, segmentation (of tissue, nerves, and bone), anatomical atlas construction, detailed fiber reconstruction (tractography), and brain conductivity modeling. Second, obtaining high-resolution images requires long acquisition times, yet current clinical protocols do not allow such prolonged exposure (MRI and dMRI) for human subjects.
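    The thesis's probabilistic model is not detailed in the abstract, but the deterministic baseline for enhancing tensor-field resolution is standard and easy to illustrate. A common choice is log-Euclidean interpolation (Arsigny et al.), which interpolates matrix logarithms linearly and thereby keeps every interpolated tensor symmetric positive-definite, avoiding the swelling artefact of component-wise linear interpolation. A minimal sketch, not the thesis's method:

        import numpy as np

        def spd_log(D):
            # matrix logarithm via eigendecomposition (valid for SPD matrices)
            w, V = np.linalg.eigh(D)
            return V @ np.diag(np.log(w)) @ V.T

        def spd_exp(S):
            # matrix exponential of a symmetric matrix
            w, V = np.linalg.eigh(S)
            return V @ np.diag(np.exp(w)) @ V.T

        def log_euclidean_interp(D0, D1, t):
            # linear interpolation in log-space stays on the SPD manifold
            return spd_exp((1.0 - t) * spd_log(D0) + t * spd_log(D1))

        D0 = np.diag([2.0, 0.5, 0.5])          # tensor elongated along x
        D1 = np.diag([0.5, 2.0, 0.5])          # tensor elongated along y
        D_half = log_euclidean_interp(D0, D1, 0.5)
        print(np.linalg.eigvalsh(D_half))      # all eigenvalues remain positive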

    Proceedings of the First International Workshop on Mathematical Foundations of Computational Anatomy (MFCA'06) - Geometrical and Statistical Methods for Modelling Biological Shape Variability

    Non-linear registration and shape analysis are well-developed research topics in the medical image analysis community. There is nowadays a growing number of methods that can faithfully deal with the underlying biomechanical behaviour of intra-subject shape deformations. However, relating the anatomical shapes of different subjects is more difficult. The goal of computational anatomy is to analyse and statistically model this specific type of geometrical information. In the absence of a justified physical model, a natural approach is to explore very general mathematical methods, for instance diffeomorphisms. However, working with such infinite-dimensional spaces raises deep computational and mathematical problems; in particular, one of the key problems is how to do statistics on them. Likewise, modelling the variability of surfaces requires shape spaces that are much more complex than those for curves. To cope with these challenges, different methodological and computational frameworks have been proposed. The goal of the workshop was to foster interactions between researchers investigating the combination of geometry and statistics for modelling biological shape variability from images and surfaces. A special emphasis was put on theoretical developments; applications and results were welcomed as illustrations. Contributions were solicited in the following areas:
    * Riemannian and group-theoretical methods on non-linear transformation spaces
    * Advanced statistics on deformations and shapes
    * Metrics for computational anatomy
    * Geometry and statistics of surfaces
    26 submissions of very high quality were received, each reviewed by two members of the program committee. 12 papers were selected for oral presentation and 8 for poster presentation. 16 of these papers are published in these proceedings, and 4 in the proceedings of MICCAI'06 (for copyright reasons, only extended abstracts are provided here).
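    The "key problem of doing statistics" on a non-linear space can be made concrete with the simplest such statistic: the Fréchet (Karcher) mean, which replaces the arithmetic mean by the minimizer of the sum of squared geodesic distances. The sketch below computes it on the unit sphere, the most elementary curved space, by gradient descent using the sphere's exponential and logarithm maps; it is purely illustrative and not taken from any paper in the proceedings.

        import numpy as np

        def sphere_log(p, q):
            # log map: tangent vector at p pointing along the geodesic to q
            d = np.clip(p @ q, -1.0, 1.0)
            theta = np.arccos(d)
            if theta < 1e-12:
                return np.zeros_like(p)
            return theta * (q - d * p) / np.sin(theta)

        def sphere_exp(p, v):
            # exp map: walk from p along tangent vector v at unit speed
            n = np.linalg.norm(v)
            if n < 1e-12:
                return p
            return np.cos(n) * p + np.sin(n) * (v / n)

        def frechet_mean(points, iters=50, step=1.0):
            # gradient descent on the sum of squared geodesic distances
            pts = [q / np.linalg.norm(q) for q in points]
            mu = pts[0]
            for _ in range(iters):
                grad = np.mean([sphere_log(mu, q) for q in pts], axis=0)
                mu = sphere_exp(mu, step * grad)
            return mu

        pts = [np.array([1.0, 0.2, 0.0]), np.array([1.0, -0.1, 0.1]),
               np.array([0.9, 0.0, -0.2])]
        print(frechet_mean(pts))  # intrinsic mean; lies exactly on the sphere

    On infinite-dimensional spaces of diffeomorphisms or surfaces the same idea applies, but defining the metric and computing the exp/log maps is exactly where the deep problems mentioned above arise.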

    Depth Estimation Using 2D RGB Images

    Single-image depth estimation is an ill-posed problem: it is not mathematically possible to uniquely recover the third dimension (depth) from a single 2D image, so additional constraints must be incorporated to restrict the solution space. The first part of this dissertation therefore explores constraining the model for more accurate depth estimation by exploiting the similarity between the RGB image and the corresponding depth map at the geometric edges of the 3D scene. Although deep-learning-based methods are very successful in computer vision and handle noise well, they generalize poorly when the test and training distributions are not close. Geometric methods, by contrast, do not suffer from this generalization problem, since they exploit temporal information in an unsupervised manner; they are, however, sensitive to noise. At the same time, explicitly modeling dynamic scenes and flexible objects is a major challenge for traditional computer vision methods. Weighing the advantages and disadvantages of each approach, a hybrid method that benefits from both is proposed here, extending traditional geometric models to handle flexible and dynamic objects in the scene. This is made possible by relaxing the geometric computer-vision rules from one motion model for some areas of the scene to one motion model for every pixel, which enables the model to detect even small, flexible, floating debris in a dynamic scene. However, it also makes the optimization under-constrained. To turn the optimization from under-constrained to over-constrained while maintaining the model's flexibility, a "moving object detection loss" and a "synchrony loss" are designed, and the algorithm is trained in an unsupervised fashion. The preliminary results are not yet comparable to the current state of the art: the training process is slow, the algorithm lacks stability, and the optical flow model is noisy and naive. Solutions to address these issues are suggested at the end.
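    The abstract names the two new losses but does not give their form, so they are not reproduced here. What can be sketched is the standard geometric building block such unsupervised pipelines start from: back-project the target frame's pixels using its depth map, move them with a single rigid motion (R, t), re-project into the source frame, and measure the per-pixel photometric residual. Relaxing the single (R, t) to one motion per pixel, as the dissertation proposes, leaves this residual under-constrained, which is why the extra losses are needed. The nearest-neighbour sampling and all names below are illustrative.

        import numpy as np

        def backproject(depth, K):
            # lift every pixel to a 3D point using its depth and intrinsics K
            h, w = depth.shape
            ys, xs = np.mgrid[0:h, 0:w]
            pix = np.stack([xs, ys, np.ones_like(xs)], -1).reshape(-1, 3).T
            return (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)  # 3 x N

        def photometric_residual(img_t, img_s, depth_t, K, R, t):
            # warp the source frame into the target frame via one rigid motion
            pts = R @ backproject(depth_t, K) + t[:, None]  # points in source frame
            proj = K @ pts
            uv = proj[:2] / np.clip(proj[2], 1e-6, None)    # perspective projection
            u = np.clip(np.round(uv[0]).astype(int), 0, img_s.shape[1] - 1)
            v = np.clip(np.round(uv[1]).astype(int), 0, img_s.shape[0] - 1)
            warped = img_s[v, u].reshape(img_t.shape)       # nearest-neighbour sample
            return np.abs(img_t - warped)                   # per-pixel photometric error

    With one motion per pixel, R and t become fields over the image, and this residual alone can no longer pin them down jointly with depth; the dissertation's additional losses are what restore an over-constrained problem.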