270 research outputs found

    Image Space Tensor Field Visualization Using a LIC-like Method

    Tensors are of great interest in many engineering and medical imaging applications, but their proper analysis and visualization remain challenging. Physics-based visualization has proven capable of showing the main features of symmetric second-order tensor fields in a continuous representation: it displays the most important information in the data, namely the main directions in medical diffusion tensor data, using texture, and encodes additional attributes using color. Nevertheless, its application and usability remain limited due to its computationally expensive and sensitive nature. We introduce a novel approach, motivated by image-space line integral convolution (LIC), to compute a fabric-like texture pattern from tensor fields on arbitrary non-self-intersecting surfaces. Our main focus lies on regaining the three-dimensionality of the data under user interaction, such as rotation and scaling. We employ a multi-pass rendering approach to estimate a proper modification of the LIC noise input texture that supports three-dimensional perception during user interaction.
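The core LIC idea referenced above, convolving a noise texture along streamlines of a direction field, can be sketched for the plain 2D case as follows. The fixed-step Euler integration and nearest-neighbor sampling are illustrative simplifications, not the paper's actual multi-pass image-space implementation:

```python
import numpy as np

def lic(vx, vy, noise, length=20):
    """Minimal line integral convolution: for each pixel, average the
    noise texture along a short streamline of the (vx, vy) field."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for py in range(h):
        for px in range(w):
            acc, n = 0.0, 0
            for sign in (1.0, -1.0):          # integrate both directions
                x, y = float(px), float(py)
                for _ in range(length):
                    i, j = int(round(y)), int(round(x))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    acc += noise[i, j]
                    n += 1
                    norm = np.hypot(vx[i, j], vy[i, j])
                    if norm < 1e-9:           # stop at zero vectors
                        break
                    x += sign * vx[i, j] / norm   # unit Euler step
                    y += sign * vy[i, j] / norm
            out[py, px] = acc / max(n, 1)
    return out
```

A constant horizontal field smears the noise along rows, producing the characteristic streak pattern; for tensor fields, the paper derives the directions from the tensor's eigenvectors instead.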

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science are constantly increasing. Technical evolution allows for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field. Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience uses a combination of three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscientists need specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Using OpenWalnut, standard and novel visualization approaches become available to neuroscientific researchers as well. Afterwards, I introduce a very specialized method to illustrate the causal relation of brain areas, which was previously only representable via abstract graph models. I finalize the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the used visualization techniques for the neuroscientific community. We exemplified these using clinically relevant scenarios.
    Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential to understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations, that is, on improving this interface. Unfortunately, visual improvements based on computer graphics methods from the computer game industry are often viewed skeptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, among others, is its seamless applicability to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. Those mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible for the first time to emphasize local details as well as global, spatial relations in dense line and point data.

    Structural connectivity-based segmentation of the human entorhinal cortex

    The medial (MEC) and lateral entorhinal cortex (LEC), widely studied in rodents, are well defined and characterized. In humans, however, the exact locations of their homologues remain uncertain. Previous functional magnetic resonance imaging (fMRI) studies have subdivided the human EC into posteromedial (pmEC) and anterolateral (alEC) parts, but uncertainty remains about the choice of imaging modality and seed regions, in particular in light of a substantial revision of the classical model of EC connectivity based on novel insights from rodent anatomy. Here, we used structural rather than functional imaging, namely diffusion tensor imaging (DTI) with probabilistic tractography, to segment the human EC based on differential connectivity to other brain regions known to project selectively to MEC or LEC. We defined MEC as more strongly connected with the presubiculum and retrosplenial cortex (RSC), and LEC as more strongly connected with distal CA1 and proximal subiculum (dCA1pSub) and the lateral orbitofrontal cortex (OFC). Although our DTI segmentation had a larger medial-lateral component than the previous fMRI studies, our results show that the human MEC and LEC homologues have a border oriented along both the posterior-anterior and medial-lateral axes, supporting the differentiation between pmEC and alEC.
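The connectivity-based labeling principle described here, assigning each EC voxel to whichever set of target regions it connects to more strongly, amounts to a winner-take-all over tractography streamline counts. A minimal sketch, with a hypothetical `conn` matrix of per-voxel connection counts (not the study's actual tractography pipeline):

```python
import numpy as np

def winner_take_all(conn):
    """Label each seed voxel by the target region it connects to most.

    conn: (n_voxels, n_targets) array of streamline counts from
    probabilistic tractography (hypothetical input layout).
    Returns one integer label per voxel, or -1 where no connection exists.
    """
    labels = conn.argmax(axis=1)
    labels[conn.sum(axis=1) == 0] = -1   # unconnected voxels stay unlabeled
    return labels

# toy example: 3 voxels, target 0 = MEC-associated, target 1 = LEC-associated
conn = np.array([[120, 30], [5, 80], [0, 0]])
print(winner_take_all(conn))   # [ 0  1 -1]
```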

    Visualization and analysis of diffusion tensor fields

    The power of medical imaging modalities to measure and characterize biological tissue is amplified by visualization and analysis methods that help researchers see and understand the structures within their data. Diffusion tensor magnetic resonance imaging can measure microstructural properties of biological tissue, such as the coherent linear organization of white matter in the central nervous system or the fibrous texture of muscle tissue. This dissertation describes new methods for visualizing and analyzing the salient structure of diffusion tensor datasets. Glyphs from superquadric surfaces and textures from reaction-diffusion systems facilitate inspection of data properties and trends. Fiber tractography based on vector-tensor multiplication allows major white matter pathways to be visualized. The generalization of direct volume rendering to tensor data allows large-scale structures to be shaded and rendered. Finally, a mathematical framework for analyzing the derivatives of tensor values, in terms of shape and orientation change, enables analytical shading in volume renderings and a method of feature detection important for feature-preserving filtering of tensor fields. Together, this combination of methods enhances the ability of diffusion tensor imaging to provide insight into the local and global structure of biological tissue.
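The glyph and shading quantities mentioned above all derive from the diffusion tensor's eigen-decomposition. A small sketch computing fractional anisotropy and Westin's linear/planar/spherical shape measures, the standard quantities behind superquadric glyph shaping (function name and layout are illustrative):

```python
import numpy as np

def tensor_shape(D):
    """Eigen-decompose a 3x3 symmetric diffusion tensor and return
    fractional anisotropy (FA) plus Westin's linear/planar/spherical
    shape measures (cl, cp, cs), which sum to one."""
    evals = np.linalg.eigvalsh(D)[::-1]          # sorted so l1 >= l2 >= l3
    l1, l2, l3 = evals
    md = evals.mean()                            # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
    tr = evals.sum()
    cl = (l1 - l2) / tr                          # linear (stick-like)
    cp = 2.0 * (l2 - l3) / tr                    # planar (disk-like)
    cs = 3.0 * l3 / tr                           # spherical (isotropic)
    return fa, (cl, cp, cs)

# strongly linear tensor: one dominant eigenvalue, as in coherent white matter
fa, (cl, cp, cs) = tensor_shape(np.diag([1.0, 0.1, 0.1]))
```

High `cl` selects an elongated superquadric glyph, high `cp` a flattened one, and high `cs` a sphere, which is how shape measures map onto glyph geometry.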

    Artificial intelligence for classification of temporal lobe epilepsy with ROI-level MRI data: A worldwide ENIGMA-Epilepsy study

    Artificial intelligence has recently gained popularity across medical fields as an aid in detecting disease from pathology samples or medical imaging findings. Brain magnetic resonance imaging (MRI) is a key assessment tool for patients with temporal lobe epilepsy (TLE). The role of machine learning and artificial intelligence in increasing the detection of brain abnormalities in TLE remains inconclusive. We used support vector machine (SVM) and deep learning (DL) models based on region-of-interest (ROI-based) structural (n = 336) and diffusion (n = 863) brain MRI data from patients with TLE with ("lesional") and without ("non-lesional") radiographic features suggestive of underlying hippocampal sclerosis from the multinational (multi-center) ENIGMA-Epilepsy consortium. With diffusion data, models identifying TLE performed better than or similarly to (68–75%) models lateralizing the side of TLE (56–73%); structural data showed the opposite pattern (67–75% to diagnose vs. 83% to lateralize). In other respects, structural- and diffusion-based models achieved similar classification accuracies. Classification models for patients with hippocampal sclerosis were more accurate (68–76%) than models stratifying non-lesional patients (53–62%). Overall, SVM and DL models performed similarly, with several instances in which SVM mildly outperformed DL. We discuss the relative performance of these models with ROI-level data and the implications for future applications of machine learning and artificial intelligence in epilepsy care.
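As a rough illustration of the ROI-level classification setup, not the consortium's actual pipeline, an SVM over per-subject ROI feature vectors might look like this; the feature dimensions, labels, and data here are synthetic stand-ins:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for ROI-level features: n_subjects x n_rois
rng = np.random.default_rng(42)
n_subjects, n_rois = 200, 40
X = rng.normal(size=(n_subjects, n_rois))
# hypothetical label (e.g. TLE vs. control) driven by 5 "informative ROIs"
y = (X[:, :5].sum(axis=1) > 0).astype(int)

# standardize ROI features, then fit a linear-kernel SVM
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
```

Cross-validated accuracy, as used above, is the kind of figure behind the percentage ranges reported in the abstract.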

    On connectivity in the central nervous system : a magnetic resonance imaging study

    Brain function has long been the realm of philosophy, psychology, and psychiatry, and, since the mid-1800s, of histopathology. With the advent of magnetic resonance imaging at the end of the last century, in vivo visualization of the human brain became available. This thesis describes the development of two unique techniques, imaging of the diffusion of water protons and manganese-enhanced imaging, that both allow for the depiction of white matter tracts. The reported studies show that these techniques can be used for a three-dimensional depiction of fiber bundles and that quantitative measures reflecting fiber integrity and neuronal function can be extracted from such data. In clinical applications, the potential use of the developed methods is illustrated in human gliomas, as a measure of fiber infiltration, and in spinal cord injury, to monitor potentially neuroprotective and regenerative medication.

    Reconstruction from Spatio-Spectrally Coded Multispectral Light Fields

    This work investigates spectrally coded multispectral light fields as captured by a light field camera with a spectrally coded microlens array. Two methods are developed for reconstructing the coded light fields: one based on the principles of compressed sensing and one deep learning method. The proposed reconstruction approaches are evaluated in detail on novel synthetic and real datasets.
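The compressed-sensing reconstruction principle, recovering a signal from coded (masked) measurements under a sparsity prior, can be sketched in 1D with ISTA and a DCT sparsifying basis. This toy setup is an assumption for illustration, not the thesis's actual light field model:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis functions)."""
    k = np.arange(n)[:, None]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (np.arange(n) + 0.5) * k / n)
    C[0] *= np.sqrt(0.5)
    return C

def ista(y, mask, C, lam=0.01, steps=500):
    """Minimize 0.5*||mask*(C.T @ a) - y||^2 + lam*||a||_1 via ISTA."""
    a = np.zeros(C.shape[0])
    t = 1.0  # valid step size: ||diag(mask) @ C.T|| <= 1 for a 0/1 mask
    for _ in range(steps):
        r = mask * (C.T @ a) - y                 # residual on observed pixels
        a = a - t * (C @ (mask * r))             # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - t * lam, 0.0)  # soft threshold
    return a

# toy 1D "scene", sparse in the DCT domain, with ~50% of pixels observed
n = 64
rng = np.random.default_rng(1)
a_true = np.zeros(n)
a_true[[0, 3, 7]] = [2.0, -1.5, 1.0]             # 3 active coefficients
C = dct_matrix(n)
x_true = C.T @ a_true
mask = (rng.random(n) < 0.5).astype(float)       # random coding mask
a_hat = ista(mask * x_true, mask, C)
x_hat = C.T @ a_hat                              # reconstructed signal
```

The thesis's coded microlens array plays the role of `mask` here; its actual measurement model couples the spatial, angular, and spectral dimensions rather than masking a 1D signal.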