
    Anisotropy Across Fields and Scales

    This open access book focuses on the processing, modeling, and visualization of anisotropy information, which are often addressed by employing sophisticated mathematical constructs such as tensors and other higher-order descriptors. It also discusses adaptations of such constructs to problems encountered in seemingly dissimilar areas of medical imaging, physical sciences, and engineering. Featuring original research contributions as well as insightful reviews for scientists interested in handling anisotropy information, it covers topics such as pertinent geometric and algebraic properties of tensors and tensor fields, challenges faced in processing and visualizing different types of data, statistical techniques for data processing, and specific applications like mapping white-matter fiber tracts in the brain. The book helps readers grasp the current challenges in the field and provides information on the techniques devised to address them. Further, it facilitates the transfer of knowledge between different disciplines in order to advance the research frontiers in these areas. This multidisciplinary book presents, in part, the outcomes of the seventh in a series of Dagstuhl seminars devoted to visualization and processing of tensor fields and higher-order descriptors, held in Dagstuhl, Germany, from October 28 to November 2, 2018.

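    The second-order tensors at the heart of the book admit simple scalar anisotropy measures. As a minimal illustration (not taken from the book), the sketch below computes the standard fractional anisotropy (FA) of a diffusion tensor from its eigenvalues; the example tensor values are hypothetical.

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy of a symmetric 3x3 diffusion tensor.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||, ranging from
    0 (isotropic) to 1 (maximally anisotropic).
    """
    lam = np.linalg.eigvalsh(D)   # eigenvalues of the symmetric tensor
    dev = lam - lam.mean()        # deviation from the isotropic part
    return np.sqrt(1.5) * np.linalg.norm(dev) / np.linalg.norm(lam)

# Hypothetical tensor with one dominant direction, at a typical
# white-matter diffusivity scale (mm^2/s).
D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])
print(fractional_anisotropy(D))   # strongly anisotropic: FA ~ 0.84
```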

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science is constantly increasing. Technical evolution allows for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting the visualization to the specific application area and the specific requirements of the scientific field.

    Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience combines three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Using OpenWalnut, standard as well as novel visualization approaches become available to neuroscientific researchers. Afterwards, I introduce a very specialized method to illustrate the causal relation of brain areas, which was previously only representable via abstract graph models. I conclude the first part with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the examined visualization techniques to the neuroscientific community, exemplified using clinically relevant scenarios.

    Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations -- on improving that interface. Unfortunately, visual improvements based on computer graphics methods from the computer game industry are often viewed sceptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, amongst others, is its seamless applicability to nearly every visualization technique. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. These mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique, it is possible for the first time to emphasize local details as well as global spatial relations in dense line and point data.
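
    The thesis's shading techniques are more involved than fits here, but the underlying screen-space idea is simple: treat the rendered image and its depth buffer as 2D arrays and enhance spatial cues in a post-process, independent of the visualization that produced them. The sketch below shows one such pass, a simple depth darkening in the spirit of unsharp masking the depth buffer; the function name, buffers, and parameters are illustrative, not the thesis's actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_darkening(color, depth, sigma=8.0, strength=2.0):
    """Screen-space post-process: darken pixels lying behind their surroundings.

    color : (H, W, 3) float array in [0, 1], the rendered image
    depth : (H, W) float array, normalized depth buffer (0 = near, 1 = far)
    """
    smoothed = gaussian_filter(depth, sigma)  # local average depth
    delta = smoothed - depth                  # negative where a pixel lies behind
    shade = np.clip(1.0 + strength * np.minimum(delta, 0.0), 0.0, 1.0)
    return color * shade[..., None]           # darken only the occluded side

# Usage with hypothetical buffers from any renderer:
rng = np.random.default_rng(0)
color, depth = rng.random((480, 640, 3)), rng.random((480, 640))
enhanced = depth_darkening(color, depth)
```

    Because such a pass needs only the framebuffer contents, it can be combined with line, point, or surface renderings alike, which is the seamless applicability the abstract refers to.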

    Higher-Order Tensors and Differential Topology in Diffusion MRI Modeling and Visualization

    Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) is a noninvasive method for creating three-dimensional scans of the human brain. It originated mostly in the 1970s and entered clinical use in the 1980s. Due to its low risk and relatively high image quality, it has proven to be an indispensable tool for studying medical conditions as well as for general scientific research. For example, it allows mapping fiber bundles, the major neuronal pathways through the brain. But all evaluation of scanned data depends on mathematical signal models that describe the raw signal output and map it to biologically more meaningful values. And here we find the greatest potential for improvement.

    In this thesis, we first present a new multi-tensor kurtosis signal model for DW-MRI. It can detect multiple overlapping fiber bundles and map them to a set of tensors. Compared to other, already widely used multi-tensor models, we add higher-order kurtosis terms to each fiber, which gives a more detailed quantification of fibers. These additional values can also be estimated by the Diffusion Kurtosis Imaging (DKI) method, but we show that in DKI they are drastically affected by fiber crossings, whereas our model handles them as intrinsic properties of fiber bundles. This reduces the effects of fiber crossings and allows a more direct examination of fibers. Next, we take a closer look at spherical deconvolution. It can be seen as a generalization of multi-fiber signal models to a continuous distribution of fiber directions. To this approach we introduce a novel mathematical constraint. We show that state-of-the-art methods for estimating the fiber distribution become more robust and gain accuracy when enforcing our constraint. Additionally, in the context of our own deconvolution scheme, it is algebraically equivalent to enforcing that the signal can be decomposed into fibers. This means that tractography and other methods that depend on identifying a discrete set of fiber directions greatly benefit from our constraint.

    Our third major contribution to DW-MRI deals with macroscopic structures of fiber bundle geometry. In recent years, the question has emerged whether crossing bundles form two-dimensional surfaces inside the brain. Although not completely obvious, there is a mathematical obstacle from differential topology that prevents the tangential planes spanned by fiber directions at each point from being connected into consistent surfaces in general. Research into how well this constraint is fulfilled in our brain has been hindered by the high precision and complexity required by previous evaluation methods. This is why we present a drastically simpler method that negates the need for precisely finding fiber directions and instead depends only on the simple diffusion tensor method (DTI). We then use our new method to explore and improve streamsurface visualization.
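
    Of the models mentioned above, DTI is the simplest: the measured signal follows S(g) = S0 * exp(-b * g^T D g) for a gradient direction g and b-value b, so log(S0/S) is linear in the six unique entries of the tensor D. The sketch below shows this standard least-squares fit (the thesis's multi-tensor kurtosis model and constrained deconvolution are substantially more elaborate); the directions, b-value, and signals are synthetic.

```python
import numpy as np

def fit_dti(signals, s0, bvecs, bval):
    """Least-squares fit of the diffusion tensor model S = S0 exp(-b g^T D g).

    signals : (N,) measured signals, s0 : unweighted signal,
    bvecs : (N, 3) unit gradient directions, bval : b-value (s/mm^2).
    """
    g = np.asarray(bvecs, float)
    # Design matrix for the 6 unique entries Dxx, Dyy, Dzz, Dxy, Dxz, Dyz.
    A = np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])
    y = -np.log(signals / s0) / bval          # linearized signal equation
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])

# Synthetic acquisition: 12 random unit directions, b = 1000 s/mm^2.
rng = np.random.default_rng(1)
bvecs = rng.normal(size=(12, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
D_true = np.diag([1.7e-3, 0.3e-3, 0.2e-3])
s = np.exp(-1000 * np.einsum('ni,ij,nj->n', bvecs, D_true, bvecs))
print(np.allclose(fit_dti(s, 1.0, bvecs, 1000), D_true, atol=1e-8))  # True
```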

    Probabilistic Ordinary Differential Equation Solvers - Theory and Applications

    Ordinary differential equations are ubiquitous in science and engineering, as they provide mathematical models for many physical processes. However, most practical purposes require the temporal evolution of a particular solution. Many relevant ordinary differential equations are known to lack closed-form solutions in terms of simple analytic functions. Thus, users rely on numerical algorithms to compute discrete approximations. Numerical methods replace the intractable, and thus inaccessible, solution by an approximating model with known computational strategies. This is akin to a process in statistics where an unknown true relationship is modeled with access to instances of said relationship. One branch of statistics, Bayesian modeling, expresses degrees of uncertainty with probability distributions. In recent years, this idea has gained traction for the design and study of numerical algorithms, establishing probabilistic numerics as a research field in its own right.

    The theory part of this thesis is concerned with bridging the gap between classical numerical methods for ordinary differential equations and probabilistic numerics. To this end, an algorithm is presented based on Gaussian processes, a general and versatile model for Bayesian regression. This algorithm is compared to two standard frameworks for the solution of initial value problems. It is shown that the maximum a-posteriori estimators of certain Gaussian process regressors coincide with certain multistep formulae. Furthermore, a particular initialization scheme based on an improper prior model coincides with a Runge-Kutta method for the first discretization step. This analysis provides a higher-order probabilistic numerical algorithm for initial value problems. Based on the probabilistic description, an estimator of the local integration error is presented, which is used in a step size adaptation scheme. The completed algorithm is evaluated on a benchmark of initial value problems, empirically confirming the theoretically predicted error rates and displaying particularly efficient performance on domains with low accuracy requirements.

    To establish the practical benefit of the probabilistic solution, a probabilistic boundary value problem solver is applied to a medical imaging problem. In tractography, diffusion-weighted magnetic resonance imaging data is used to infer the connectivity of neural fibers. The first application of the probabilistic solver shows how the quantification of the discretization error can be used in a subsequent estimation of fiber density. The second application additionally incorporates the measurement noise of the imaging data into the tract estimation model. These two extensions of the shortest-path tractography method give more faithful representations of data, modeling, and algorithmic uncertainty in neural connectivity studies.
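
    The thesis's Gaussian-process solver does not fit in a short sketch, but the core deliverable of probabilistic ODE solvers, a distribution over solutions whose spread reflects the discretization error, can be illustrated with a much simpler scheme: an Euler integrator whose steps are perturbed by noise at the order of the local truncation error, yielding an ensemble of plausible trajectories. This is a deliberately crude stand-in, not the method of the thesis, and the noise scaling is an illustrative assumption.

```python
import numpy as np

def probabilistic_euler(f, y0, t0, t1, h, n_samples=100, scale=0.1, seed=0):
    """Ensemble of Euler trajectories, each step perturbed at order O(h^2).

    Returns (ts, samples) where samples has shape (n_samples, len(ts));
    the spread across rows quantifies the discretization uncertainty.
    """
    rng = np.random.default_rng(seed)
    ts = np.arange(t0, t1 + h / 2, h)
    samples = np.empty((n_samples, len(ts)))
    for i in range(n_samples):
        y = y0
        samples[i, 0] = y
        for k in range(1, len(ts)):
            # Euler step plus noise at the size of the local truncation error.
            y = y + h * f(ts[k - 1], y) + scale * h**2 * rng.normal()
            samples[i, k] = y
    return ts, samples

# Logistic growth y' = y(1 - y) on a coarse grid: the ensemble mean tracks
# the solution while the standard deviation reports the numerical error.
ts, ys = probabilistic_euler(lambda t, y: y * (1 - y), 0.1, 0.0, 5.0, 0.25)
print(ys[:, -1].mean(), ys[:, -1].std())
```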

    Visual exploration of semantic-web-based knowledge structures

    Humans have a curious nature and seek a better understanding of the world. Data, information, and knowledge have become assets of modern society through the information technology revolution, most visibly in the form of the internet. However, with the growing size of accumulated data, new challenges emerge, such as searching and navigating in these large collections of data, information, and knowledge. Current developments in academic and industrial contexts target these challenges using Semantic Web technologies. The Semantic Web is an extension of the Web and provides machine-readable representations of knowledge for various domains. These machine-readable representations allow intelligent machine agents to understand the meaning of the data and information, and enable additional inference of new knowledge. Generally, the Semantic Web is designed for the exchange and processing of information and does not focus on presenting such semantically enriched data to humans. Visualizations support exploration, navigation, and understanding of data by exploiting humans' ability to comprehend complex data through visual representations. In the context of Semantic-Web-based knowledge structures, various visualization methods and tools are available, and new ones are being developed every year. However, suitable visualizations are highly dependent on individual use cases and targeted user groups.

    In this thesis, we investigate visual exploration techniques for Semantic-Web-based knowledge structures by addressing the following challenges: i) how to engage various user groups in modeling such semantic representations; ii) how to facilitate understanding using customizable visual representations; and iii) how to ease the creation of visualizations for various data sources and different use cases. The achieved results indicate that visual modeling techniques facilitate the engagement of various user groups in ontology modeling. Customizable visualizations enable users to adjust visualizations to their current needs and provide different views on the data. Additionally, customizable visualization pipelines enable rapid visualization generation for various use cases, data sources, and user groups.
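
    As a minimal illustration of such a pipeline (loading a Semantic-Web knowledge structure and turning it into a graph that any drawing backend can render), the sketch below parses RDF with rdflib and maps its triples into a networkx graph; the ontology snippet is hypothetical, and the thesis's customizable pipelines go far beyond this.

```python
import networkx as nx
import rdflib

# Hypothetical ontology snippet; a real pipeline would use g.parse("file.ttl").
TTL = """
@prefix ex: <http://example.org/> .
ex:Cat ex:subClassOf ex:Animal .
ex:Dog ex:subClassOf ex:Animal .
"""

g = rdflib.Graph()
g.parse(data=TTL, format="turtle")

# Map RDF triples (subject, predicate, object) onto a labeled directed graph.
viz = nx.DiGraph()
for s, p, o in g:
    viz.add_edge(str(s), str(o), label=str(p))

print(viz.number_of_nodes(), viz.number_of_edges())  # 3 nodes, 2 edges
```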

    Visualizing Uncertainty in HARDI Tractography Using Superquadric Streamtubes

    Standard streamtubes for the visualization of diffusion MRI data are rendered either with a circular or with an elliptic cross section whose aspect ratio indicates the relative magnitudes of the medium and minor eigenvalues. Inspired by superquadric tensor glyphs, we propose to render streamtubes with a superquadric cross section, which develops sharp edges to more clearly convey the orientation of the second and third eigenvectors where they are uniquely defined, while maintaining a circular shape when the smaller two eigenvalues are equal. As a second contribution, we apply our novel superquadric streamtubes to visualize uncertainty in the tracking direction of HARDI tractography, which we represent using a novel propagation uncertainty tensor.
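
    The geometric idea can be sketched with a superellipse: with equal medium and minor eigenvalues the cross section is a circle, and as the eigenvalues diverge the exponent drops below one and the shape develops increasingly sharp edges. The mapping from the eigenvalue ratio to the edge-sharpness exponent below is an illustrative assumption, not the authors' exact parameterization.

```python
import numpy as np

def superquadric_cross_section(lam2, lam3, gamma=3.0, n=64):
    """Sample a superellipse cross section for medium/minor eigenvalues.

    The curve |x/a|^(2/e) + |y/b|^(2/e) = 1 is a circle/ellipse for e = 1
    and develops sharp edges as e -> 0.
    """
    a, b = lam2, lam3                 # semi-axes scale with the eigenvalues
    e = (lam3 / lam2) ** gamma        # equal eigenvalues -> e = 1 -> circle
    t = np.linspace(0.0, 2.0 * np.pi, n)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** e
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** e
    return np.column_stack([x, y])

round_tube = superquadric_cross_section(1.0, 1.0)   # circular cross section
sharp_tube = superquadric_cross_section(1.0, 0.4)   # flattened, sharp-edged
```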