5 research outputs found

    Interactive real time simulation of cardiac radio-frequency ablation

    Best Paper Award. Virtual-reality-based therapy simulation is attracting growing interest from the medical community because of its potential impact on the training of medical residents and the planning of therapies. In this paper, we describe a prototype for rehearsing radio-frequency ablation of the myocardium in the context of cardiac arrhythmia. Our main focus has been on real-time modeling of electrophysiology suitable for representing simple cases of arrhythmia (ectopic focus, ventricular tachycardia). To this end, we use an anisotropic multi-front fast marching method to simulate transmembrane potential propagation in cardiac tissue. The electrical propagation is coupled with a pre-recorded beating heart model. Thanks to a 3D user interface, the user can interactively measure the local extracellular potential, pace the myocardium locally, or simulate the burning of cardiac tissue as done in radio-frequency ablation interventions. To illustrate this work, we show the simulation of various arrhythmia cases built from patient-specific medical images, including the right and left ventricles, the fiber orientation, and the location of ischemic regions.
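
    The activation-time solver is the core algorithmic ingredient here. The sketch below is a simplified, isotropic Dijkstra-style front propagation on a regular grid rather than the paper's anisotropic multi-front fast marching; the grid size, conduction speeds, and pacing-seed locations are illustrative assumptions only.

        import heapq

        import numpy as np

        def front_activation_times(speed, seeds, h=1.0):
            # Earliest activation time per grid cell from one or more pacing
            # sites, computed by Dijkstra-style front propagation on a 2D grid.
            ny, nx = speed.shape
            times = np.full((ny, nx), np.inf)
            heap = []
            for s in seeds:
                times[s] = 0.0
                heapq.heappush(heap, (0.0, s))
            while heap:
                t, (i, j) = heapq.heappop(heap)
                if t > times[i, j]:
                    continue  # stale heap entry
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < ny and 0 <= nj < nx:
                        cand = t + h / speed[ni, nj]
                        if cand < times[ni, nj]:
                            times[ni, nj] = cand
                            heapq.heappush(heap, (cand, (ni, nj)))
            return times

        # Hypothetical tissue patch: one ectopic focus and a slow ischemic region.
        speed = np.ones((64, 64))      # local conduction velocity per cell
        speed[20:30, 20:30] = 0.2      # ischemic tissue conducts more slowly
        activation = front_activation_times(speed, seeds=[(5, 5)])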

    Improving the Tractography Pipeline: on Evaluation, Segmentation, and Visualization

    Recent advances in tractography allow connectomes to be constructed in vivo, with applications in, for example, brain tumor surgery and the understanding of brain development and diseases. The large size of the data produced by these methods leads to a variety of problems, including how to evaluate tractography outputs, the development of faster processing algorithms for tractography and clustering, and the development of advanced visualization methods for verification and exploration. This thesis presents several advances in these fields. First, an evaluation of the robustness to noise of multiple commonly used tractography algorithms is presented. It employs a Monte Carlo simulation of measurement noise on a constructed ground-truth dataset. As a result of this evaluation, evidence for the robustness of global tractography is found, and algorithmic sources of uncertainty are identified. The second contribution is a fast clustering algorithm for tractography data based on k-means and vector fields representing the flow of each cluster. It is demonstrated that this algorithm can handle large tractography datasets due to its linear time and memory complexity, and that it can effectively integrate interrupted fibers that would be rejected as outliers by other algorithms. Furthermore, a visualization for the exploration of structural connectomes is presented. It uses illustrative rendering techniques for efficient presentation of connecting fiber bundles in context in anatomical space, and visual hints are employed to improve the perception of spatial relations. Finally, a visualization method for the exploration and verification of probabilistic tractography is presented, which improves on the previously published Fiber Stippling technique. It is demonstrated that the method is able to show multiple overlapping tracts in context and correctly present crossing fiber configurations.
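
    The clustering contribution lends itself to a small illustration. The sketch below is a plain k-means over streamlines resampled to a fixed number of points; the resampling length, the choice of k, and the Euclidean distance over concatenated coordinates are assumptions made for illustration. The thesis's algorithm additionally represents each cluster's flow by a vector field, which is what allows interrupted fibers to be integrated rather than rejected.

        import numpy as np

        def resample_fiber(points, n=20):
            # Resample one streamline (m x 3 array) to n points equally spaced
            # along its arc length, so all fibers share a common representation.
            seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
            arc = np.concatenate([[0.0], np.cumsum(seg)])
            t = np.linspace(0.0, arc[-1], n)
            return np.column_stack([np.interp(t, arc, points[:, d]) for d in range(3)])

        def cluster_fibers(fibers, k=10, iters=25, seed=0):
            # Plain k-means over resampled fibers; each fiber becomes a flat
            # 3n-dimensional feature vector, and each cluster center is a mean trajectory.
            rng = np.random.default_rng(seed)
            X = np.stack([resample_fiber(f).ravel() for f in fibers])
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                for c in range(k):
                    if np.any(labels == c):
                        centers[c] = X[labels == c].mean(axis=0)
            return labels, centers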

    Multivariate Pointwise Information-Driven Data Sampling and Visualization

    With the increasing computing capabilities of modern supercomputers, the size of the data generated by scientific simulations is growing rapidly. As a result, application scientists need effective data summarization techniques that can reduce large-scale multivariate spatiotemporal data sets while preserving the important data properties, so that the reduced data can answer domain-specific queries involving multiple variables with sufficient accuracy. While analyzing complex scientific events, domain experts often analyze and visualize two or more variables together to obtain a better understanding of the characteristics of the data features. Therefore, data summarization techniques are required that analyze multi-variable relationships in detail and then perform data reduction such that the important features involving multiple variables are preserved in the reduced data. To achieve this, we propose a data sub-sampling algorithm for statistical data summarization that leverages pointwise information-theoretic measures to quantify the statistical association of data points across multiple variables and generates a sub-sampled data set that preserves this multivariate association. Using such reduced sampled data, we show that multivariate feature queries and analyses can be carried out effectively. The efficacy of the proposed multivariate-association-driven sampling algorithm is demonstrated by applying it to several scientific data sets.
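
    Pointwise information measures are the heart of the sampling scheme, so a toy version helps make it concrete. The sketch below scores each grid point by the pointwise mutual information of its two variable values, estimated from a joint histogram, and retains points with probability proportional to that score. The two-variable restriction, the bin count, the keep fraction, and the particular weighting are illustrative assumptions, not the paper's exact formulation.

        import numpy as np

        def pmi_weighted_sample(var1, var2, keep_fraction=0.05, bins=64, seed=0):
            # Estimate PMI(x, y) = log p(x, y) / (p(x) p(y)) from a joint histogram
            # of the two variables, then keep a subset of points with probability
            # proportional to their (shifted) PMI score.
            rng = np.random.default_rng(seed)
            hist, xe, ye = np.histogram2d(var1, var2, bins=bins)
            pxy = hist / hist.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            with np.errstate(divide="ignore", invalid="ignore"):
                pmi = np.log(pxy / (px * py))
            pmi[~np.isfinite(pmi)] = 0.0
            ix = np.clip(np.digitize(var1, xe) - 1, 0, bins - 1)
            iy = np.clip(np.digitize(var2, ye) - 1, 0, bins - 1)
            weight = pmi[ix, iy] - pmi.min() + 1e-12   # shift so weights are positive
            prob = weight / weight.sum()
            n_keep = int(keep_fraction * len(var1))
            return rng.choice(len(var1), size=n_keep, replace=False, p=prob)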

    Visualization of Tensor Fields in Mechanics

    Tensors are used to describe complex physical processes in many applications. Examples include the distribution of stresses in technical materials, acting forces during seismic events, or the remodeling of biological tissues. While tensors encode such complex information with mathematical precision, the semantic interpretation of a tensor is challenging. Visualization can be beneficial here and is frequently used by domain experts. Typical strategies include the use of glyphs, color plots, lines, and isosurfaces. However, data complexity is nowadays accompanied by the sheer amount of data produced by large-scale simulations, which adds another level of obstruction between user and data. Given the limitations of traditional methods and the extra cognitive effort of simple methods, more advanced tensor field visualization approaches have been the focus of this work. This survey aims to provide an overview of recent research results with a strong application-oriented focus, targeting applications based on continuum mechanics, namely the fields of structural mechanics, biomechanics, and geomechanics. As such, the survey complements and extends previously published surveys. Its utility is twofold: (i) it serves as a basis for the visualization community to get an overview of recent visualization techniques; (ii) it emphasizes and explains the necessity of further research on visualization in this context.
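
    As a concrete anchor for the glyph-based strategies mentioned above, the sketch below derives basic ellipsoid-glyph parameters from a single symmetric stress tensor: the eigenvectors as glyph axes, the principal stresses as radii, and the von Mises stress as a scalar for color mapping. The example tensor and its units are hypothetical, and this is generic glyph construction rather than a technique from any one surveyed paper.

        import numpy as np

        def glyph_parameters(stress):
            # Eigendecomposition of a symmetric 3x3 stress tensor: the eigenvectors
            # give the glyph's principal axes, the eigenvalues its principal
            # stresses, and the von Mises stress is a common choice for color.
            eigvals, eigvecs = np.linalg.eigh(stress)
            order = np.argsort(eigvals)[::-1]              # sort descending
            eigvals, eigvecs = eigvals[order], eigvecs[:, order]
            s1, s2, s3 = eigvals
            von_mises = np.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))
            return eigvals, eigvecs, von_mises

        # Hypothetical stress state (MPa): tension along x plus a shear component.
        stress = np.array([[200.0, 30.0,  0.0],
                           [ 30.0, 50.0,  0.0],
                           [  0.0,  0.0, 10.0]])
        radii, axes, vm = glyph_parameters(stress)   # scale radii by abs() for drawing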

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science are increasing constantly. Technical evolution allows for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field. Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience uses a combination of three-dimensional data from a myriad of sources, such as MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Through OpenWalnut, standard and novel visualization approaches become available to neuroscientific researchers as well. Afterwards, I introduce a very specialized method to illustrate the causal relations between brain areas, which were previously only representable via abstract graph models. I finalize the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the used visualization techniques for the neuroscientific community, exemplified using clinically relevant scenarios. Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential to understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations -- on improving this interface. Unfortunately, visual improvements based on computer graphics methods from the computer game industry are often viewed skeptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, amongst others, is its seamless applicability to nearly every visualization technique as a post-processing step. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. Those mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible for the first time to emphasize both local details and global, spatial relations in dense line and point data.
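
    To illustrate why the screen-space paradigm composes with nearly any technique, the sketch below implements a minimal post-process: it reconstructs approximate normals from a depth buffer alone and applies diffuse shading, requiring nothing from the original rendering but that one image. The light direction and the gradient-based normal estimate are assumptions of this toy version; the line and point shading method developed in the thesis is considerably more involved.

        import numpy as np

        def screen_space_shade(depth, light=(0.4, 0.4, 0.8)):
            # Estimate per-pixel normals from depth gradients and apply diffuse
            # (Lambertian) lighting. Only the depth image is needed, so this can
            # be layered on top of almost any existing rendering as a final pass.
            light = np.asarray(light, dtype=float)
            light /= np.linalg.norm(light)
            dz_dy, dz_dx = np.gradient(depth)
            normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth)])
            normals /= np.linalg.norm(normals, axis=2, keepdims=True)
            diffuse = np.clip(normals @ light, 0.0, 1.0)
            return diffuse   # multiply into the original color buffer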