32 research outputs found

    Task-based Augmented Contour Trees with Fibonacci Heaps

    Full text link
    This paper presents a new algorithm for the fast, shared-memory, multi-core computation of augmented contour trees on triangulations. In contrast to most existing parallel algorithms, our technique computes augmented trees, enabling the full extent of contour-tree-based applications, including data segmentation. Our approach completely revisits the traditional, sequential contour tree algorithm to re-formulate all the steps of the computation as a set of independent local tasks. This includes a new computation procedure based on Fibonacci heaps for the join and split trees, two intermediate data structures used to compute the contour tree, whose constructions are efficiently carried out concurrently thanks to the dynamic scheduling of task parallelism. We also introduce a new parallel algorithm for the combination of these two trees into the output global contour tree. Overall, this results in superior time performance in practice, both sequentially and in parallel, thanks to the OpenMP task runtime. We report performance numbers that compare our approach to reference sequential and multi-threaded implementations for the computation of augmented merge and contour trees. These experiments demonstrate the run-time efficiency of our approach and its scalability on common workstations. We demonstrate the utility of our approach in data segmentation applications.
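
    To make the join/split tree computation concrete, below is a minimal sequential sketch of a join-tree sweep over a scalar field on a graph, using a global sort plus union-find; the paper's contribution replaces this global sort with task-parallel local growths whose traversal orders are maintained in Fibonacci heaps. The function and variable names are invented for illustration, and the sketch records only merge arcs rather than the fully augmented tree.

        # Minimal sequential join-tree sketch (union-find over a sorted sweep).
        # The paper parallelizes this idea by growing the tree from many seeds
        # at once, ordering each local growth with a Fibonacci heap.
        def join_tree_arcs(values, neighbors):
            """values: scalar per vertex; neighbors: adjacency lists. Returns merge arcs."""
            n = len(values)
            parent = list(range(n))                    # union-find forest

            def find(v):
                while parent[v] != v:
                    parent[v] = parent[parent[v]]      # path halving
                    v = parent[v]
                return v

            order = sorted(range(n), key=lambda v: values[v])   # sublevel-set sweep
            rank = {v: i for i, v in enumerate(order)}          # ties broken by sort order
            arcs, top = [], {}                         # top: component root -> newest vertex
            for v in order:
                for u in neighbors[v]:
                    if rank[u] < rank[v]:              # u was already swept
                        ru, rv = find(u), find(v)
                        if ru != rv:                   # two sublevel components join at v
                            arcs.append((top[ru], v))
                            parent[ru] = rv
                top[find(v)] = v
            return arcs

        # Example: a path graph 0-1-2 with a minimum at each end joins at vertex 1.
        print(join_tree_arcs([0.0, 2.0, 1.0], [[1], [0, 2], [1]]))   # [(0, 1), (2, 1)]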

    Object-based representation and analysis of light and electron microscopic volume data using Blender

    Get PDF
    BACKGROUND: Rapid improvements in light and electron microscopy imaging techniques and the development of 3D anatomical atlases necessitate new approaches for the visualization and analysis of image data. Pixel-based representations of raw light microscopy data suffer from limitations in the number of channels that can be visualized simultaneously. Complex electron microscopic reconstructions from large tissue volumes are also challenging to visualize and analyze. RESULTS: Here we exploit the advanced visualization capabilities and flexibility of the open-source platform Blender to visualize and analyze anatomical atlases. We use light-microscopy-based gene expression atlases and electron microscopy connectome volume data from larval stages of the marine annelid Platynereis dumerilii. We build object-based larval gene expression atlases in Blender and develop tools for annotation and coexpression analysis. We also represent and analyze connectome data, including neuronal reconstructions and the underlying synaptic connectivity. CONCLUSIONS: We demonstrate the power and flexibility of Blender for visualizing and exploring complex anatomical atlases. The resources we have developed for Platynereis will facilitate data sharing and the standardization of anatomical atlases for this species. The flexibility of Blender, particularly its embedded Python application programming interface, means that our methods can be easily extended to other organisms. The research leading to these results received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/European Research Council Grant Agreement 260821.
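
    As a flavor of the object-based approach, the sketch below uses Blender's embedded Python API (bpy, 2.8+ conventions) to register one gene-expression domain as a named, colored mesh object carrying a custom property that annotation and coexpression queries could filter on. It is a minimal illustration of the idea, not the authors' actual pipeline; the function name, property name, and gene name are invented.

        # Runs inside Blender's Python console or as a script (Blender 2.8+ API).
        import bpy

        def add_expression_domain(gene, verts, faces, rgba):
            """Create one gene-expression domain as a tagged, colored mesh object."""
            mesh = bpy.data.meshes.new(gene)
            mesh.from_pydata(verts, [], faces)         # vertices, edges, faces
            obj = bpy.data.objects.new(gene, mesh)
            bpy.context.collection.objects.link(obj)   # make it visible in the scene
            mat = bpy.data.materials.new(name=gene + "_mat")
            mat.diffuse_color = rgba                   # RGBA viewport color
            obj.data.materials.append(mat)
            obj["gene"] = gene                         # custom property for queries
            return obj

        # A single triangle standing in for a real expression-domain mesh.
        add_expression_domain("wnt5", [(0, 0, 0), (1, 0, 0), (0, 1, 0)],
                              [(0, 1, 2)], (0.9, 0.2, 0.2, 1.0))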

    General Purpose Flow Visualization at the Exascale

    Get PDF
    Exascale computing, i.e., supercomputers that can perform 10^18 mathematical operations per second, provides a significant opportunity for improving the computational sciences. That said, these machines can be difficult to use efficiently due to their massive parallelism, their use of accelerators, and the diversity of accelerators used. All areas of the computational science stack need to be reconsidered to address these problems. With this dissertation, we consider flow visualization, which is critical for analyzing vector field data from simulations. We specifically consider flow visualization techniques that use particle advection, i.e., tracing particle trajectories, which presents performance and implementation challenges. The dissertation makes four primary contributions. First, it synthesizes previous work on particle advection performance and introduces a high-level analytical cost model. Second, it proposes an approach for performance portability across accelerators. Third, it studies expected speedups from using accelerators, including the importance of factors such as duration, particle count, data set, and others. Finally, it proposes an exascale-capable particle advection system that addresses diversity in many dimensions, including accelerator type, parallelism approach, analysis use case, underlying vector field, and more.
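
    Since particle advection is the kernel the dissertation revolves around, here is a minimal sketch of what it computes: integrating one particle trajectory through a vector field with a fourth-order Runge-Kutta step. The analytic 2D field stands in for simulation data and all names are illustrative; real systems interpolate velocities from a mesh and batch many particles per accelerator.

        # Minimal particle-advection sketch: RK4 integration through a steady field.
        import numpy as np

        def velocity(p):
            """Steady 2D vector field (circular flow around the origin)."""
            x, y = p
            return np.array([-y, x])

        def advect(seed, dt=0.01, steps=1000):
            """Return the trajectory of one particle as an array of positions."""
            traj = [np.asarray(seed, dtype=float)]
            for _ in range(steps):
                p = traj[-1]
                k1 = velocity(p)
                k2 = velocity(p + 0.5 * dt * k1)
                k3 = velocity(p + 0.5 * dt * k2)
                k4 = velocity(p + dt * k3)
                traj.append(p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0)
            return np.array(traj)

        streamline = advect(seed=(1.0, 0.0))

    The loop also makes the cost model's main terms visible: total work scales with the number of particles, the number of steps each takes, and the cost of each velocity-field evaluation.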

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    Get PDF
    With the rapid development of science and technology, health and environmental problems have increasingly become among the most significant issues facing humanity. At the crossroads of information science, computer technology, electronic engineering, and biomedical engineering, modern engineering methods are studied to explore means of early diagnosis, treatment, and rehabilitation for diseases such as cancer. This thesis reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodologies, and their clinical applications. Starting from the concept of minimally invasive surgical navigation, it introduces preoperative and intraoperative multimodal medical imaging methods for medical big data; describes the core workflow of advanced surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, 3D visualization methods, and interactive software techniques; and summarizes the clinical applications of various minimally invasive surgical procedures. It also discusses the strengths and weaknesses of surgical navigation technologies worldwide in clinical use and analyzes the latest technical methods in the field. On this basis, it identifies trends in minimally invasive surgery toward digitalization, personalization, precision, integrated diagnosis and treatment, robotization, and high intelligence. [Abstract] Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the National Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.
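
    One recurring building block behind the navigation systems surveyed here is registering one modality's coordinate frame to another, for example aligning tracked instrument positions to preoperative CT. Below is a generic sketch of rigid point-based registration via the Kabsch/Procrustes method, under the assumption of known point correspondences; it illustrates the idea only and is not any specific system's method.

        # Generic rigid registration: find R, t minimizing ||R @ src_i + t - dst_i||^2.
        import numpy as np

        def rigid_register(src, dst):
            """src, dst: (N, 3) arrays of corresponding points. Returns (R, t)."""
            src_c = src - src.mean(axis=0)             # center both point sets
            dst_c = dst - dst.mean(axis=0)
            U, _, Vt = np.linalg.svd(src_c.T @ dst_c)  # cross-covariance SVD
            d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = dst.mean(axis=0) - R @ src.mean(axis=0)
            return R, t

        # Example: recover a known rotation + translation from four points.
        rng = np.random.default_rng(0)
        src = rng.normal(size=(4, 3))
        theta = 0.3
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                           [np.sin(theta),  np.cos(theta), 0.0],
                           [0.0, 0.0, 1.0]])
        dst = src @ R_true.T + np.array([1.0, 2.0, 3.0])
        R, t = rigid_register(src, dst)                # R approximates R_true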

    Hypersweeps, Convective Clouds and Reeb Spaces

    Get PDF
    Isosurfaces are one of the most prominent tools in scientific data visualisation. An isosurface is a surface that defines the boundary of a feature of interest in space for a given threshold. This is integral to analysing data from the physical sciences, which observe and simulate three- or four-dimensional phenomena. However, it is time-consuming and impractical to discover surfaces of interest by manually selecting different thresholds. The systematic way to discover significant isosurfaces in data is with a topological data structure called the contour tree. The contour tree encodes the connectivity and shape of each isosurface at all possible thresholds. The first part of this work has been devoted to developing algorithms that use the contour tree to discover significant features in data using high-performance computing systems. Those algorithms provided a clear speedup over previous methods and were used to visualise physical plasma simulations. A major limitation of isosurfaces and contour trees is that they are only applicable when a single property is associated with data points. However, scientific data sets often take multiple properties into account. A recent breakthrough generalised isosurfaces to fiber surfaces. Fiber surfaces define the boundary of a feature where the threshold is defined in terms of multiple parameters, instead of just one. In this work we used fiber surfaces together with isosurfaces and the contour tree to create a novel application that helps atmospheric scientists visualise convective cloud formation. Using this application, they were able, for the first time, to visualise the physical properties of certain structures that trigger cloud formation. Contour trees can also be generalised to handle multiple parameters. The natural extension of the contour tree is called the Reeb space, and it comes from the pure mathematical field of fiber topology. The Reeb space is not yet fully understood mathematically, and algorithms for computing it have significant practical limitations. A key difficulty is that while the contour tree is a traditional one-dimensional data structure made up of points and lines between them, the Reeb space is far more complex: it is made up of two-dimensional sheets, attached to each other in intricate ways. The last part of this work focuses on understanding the structure of Reeb spaces and the rules that are followed when sheets are combined. This theory builds towards developing robust combinatorial algorithms to compute and use Reeb spaces for practical data analysis.
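
    For concreteness, the sketch below extracts an isosurface from a synthetic scalar field with scikit-image's marching cubes, then approximates a fiber surface as the boundary of a small box in the range of two fields by running marching cubes on a derived indicator function. This is a generic illustration with invented field names and ranges, not the thesis's algorithms.

        # Isosurface vs. a crude fiber-surface analogue on synthetic fields.
        import numpy as np
        from skimage import measure

        x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
        f = x**2 + y**2 + z**2                      # first scalar field

        # Isosurface: boundary of {f <= 0.5}, a sphere of radius sqrt(0.5).
        verts, faces, normals, vals = measure.marching_cubes(f, level=0.5)

        # Fiber-surface analogue: boundary of a small box in the range of (f, g),
        # i.e. points where |f - 0.5| <= 0.1 and |g| <= 0.05 simultaneously.
        g = x * y * z                               # second scalar field
        h = np.maximum(np.abs(f - 0.5) / 0.1, np.abs(g) / 0.05) - 1.0
        fs_verts, fs_faces, _, _ = measure.marching_cubes(h, level=0.0)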

    Current Challenges in the Application of Algorithms in Multi-institutional Clinical Settings

    Get PDF
    The coronavirus disease pandemic has highlighted the importance of artificial intelligence in multi-institutional clinical settings. Particularly in situations where the healthcare system is overloaded and large amounts of data are generated, artificial intelligence has great potential to provide automated solutions and to unlock the untapped potential of acquired data. This includes the areas of care, logistics, and diagnosis. For example, automated decision support applications could tremendously help physicians in their daily clinical routine. Especially in radiology and oncology, the exponential growth of imaging data, triggered by a rising number of patients, leads to a permanent overload of the healthcare system, making the use of artificial intelligence inevitable. However, the efficient and advantageous application of artificial intelligence in multi-institutional clinical settings faces several challenges, such as accountability and regulation hurdles, implementation challenges, and fairness considerations. This work focuses on the implementation challenges, which include the following questions: how can well-curated and standardized data be ensured, how do algorithms from other domains perform on multi-institutional medical datasets, and how can more robust and generalizable models be trained? Questions of how to interpret results, and whether correlations exist between model performance and the characteristics of the underlying data, are also part of the work. Besides presenting a technical solution for manual data annotation and tagging of medical images, a real-world federated learning implementation for image segmentation is introduced. Experiments on a multi-institutional prostate magnetic resonance imaging dataset show that models trained by federated learning can achieve performance similar to training on pooled data. Furthermore, natural language processing algorithms for semantic textual similarity, text classification, and text summarization are applied to multi-institutional, structured and free-text, oncology reports. The results show that performance gains are achieved by customizing state-of-the-art algorithms to the peculiarities of medical datasets, such as the occurrence of medications, numbers, or dates. In addition, performance is observed to depend on characteristics of the data, such as lexical complexity. The generated results, human baselines, and retrospective human evaluations demonstrate that artificial intelligence algorithms have great potential for use in clinical settings. However, due to the difficulty of processing domain-specific data, a performance gap remains between the algorithms and the medical experts. In the future, it is therefore essential to improve the interoperability and standardization of data, and to continue working on algorithms that perform well on medical, possibly domain-shifted, data from multiple clinical centers.
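
    The federated setup described here can be summarized by federated averaging: each institution trains on its own data, and only model weights leave the site to be aggregated centrally. The sketch below is a minimal, framework-agnostic illustration under that assumption; the client interface and all names are hypothetical, and real deployments add secure aggregation, privacy protections, and robust client handling.

        # Minimal FedAvg round: broadcast weights, train locally, average by size.
        import numpy as np

        def federated_round(global_weights, clients, local_train):
            """global_weights: list of arrays; local_train(client, weights) -> (weights, n)."""
            updates, sizes = [], []
            for c in clients:
                w, n = local_train(c, [layer.copy() for layer in global_weights])
                updates.append(w)                  # only weights are shared, never data
                sizes.append(n)
            total = float(sum(sizes))
            return [sum((n / total) * w[i] for w, n in zip(updates, sizes))
                    for i in range(len(global_weights))]

        # Toy example: two "institutions" nudging a single weight matrix.
        def fake_local_train(client, weights):
            return [w + client["shift"] for w in weights], client["n_samples"]

        clients = [{"shift": 0.1, "n_samples": 50}, {"shift": -0.2, "n_samples": 150}]
        new_w = federated_round([np.zeros((2, 2))], clients, fake_local_train)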

    Towards Data-Driven Large Scale Scientific Visualization and Exploration

    Get PDF
    Technological advances have enabled us to acquire extremely large datasets, but it remains a challenge to store, process, and extract information from them. This dissertation builds upon recent advances in machine learning, visualization, and user interaction to facilitate the exploration of large-scale scientific datasets. First, we use data-driven approaches to computationally identify regions of interest in the datasets. Second, we use visual presentation for effective user comprehension. Third, we provide interactions for human users to integrate domain knowledge and semantic information into this exploration process. Our research shows how to extract, visualize, and explore informative regions in very large 2D landscape images, 3D volumetric datasets, high-dimensional volumetric mouse brain datasets with thousands of spatially mapped gene expression profiles, and geospatial trajectories that evolve over time. The contributions of this dissertation include: (1) We introduce a sliding-window saliency model that discovers regions of user interest in very large images; (2) We develop visual segmentation of intensity-gradient histograms to identify meaningful components in volumetric datasets; (3) We extract boundary surfaces from a wealth of volumetric gene expression mouse brain profiles to personalize the reference brain atlas; (4) We show how to efficiently cluster geospatial trajectories by mapping each sequence of locations to a high-dimensional point with the kernel distance framework. We aim to discover patterns, relationships, and anomalies that will lead to new scientific, engineering, and medical advances. This work represents one of the first steps toward better visual understanding of large-scale scientific data by combining machine learning and human intelligence.
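
    As a rough illustration of contribution (1), the sketch below scores sliding windows by how strongly their intensity histogram deviates from the global image histogram, a simple stand-in for the dissertation's saliency model; the scoring function, window size, and other parameters are invented for illustration.

        # Sliding-window saliency sketch: rank windows by histogram deviation.
        import numpy as np

        def window_saliency(image, win=64, stride=32, bins=32):
            """Return (score, (row, col)) pairs, most salient windows first."""
            hist_range = (image.min(), image.max())
            global_h, _ = np.histogram(image, bins=bins, range=hist_range, density=True)
            scores = []
            for i in range(0, image.shape[0] - win + 1, stride):
                for j in range(0, image.shape[1] - win + 1, stride):
                    h, _ = np.histogram(image[i:i + win, j:j + win], bins=bins,
                                        range=hist_range, density=True)
                    # Larger L1 distance from the global histogram = more salient.
                    scores.append((np.abs(h - global_h).sum(), (i, j)))
            return sorted(scores, reverse=True)

        img = np.random.rand(256, 256)
        top_windows = window_saliency(img)[:5]     # five most salient windows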

    Interacciones en Visualización

    Get PDF
    Nowadays, the vertiginous growth of information generates volumes of data that are increasingly large and difficult to understand and analyze. The contribution of visualization to the exploration and understanding of these large data sets is highly significant. Different application domains usually require different visual representations; however, several of them share intermediate data states and transformations, and/or require similar manipulations at the view level. These common denominators suggest the need for a visualization model that is consistent across all visualization areas and valid for different application domains. In this context, the Unified Visualization Model (MUV) is defined: a model of states represented as a flow among the different states assumed by the data throughout the process. The nature of the visualization process requires that the user be able to interact with the data and its intermediate representations, control the transformations, and manipulate the visualizations. In this context, a taxonomy of interactions in visualization is essential for a better understanding of the interaction design space. The overall goal of this thesis is to define the interactions in visualization, and a classification of them, that is valid across different application domains. The defined interactions can be applied to the states and transformations of the visualization process. This, in turn, requires a representation for the data sets involved in the process that is flexible enough, and sufficiently oriented to visualization, to support the different classifications of data, attributes, data sets, and visual mappings present in the visualization literature. Finally, to study and validate the concepts introduced in this thesis, we designed and implemented SpinelViz and the Spinel Explorer, two prototypes for geological data visualization. For each prototype, a set of dedicated interactions was designed that significantly improved the traditional workflow of expert geologists. In addition, we showed how the presented classification and the defined operations help organize and facilitate the development of a visualization system in a specific application field. Fil: Ganuza, María Luján. Universidad Nacional del Sur. Departamento de Ciencias e Ingeniería de la Computación. Instituto de Ciencias e Ingeniería de la Computación; Argentina. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
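
    To give a feel for the state-flow idea, here is a toy sketch in the spirit of the Unified Visualization Model: data moves through named states via explicit transformations, so interactions can in principle target any stage (data, mapping, or view). The class, stage, and method names are invented, not the MUV's actual terminology.

        # Toy state-based visualization pipeline: named states, explicit transforms.
        class Pipeline:
            def __init__(self, raw_data):
                self.states = {"raw": raw_data}
                self.transforms = []               # ordered (src_state, dst_state, fn)

            def add_transform(self, src, dst, fn):
                self.transforms.append((src, dst, fn))

            def run(self):
                for src, dst, fn in self.transforms:
                    self.states[dst] = fn(self.states[src])
                return self.states

        pipe = Pipeline(raw_data=[3, 1, 4, 1, 5])
        pipe.add_transform("raw", "abstract", lambda d: sorted(d))               # data transform
        pipe.add_transform("abstract", "visual", lambda d: ["*" * v for v in d]) # visual mapping
        pipe.add_transform("visual", "view", lambda bars: "\n".join(bars))       # view assembly
        print(pipe.run()["view"])

    Because every intermediate state is named and inspectable, an interaction taxonomy can classify operations by the state or transformation they act on rather than by the final picture alone.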

    Doctor of Philosophy

    Get PDF
    A broad range of applications capture dynamic data at an unprecedented scale. Independent of the application area, finding intuitive ways to understand the dynamic aspects of these increasingly large data sets remains an interesting and, to some extent, unsolved research problem. Generically, dynamic data sets can be described by some, often hierarchical, notion of features of interest that exist at each moment in time and evolve across time. Consequently, exploring the evolution of these features is one natural way of studying such data sets. Usually, this process entails the ability to: 1) define and extract features from each time step in the data set; 2) find their correspondences over time; and 3) analyze their evolution across time. However, due to the large data sizes, visualizing the evolution of features in a comprehensible manner and performing interactive changes are challenging. Furthermore, feature evolution details are often unmanageably large and complex, making it difficult to identify the temporal trends in the underlying data. Additionally, many existing approaches develop these components in a specialized and standalone manner, thus failing to address the general task of understanding feature evolution across time. This dissertation demonstrates that interactive exploration of feature evolution can be achieved in a non-domain-specific manner so that it can be applied across a wide variety of application domains. In particular, it introduces a novel generic visualization and analysis environment that couples a multiresolution, unified spatiotemporal representation of features with progressive layout and visualization strategies for studying feature evolution across time. This flexible framework enables on-the-fly changes to feature definitions, their correspondences, and other arbitrary attributes while providing an interactive view of the resulting feature evolution details. Furthermore, to reduce the visual complexity within the feature evolution details, several subselection-based and localized, per-feature parameter-value-based strategies are also enabled. The utility and generality of this framework are demonstrated using several large-scale dynamic data sets.
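
    Step 2) above, finding feature correspondences over time, can be illustrated with a simple overlap criterion: two labeled features in consecutive time steps correspond when they share enough spatial extent. The sketch below is a generic example of that idea, not the dissertation's method; the threshold and all names are invented.

        # Overlap-based feature correspondence between two labeled time steps.
        import numpy as np

        def correspondences(labels_t0, labels_t1, min_overlap=0.5):
            """Match labeled features across time by relative cell overlap."""
            matches = []
            for a in np.unique(labels_t0):
                if a == 0:                         # 0 = background
                    continue
                mask_a = labels_t0 == a
                for b in np.unique(labels_t1[mask_a]):
                    if b == 0:
                        continue
                    overlap = np.logical_and(mask_a, labels_t1 == b).sum()
                    if overlap / mask_a.sum() >= min_overlap:
                        matches.append((a, b))
            return matches

        t0 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 2]])
        t1 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
        print(correspondences(t0, t1))             # [(1, 1)]: feature 2 disappeared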

    ESG-CET Final Progress Report

    Full text link
    Drawing to a close after five years of funding from DOE's ASCR and BER program offices, the SciDAC-2 project called the Earth System Grid (ESG) Center for Enabling Technologies has successfully established a new capability for serving data from distributed centers. The system enables users to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. The ESG software, now known as the Earth System Grid Federation (ESGF), has attracted a broad developer base and has been widely adopted; it now serves the most comprehensive multi-model climate data sets in the world. The system is used to support international climate model intercomparison activities as well as high-profile U.S. DOE, NOAA, NASA, and NSF projects. It currently provides more than 25,000 users access to more than half a petabyte of climate data (from models and from observations) and has enabled over 1,000 scientific publications.