
    09251 Abstracts Collection -- Scientific Visualization

    From June 14 to June 19, 2009, the Dagstuhl Seminar 09251 "Scientific Visualization" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, over 50 international participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general.

    Dense and Globally Consistent Multi-View Stereo

    Multi-View Stereo (MVS) aims at reconstructing dense scene geometry from a set of overlapping images captured from different viewing angles. This thesis addresses the MVS problem by estimating depth maps, since 2D-space operations are trivially parallelizable in contrast to 3D volumetric techniques. The typical setup of depth-map-based MVS approaches consists of per-view calculation followed by multi-view merging. Most solutions aim primarily at the most precise and complete surfaces for individual views while relaxing global geometric consistency. The resulting inconsistent estimates lead to a heavy processing workload in the merging stage and degrade the final reconstruction. Another issue is textureless areas, where the photo-consistency constraint cannot discriminate between different depths. These matching ambiguities are normally handled by incorporating plane features or a smoothness assumption, which might produce segmentation effects or depend on the accuracy and completeness of the calculated object edges. This thesis deals with two kinds of input data, photo collections and high-frame-rate videos, by developing distinct MVS algorithms based on their characteristics. For sparsely sampled photos, we propose an advanced PatchMatch system that alternates between patch-based correlation maximization and pixel-based optimization of cross-view consistency, yielding a good trade-off between the photometric and geometric constraints. Moreover, our method achieves high efficiency by combining local pixel traversal with a hierarchical framework for fast depth propagation. For densely sampled videos, we mainly focus on recovering homogeneous surfaces, because the redundant scene information enables ray-level correlation that can generate sharp depth discontinuities. Our approach infers smooth surfaces for the enclosed areas using perspective depth interpolation, and subsequently tackles the occlusion errors connecting the fore- and background edges. In addition, our edge depth estimation is made more robust by accounting for unstructured camera trajectories. Exhaustively calculating depth maps is infeasible when modeling large scenes from videos, so this thesis further improves reconstruction scalability using an incremental scheme based on content-aware view selection and clustering. Our goal is to gradually eliminate visibility conflicts and increase surface coverage by processing a minimal subset of views. Constructing view clusters allows us to store merged, locally consistent points at the highest resolution, thus reducing memory requirements. None of the approaches presented in this thesis relies on high-level techniques, so they can be easily parallelized. Evaluations on various datasets and comparisons with existing algorithms demonstrate the superiority of our methods.
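    The alternation the abstract describes, between patch-based matching scores and propagation of good depth hypotheses across pixels, is the core of any PatchMatch-style stereo method. Below is a minimal, illustrative sketch of that loop, not the thesis's actual system: the one-parameter horizontal shift stands in for the real depth-induced warp between views, and the cross-view geometric consistency term and hierarchical scheduling are omitted. All names and constants are hypothetical.

```python
import numpy as np

def photo_consistency(ref, src, y, x, depth, patch=3):
    """Toy photo-consistency score (SSD between patches). A real system
    would warp the source patch through the camera geometry induced by
    the depth hypothesis; here a horizontal shift proportional to the
    inverse depth stands in for that warp."""
    h, w = ref.shape
    r = patch // 2
    dx = int(round(10.0 / depth))
    if y - r < 0 or y + r >= h or x - r < 0 or x - r + dx < 0 or x + r + dx >= w:
        return np.inf
    a = ref[y - r:y + r + 1, x - r:x + r + 1]
    b = src[y - r:y + r + 1, x - r + dx:x + r + dx + 1]
    return float(np.sum((a - b) ** 2))

def patchmatch_depth(ref, src, iters=3, dmin=1.0, dmax=10.0, seed=0):
    """Minimal PatchMatch loop: random initialization, alternating-direction
    spatial propagation, and random refinement with a shrinking radius."""
    rng = np.random.default_rng(seed)
    h, w = ref.shape
    depth = rng.uniform(dmin, dmax, size=(h, w))
    cost = np.array([[photo_consistency(ref, src, y, x, depth[y, x])
                      for x in range(w)] for y in range(h)])
    for it in range(iters):
        step = 1 if it % 2 == 0 else -1            # alternate scan direction
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        for y in ys:
            xs = range(w) if step == 1 else range(w - 1, -1, -1)
            for x in xs:
                # Propagation: adopt an already-visited neighbor's depth.
                for ny, nx in ((y - step, x), (y, x - step)):
                    if 0 <= ny < h and 0 <= nx < w:
                        c = photo_consistency(ref, src, y, x, depth[ny, nx])
                        if c < cost[y, x]:
                            depth[y, x], cost[y, x] = depth[ny, nx], c
                # Refinement: perturb within a shrinking interval.
                d = depth[y, x] + rng.uniform(-1.0, 1.0) * (dmax - dmin) / 2 ** (it + 1)
                if dmin <= d <= dmax:
                    c = photo_consistency(ref, src, y, x, d)
                    if c < cost[y, x]:
                        depth[y, x], cost[y, x] = d, c
    return depth

# Usage on synthetic images (hypothetical data): the 5-pixel shift mimics
# parallax, so interior pixels should converge toward depth ~ 10/5 = 2.
ref = np.random.default_rng(1).random((40, 60))
src = np.roll(ref, 5, axis=1)
print(patchmatch_depth(ref, src).mean())
```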

    Doctor of Philosophy

    In this dissertation, we advance the theory and practice of verifying visualization algorithms. We present techniques to assess visualization correctness through testing of important mathematical properties. Where applicable, these techniques allow us to distinguish whether anomalies in visualization features can be attributed to the underlying physical process or to artifacts from the implementation under verification. Such scientific scrutiny is at the heart of verifiable visualization: subjecting visualization algorithms to the same verification process that is used in other components of the scientific pipeline. The contributions of this dissertation are manifold. We derive the mathematical framework for the expected behavior of several visualization algorithms and compare it to experimentally observed results in the selected codes. In the Computational Science & Engineering (CS&E) community, this technique is known as the Method of Manufactured Solutions (MMS). We apply MMS to the verification of geometrical and topological properties of isosurface extraction algorithms, and of direct volume rendering. We derive the convergence of geometrical properties of isosurface extraction techniques, such as function value and normals. For the verification of topological properties, we use stratified Morse theory and digital topology to design algorithms that verify topological invariants. In the case of volume rendering algorithms, we provide the expected discretization errors for three different error sources. The results of applying MMS are another important contribution of this dissertation. We report unexpected behavior for almost all implementations tested. In some cases, we were able to find and fix bugs that prevented the correctness of the visualization algorithm. In particular, we address an almost 20-year-old bug in the core disambiguation procedure of Marching Cubes 33, one of the first algorithms intended to preserve the topology of the trilinear interpolant. Finally, an important by-product of this work is a range of responses practitioners can expect to encounter with the visualization technique under verification.
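    As a flavor of what an MMS-style test looks like, the sketch below checks the observed order of accuracy of trilinear interpolation against a manufactured scalar field with a known closed form. The dissertation's actual tests target isosurface geometry, topology, and volume-rendering discretization errors; this shows only the pattern (manufacture a solution, measure the error at two resolutions, compute the observed convergence order), with all names and grid sizes chosen for illustration.

```python
import numpy as np

def manufactured(x, y, z):
    """Manufactured scalar field with a known closed form."""
    return np.sin(x) * np.cos(y) + z ** 2

def trilinear_error(n):
    """Max trilinear-interpolation error at cell centers of an n^3 grid
    on [0,1]^3; trilinear interpolation at a cell center is the mean of
    the cell's eight corner samples."""
    g = np.linspace(0.0, 1.0, n + 1)
    f = manufactured(*np.meshgrid(g, g, g, indexing="ij"))
    interp = (f[:-1, :-1, :-1] + f[1:, :-1, :-1] + f[:-1, 1:, :-1] +
              f[:-1, :-1, 1:] + f[1:, 1:, :-1] + f[1:, :-1, 1:] +
              f[:-1, 1:, 1:] + f[1:, 1:, 1:]) / 8.0
    c = (g[:-1] + g[1:]) / 2.0
    exact = manufactured(*np.meshgrid(c, c, c, indexing="ij"))
    return float(np.abs(interp - exact).max()), 1.0 / n

# Observed order of accuracy from two grid resolutions; trilinear
# interpolation is second-order accurate, so the printed value should
# be close to 2. A buggy implementation would betray itself by failing
# to converge at the expected rate.
e1, h1 = trilinear_error(16)
e2, h2 = trilinear_error(32)
print("observed order:", np.log(e1 / e2) / np.log(h1 / h2))
```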

    Visualizing Large-Scale Uncertainty in Astrophysical Data


    Visualization and interpretability in probabilistic dimensionality reduction models

    Over the last few decades, data analysis has swiftly evolved from being a task addressed mainly within the remit of multivariate statistics to an endeavour in which data heterogeneity, complexity, and even sheer size, driven by computational advances, call for alternative strategies, such as those provided by pattern recognition and machine learning. Any data analysis process aims to extract new knowledge from data. Knowledge extraction is not a trivial task, and it is not limited to the generation of data models or the recognition of patterns. The use of machine learning techniques for multivariate data analysis should in fact aim to achieve a dual target: interpretability and good performance. At best, these two aspects should not conflict with each other. The gap between data modelling and knowledge extraction must be acknowledged, in the sense that we can only extract knowledge from models through a process of interpretation. Exploratory information visualization is becoming a very promising tool for interpretation. When exploring multivariate data through visualization, high data dimensionality can be a big constraint, and the use of dimensionality reduction techniques is often compulsory. The need for flexible methods of data modelling has led to the development of non-linear dimensionality reduction techniques, and many state-of-the-art approaches of this type fall within the domain of probabilistic modelling. These non-linear techniques can provide a flexible data representation and a more faithful model of the observed data than linear ones, but often at the expense of model interpretability, which has an impact on the model's visualization results. In manifold-learning non-linear dimensionality reduction methods, when a high-dimensional space is mapped onto a lower-dimensional one, the resulting embedded manifold is subject to local geometrical distortion induced by the non-linear mapping. This kind of distortion can often lead to misinterpretations of the structure of the data set and of the obtained patterns. It is therefore important to quantify and visualize the distortion itself in order to interpret the data more faithfully. The research reported in this thesis focuses on the development of methods and techniques for explicitly reintroducing the local distortion created by non-linear dimensionality reduction models into the low-dimensional visualization of the data they produce, as well as on the definition of metrics for probabilistic geometries to address this problem. We provide methods not only for static data but also for multivariate time series. The reintegration of the quantified non-linear distortion into the visualization space of the analysed non-linear dimensionality reduction methods is a goal in itself, but we go beyond it and consider alternative adequate metrics for probabilistic manifold learning. To that end, we study the role of random geometries, that is, distributions of manifolds, in machine learning and data analysis in general. Methods for the estimation of distributions of data-supporting Riemannian manifolds, as well as algorithms for computing interpolants over distributions of manifolds, are defined. Experimental results show that inference made according to the random Riemannian metric leads to a more faithful generation of unobserved data.
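    One concrete way to quantify the local distortion the abstract refers to is the magnification factor sqrt(det(J^T J)) of the latent-to-data mapping. The sketch below computes it over a latent grid for a hypothetical smooth mapping standing in for a trained probabilistic model; it illustrates the general idea, not the thesis's specific metrics.

```python
import numpy as np

def mapping(u, v):
    """Hypothetical smooth non-linear map from 2-D latent coordinates to
    3-D data space (a stand-in for a trained probabilistic model's mean
    mapping, e.g. a GTM or GP-LVM)."""
    return np.array([u, v, np.sin(3.0 * u) * np.cos(3.0 * v)])

def magnification(u, v, eps=1e-5):
    """Local area distortion sqrt(det(J^T J)) at (u, v), with the
    Jacobian J estimated by central finite differences."""
    J = np.column_stack([
        (mapping(u + eps, v) - mapping(u - eps, v)) / (2.0 * eps),
        (mapping(u, v + eps) - mapping(u, v - eps)) / (2.0 * eps),
    ])
    return float(np.sqrt(np.linalg.det(J.T @ J)))

# Magnification map over the latent grid: values far from 1 flag regions
# where the embedding locally stretches or compresses the data manifold,
# exactly the distortion one may want to overlay on the low-dimensional
# visualization to avoid misreading the data structure.
grid = np.linspace(-1.0, 1.0, 50)
mag = np.array([[magnification(u, v) for u in grid] for v in grid])
print(f"magnification range: [{mag.min():.2f}, {mag.max():.2f}]")
```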

    Visual Exploration And Information Analytics Of High-Dimensional Medical Images

    Data visualization has transformed how we analyze increasingly large and complex data sets. Advanced visual tools logically represent data in a way that communicates the most important information inherent within it and culminates the analysis in an insightful conclusion. Automated analysis disciplines, such as data mining, machine learning, and statistics, have traditionally been the most dominant fields for data analysis, complemented by a near-ubiquitous adoption of specialized hardware and software environments that handle the storage, retrieval, and pre- and postprocessing of digital data. The addition of interactive visualization tools brings an active human participant into the model creation process. The advantage is a data-driven approach in which the constraints and assumptions of the model can be explored and chosen based on human insight and confirmed on demand by the analytic system. This translates to a better understanding of the data and more effective knowledge discovery. This trend has become very popular across various domains, including machine learning, simulation, computer vision, genetics, the stock market, data mining, and geography. In this dissertation, we highlight the role of visualization within the context of medical image analysis in the field of neuroimaging. The analysis of brain images has uncovered remarkable traits of the brain's underlying dynamics. Multiple image modalities capture qualitatively different internal brain mechanisms and abstract them within the information space of that modality. Computational studies based on these modalities help correlate high-level brain function measurements with abnormal human behavior. These functional maps are easily projected into physical space through accurate 3-D brain reconstructions and visualized in excellent detail from different anatomical vantage points. Statistical models built for comparative analysis across subject groups test for significant variance within the features and localize abnormal behaviors, contextualizing the high-level brain activity. Currently, the task of identifying the features is based on empirical evidence, and preparing data for testing is time-consuming. Correlations among features are usually ignored due to lack of insight. With a multitude of features available and new modalities emerging, identifying the salient features and their interdependencies becomes harder. This limits the analysis to certain discernible features, restricting human judgment about the processes that govern the symptoms and hindering prediction. These shortcomings can be addressed by an analytical system that leverages data-driven techniques to guide the user toward discovering relevant hypotheses. The research contributions of this dissertation span multidisciplinary fields of study, including geometry processing, computer vision, and 3-D visualization. The principal achievement of this research, however, is the design and development of an interactive system for multimodality integration of medical images. The research proceeds in stages, briefly described as follows. First, we develop a rigorous geometry computation framework for brain surface matching. The brain is a highly convoluted structure of closed topology. Surface parameterization explicitly captures the non-Euclidean geometry of the cortical surface and helps derive a more accurate registration of brain surfaces. We describe a technique based on conformal parameterization that creates a bijective mapping to a canonical domain, where surface operations can be performed with improved efficiency and feasibility. Subdividing the brain into a finite set of anatomical elements provides the structural basis for a categorical division of anatomical viewpoints and a spatial context for statistical analysis. We present statistically significant results of our analysis of functional and morphological features for a variety of brain disorders. Second, we design and develop an intelligent and interactive system for visual analysis of brain disorders that utilizes the complete feature space across all modalities. Each subdivided anatomical unit is characterized by a vector of the features that overlap within that element. The analytical framework provides the interactivity necessary for exploring salient features and discovering relevant hypotheses. It provides visualization tools for confirming model results and an easy-to-use interface for manipulating parameters for feature selection and filtering. It provides coordinated display views for visualizing multiple features across multiple subject groups, visual representations for highlighting interdependencies and correlations between features, and an efficient data-management solution for maintaining provenance and issuing formal data queries to the back end.
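    The per-element group comparison described above can be illustrated with a tiny stand-in: a Welch's t-test over hypothetical subject-by-region-by-feature arrays, with a Bonferroni correction across tested cells. The array shapes, group labels, and thresholds are assumptions for illustration, not the dissertation's actual statistical models.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical feature tensors (subjects x anatomical regions x features),
# one per subject group; real inputs would come from the multimodal
# feature vectors attached to each anatomical element.
rng = np.random.default_rng(0)
patients = rng.normal(0.2, 1.0, size=(40, 68, 5))
controls = rng.normal(0.0, 1.0, size=(40, 68, 5))

# Welch's t-test per (region, feature) cell, a simple stand-in for the
# group-comparison models that localize abnormal behavior.
t, p = ttest_ind(patients, controls, axis=0, equal_var=False)

# Bonferroni correction across all tested cells, then report survivors.
alpha = 0.05 / p.size
for r, f in zip(*np.where(p < alpha)):
    print(f"region {r:2d}, feature {f}: t={t[r, f]:+.2f}, p={p[r, f]:.2e}")
```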

    Doctor of Philosophy

    With modern computational resources rapidly advancing towards exascale, large-scale simulations useful for understanding natural and man-made phenomena are becoming increasingly accessible. As a result, the size and complexity of the data representing such phenomena are also increasing, making the role of data analysis in propelling science even more integral. This dissertation presents research addressing some of the contemporary challenges in the analysis of vector fields, an important type of scientific data useful for representing a multitude of physical phenomena, such as wind flow and ocean currents. In particular, new theories and computational frameworks to enable consistent feature extraction from vector fields are presented. One of the most fundamental challenges in the analysis of vector fields is that their features are defined with respect to reference frames. Unfortunately, there is no single "correct" reference frame for analysis, and an unsuitable frame may cause features of interest to remain undetected, with serious physical consequences. This work develops new reference frames that enable the extraction of localized features that other techniques and frames fail to detect. As a result, these reference frames objectify the notion of "correctness" of features for certain goals by revealing the phenomena of importance in the underlying data. An important consequence of using these local frames is that the analysis of unsteady (time-varying) vector fields can be reduced to the analysis of sequences of steady (time-independent) vector fields, which can be performed using simpler and scalable techniques that allow better data management by accessing the data on a per-time-step basis. Nevertheless, the state-of-the-art analysis of steady vector fields is not robust, as most techniques are numerical in nature. The residual numerical errors can violate consistency with the underlying theory by breaching important fundamental laws, which may lead to serious physical consequences. This dissertation considers consistency the most fundamental characteristic of computational analysis, one that must always be preserved, and presents a new discrete theory that uses combinatorial representations and algorithms to provide consistency guarantees during vector field analysis, along with uncertainty visualization of the unavoidable discretization errors. Together, the two main contributions of this dissertation address two important concerns regarding feature extraction from scientific data: correctness and precision. The work presented here also opens new avenues for further research into more general reference frames and more sophisticated domain discretizations.
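    As a toy illustration of why the reference frame matters, the sketch below builds a hypothetical vortex advected by a uniform wind: in the ground frame, the only zero of the field is displaced away from the rotation center, while subtracting an estimated frame velocity recovers the true core and reduces the unsteady problem to per-snapshot steady analysis. The field, the grid, and the frame-estimation shortcut are all illustrative assumptions, not the dissertation's actual constructions.

```python
import numpy as np

def sample_field(t, X, Y):
    """Hypothetical unsteady 2-D field: a vortex whose center (t, 0)
    drifts rightward under a uniform ambient wind (1, 0)."""
    u = -Y + 1.0               # rotation about (t, 0) plus the wind
    v = X - t
    return u, v

def core_candidate(u, v, X, Y):
    """Grid point of minimum speed, the only critical-point candidate."""
    i = np.argmin(np.hypot(u, v))
    return np.array([X.flat[i], Y.flat[i]])

g = np.linspace(-4.0, 4.0, 401)
X, Y = np.meshgrid(g, g)

# Ground frame: the field's zero sits at (t, 1), displaced from the true
# rotation center by the wind, so a naive critical-point search finds
# the "wrong" feature location.
t0, t1 = 0.0, 1.0
p0 = core_candidate(*sample_field(t0, X, Y), X, Y)
p1 = core_candidate(*sample_field(t1, X, Y), X, Y)
print("ground-frame zeros:", p0, p1)              # ~(0, 1) and ~(1, 1)

# Estimate the frame velocity from the feature's drift, subtract it, and
# redo the per-snapshot (now steady) analysis in the co-moving frame.
c = (p1 - p0) / (t1 - t0)                         # ~(1, 0)
for t in (t0, t1):
    u, v = sample_field(t, X, Y)
    core = core_candidate(u - c[0], v - c[1], X, Y)
    print(f"co-moving core at t={t:.0f}:", core)  # ~(t, 0): the true core
```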