
    Fine-grained visualization pipelines and lazy functional languages

    The pipeline model in visualization has evolved from a conceptual model of data processing into a widely used architecture for implementing visualization systems. In the process, a number of capabilities have been introduced, including streaming of data in chunks, distributed pipelines, and demand-driven processing. Visualization systems have invariably been built on stateful programming technologies, and these capabilities have had to be implemented explicitly within the lower layers of a complex hierarchy of services. The good news for developers is that applications built on top of this hierarchy can access these capabilities without concern for how they are implemented. The bad news is that by freezing capabilities into low-level services, expressive power and flexibility are lost. In this paper we express visualization systems in a programming language that more naturally supports this kind of processing model. Lazy functional languages support fine-grained demand-driven processing, a natural form of streaming, and pipeline-like function composition for assembling applications. The technology thus appears well suited to visualization applications. Using surface extraction algorithms as illustrative examples, and the lazy functional language Haskell, we argue the benefits of clear and concise expression combined with fine-grained, demand-driven computation. Just as visualization provides insight into data, functional abstraction provides new insight into visualization.
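
    The central claim, that lazy evaluation gives streaming and demand-driven pipelines essentially for free, can be pictured with a minimal Haskell sketch. The `Cell` and `Triangle` types and the placeholder extraction step below are hypothetical stand-ins, not the paper's code; only the structure (pipeline as function composition over lazy lists) is the point.

```haskell
-- Minimal sketch: a surface-extraction pipeline as ordinary function
-- composition over lazy lists. Types and extraction logic are stand-ins.

type Cell     = [Double]                      -- sample values at a cell's corners
type Triangle = [(Double, Double, Double)]    -- one output primitive

-- Keep only cells that the isosurface actually crosses.
crossing :: Double -> [Cell] -> [Cell]
crossing iso = filter (\c -> minimum c <= iso && iso <= maximum c)

-- Placeholder per-cell geometry generation (a marching-cubes-like step).
extract :: Double -> [Cell] -> [Triangle]
extract _ = map (const [(0, 0, 0), (1, 0, 0), (0, 1, 0)])

-- The pipeline is just composition; laziness makes it demand-driven:
-- no cell is processed until a triangle is consumed downstream.
surface :: Double -> [Cell] -> [Triangle]
surface iso = extract iso . crossing iso

main :: IO ()
main = do
  let cells = replicate 100000 [0, 1, 0, 1, 0, 1, 0, 1]   -- stand-in dataset
  -- Forcing only five triangles pulls only as much data through the
  -- pipeline as needed; streaming happens without explicit chunking code.
  print (take 5 (surface 0.5 cells))
```

    Because each stage is an ordinary function, assembling a different pipeline is simply a different composition.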

    Correlative visualization techniques for multidimensional data

    Critical to the understanding of data is the ability to provide pictorial or visual representations of those data, particularly in support of correlative data analysis. Despite the advancement of visualization techniques for scientific data over the last several years, there are still significant problems in bringing today's hardware and software technology into the hands of the typical scientist. For example, computer science domains outside of computer graphics, such as data management, are required to make visualization effective. Well-defined, flexible mechanisms for data access and management must be combined with rendering algorithms, data transformation, etc. to form a generic visualization pipeline. A generalized approach to data visualization is critical for the correlative analysis of distinct, complex, multidimensional data sets in the space and Earth sciences. Different classes of data representation techniques must be used within such a framework, ranging from simple, static two- and three-dimensional line plots to animation, surface rendering, and volumetric imaging. Static examples of actual data analyses will illustrate the importance of an effective pipeline in a data visualization system.
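
    The generic pipeline the abstract calls for can be pictured as a small set of pluggable stages in which data access and management are separated from transformation and from the representation technique. The record below is a hypothetical sketch of that structure, not the system described in the paper; all type and field names are illustrative.

```haskell
-- Hypothetical sketch of a generic visualization pipeline: the same data
-- source can feed a line plot, a surface rendering, or a volume renderer
-- simply by swapping the render stage.

newtype Query   = Query String       -- e.g. a variable name plus a time range
newtype Dataset = Dataset [[Double]] -- multidimensional samples (stand-in)
newtype Image   = Image String       -- rendered output (stand-in)

data Pipeline = Pipeline
  { access    :: Query   -> IO Dataset   -- data management layer
  , transform :: Dataset -> Dataset      -- resampling, unit conversion, ...
  , render    :: Dataset -> Image        -- line plot, surface, volume, ...
  }

run :: Pipeline -> Query -> IO Image
run p q = do
  d <- access p q
  pure (render p (transform p d))

-- A stand-in pipeline instance; correlative analysis of two datasets is then
-- two pipelines sharing a renderer, or one whose transform merges sources.
example :: Pipeline
example = Pipeline
  { access    = \_ -> pure (Dataset [[1, 2, 3]])
  , transform = id
  , render    = \(Dataset d) -> Image ("rendered " ++ show (length d) ++ " series")
  }

main :: IO ()
main = do
  Image out <- run example (Query "temperature vs. density")
  putStrLn out
```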

    Judge Parker and the Public Service State

    The work described in this thesis is part of the Open Space project, a collaboration between Linköping University, NASA, and the American Museum of Natural History. The long-term goal of Open Space is a multi-purpose, open-source scientific visualization software. The thesis covers the research and implementation of a pipeline for preparing and rendering volumetric data. The developed pipeline consists of three stages: a data formatting stage which takes data from various sources and prepares it for the rest of the pipeline, a pre-processing stage which builds a tree structure of the raw data, and finally an interactive rendering stage which draws a volume using ray-casting. The pipeline is a fully working proof-of-concept for future development of Open Space, and can be used as-is to render space weather data using a combination of suitable data structures and an efficient data transfer pipeline. Many concepts and ideas from this work can be utilized in the larger-scale software project.
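
    The three-stage structure (format raw inputs, pre-process them into a spatial tree, then ray-cast interactively) can be summarised as a small type-level sketch. All types, constructors, and the placeholder compositing below are hypothetical, not the project's actual code.

```haskell
-- Illustrative sketch of the three stages described above.

data RawVolume = RawVolume [[[Double]]]              -- samples from a source format
data BrickTree = Leaf [Double] | Node [BrickTree]    -- tree built over the data
data Ray       = Ray (Double, Double, Double) (Double, Double, Double)
newtype Pixel  = Pixel Double

-- Stage 1: normalise heterogeneous inputs into one in-memory representation.
format :: FilePath -> IO RawVolume
format _ = pure (RawVolume [[[0.0]]])                -- stand-in loader

-- Stage 2: build the tree structure used when traversing the volume.
preprocess :: RawVolume -> BrickTree
preprocess (RawVolume v) = Leaf (concat (concat v))

-- Stage 3: interactive rendering; each ray samples the tree.
raycast :: BrickTree -> Ray -> Pixel
raycast (Leaf xs) _ = Pixel (sum xs)                 -- placeholder compositing
raycast (Node ts) r = Pixel (sum [p | t <- ts, let Pixel p = raycast t r])

-- The whole pipeline, end to end, for a single ray.
pipeline :: FilePath -> Ray -> IO Pixel
pipeline path ray = do
  raw <- format path
  pure (raycast (preprocess raw) ray)

main :: IO ()
main = do
  Pixel p <- pipeline "volume.cdf" (Ray (0, 0, 0) (0, 0, 1))
  print p
```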

    Odyssey: a semi-automated pipeline for phasing, imputation, and analysis of genome-wide genetic data

    BACKGROUND: Genome imputation, admixture resolution, and genome-wide association analyses are time-consuming and computationally intensive processes with many composite and requisite steps. Analysis time increases further when building and installing the programs required to run these analyses. For scientists who may not be as versed in programming languages, but who want to perform these operations hands on, there is a lengthy learning curve to utilize the vast number of programs available for these analyses. RESULTS: In an effort to streamline the entire process with easy-to-use steps for scientists working with big data, the Odyssey pipeline was developed. Odyssey is a simplified, efficient, semi-automated genome-wide imputation and analysis pipeline, which prepares raw genetic data, performs pre-imputation quality control, phasing, imputation, post-imputation quality control, population stratification analysis, and genome-wide association with statistical data analysis, including result visualization. Odyssey integrates programs such as PLINK, SHAPEIT, Eagle, IMPUTE, Minimac, and several R packages to create a seamless, easy-to-use, and modular workflow controlled via a single user-friendly configuration file. Odyssey was built with compatibility in mind, and thus utilizes the Singularity container solution, which can be run on Linux, MacOS, and Windows platforms. It is also easily scalable from a simple desktop to a High-Performance System (HPS). CONCLUSION: Odyssey facilitates efficient and fast genome-wide association analysis automation and can go from raw genetic data to genome:phenome association visualization and analysis results in 3-8 h on average, depending on the input data, the choice of programs within the pipeline, and available computer resources. Odyssey was built to be flexible, portable, compatible, scalable, and easy to set up. Biologists less familiar with programming can now work hands on with their own big data using this easy-to-use pipeline.
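
    The "single configuration file drives a modular workflow" idea is easy to picture. The sketch below is a generic, hypothetical config-driven runner in that style; the stage names, flags, and placeholder command strings are illustrative only and are not Odyssey's configuration format or its real tool invocations (PLINK, SHAPEIT, and Minimac are named only because the abstract mentions them).

```haskell
-- Hypothetical sketch of a config-driven, modular pipeline: one configuration
-- record decides which stages run, and each stage wraps an external tool.

import Control.Monad (when)

data Config = Config
  { runPreQC      :: Bool
  , runPhasing    :: Bool
  , runImputation :: Bool
  , runGWAS       :: Bool
  , inputPrefix   :: String
  }

-- Each stage would normally shell out to an external program; here we only
-- print the (placeholder) command line so the sketch stays self-contained.
stage :: String -> String -> IO ()
stage name cmd = putStrLn ("[pipeline] " ++ name ++ ": " ++ cmd)

run :: Config -> IO ()
run cfg = do
  let p = inputPrefix cfg
  when (runPreQC cfg)      $ stage "pre-imputation QC" ("plink --bfile " ++ p ++ " ...")
  when (runPhasing cfg)    $ stage "phasing"           ("shapeit " ++ p ++ " ...")
  when (runImputation cfg) $ stage "imputation"        ("minimac4 " ++ p ++ " ...")
  when (runGWAS cfg)       $ stage "association"       ("plink --assoc " ++ p ++ " ...")

main :: IO ()
main = run (Config True True True True "cohort1")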

    Analyzing Visual Mappings of Traditional and Alternative Music Notation

    In this paper, we postulate that combining the domains of information visualization and music studies paves the way for a more structured analysis of the design space of music notation, enabling the creation of alternative music notations that are tailored to different users and their tasks. Hence, we discuss the instantiation of a design and visualization pipeline for music notation that follows a structured approach based on the fundamental concepts of information and data visualization. This enables practitioners and researchers of digital humanities and information visualization alike to conceptualize, create, and analyze novel music notation methods. Based on the analysis of relevant stakeholders and their usage of music notation as a means of communication, we identify a set of relevant features typically encoded in different annotations and encodings, as used by interpreters, performers, and readers of music. We analyze the visual mappings of musical dimensions for varying notation methods to highlight gaps and frequent usages of encodings, visual channels, and Gestalt laws. This detailed analysis leads us to the conclusion that such an under-researched area in information visualization holds the potential for fundamental research. This paper discusses possible research opportunities, open challenges, and arguments that can be pursued in the process of analyzing, improving, or rethinking existing music notation systems and techniques.
    Comment: 5 pages including references; 3rd Workshop on Visualization for the Digital Humanities (Vis4DH), IEEE Vis 201
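
    One concrete way to read "visual mappings of musical dimensions" is as an explicit table from data dimensions to visual channels. The sketch below is purely illustrative and is not taken from the paper: it records a few well-known mappings of traditional Western staff notation, and an alternative notation would simply supply a different table over which gap analyses can be run.

```haskell
-- Illustrative sketch: a notation system's visual mapping made explicit as a
-- table from musical dimensions to visual channels.

data Dimension = Pitch | Duration | Loudness | Voice
  deriving (Show, Eq)

data Channel = VerticalPosition | GlyphShape | TextAnnotation | HorizontalGrouping
  deriving (Show, Eq)

traditional :: [(Dimension, Channel)]
traditional =
  [ (Pitch,    VerticalPosition)    -- staff line or space
  , (Duration, GlyphShape)          -- note-head shape, stems, and flags
  , (Loudness, TextAnnotation)      -- dynamics markings (p, f, ...)
  , (Voice,    HorizontalGrouping)  -- separate staves or stem direction
  ]

-- A design-space analysis is then a query over such tables, e.g. which
-- dimensions share a channel, or which channels a notation leaves unused.
channelFor :: Dimension -> [(Dimension, Channel)] -> Maybe Channel
channelFor = lookup

main :: IO ()
main = print (channelFor Pitch traditional)
```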

    Ruru: High-speed, Flow-level Latency Measurement and Visualization of Live Internet Traffic

    End-to-end latency is becoming an important metric for many emerging applications (e.g., 5G low-latency services) over the Internet. To better understand end-to-end latency, we present Ruru, a DPDK-based pipeline that exploits recent advances in high-speed packet processing and visualization. We present an operational deployment of Ruru over an international high-speed link running between Auckland and Los Angeles, and show how Ruru can be used for latency anomaly detection and network planning.
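
    Flow-level latency measurement of this kind can be sketched as matching packets travelling in opposite directions of the same flow and differencing their timestamps. The packet record and the TCP handshake heuristic below are simplified assumptions for illustration, not Ruru's DPDK implementation, which does this per packet at line rate and streams results to a visualization front end.

```haskell
-- Simplified sketch: pair each outbound TCP SYN with the matching inbound
-- SYN-ACK for the same flow and report the timestamp difference as a
-- per-flow latency sample.

import qualified Data.Map.Strict as M

type FlowKey = (String, String, Int, Int)   -- src IP, dst IP, src port, dst port

data Packet = Packet
  { key    :: FlowKey
  , isSyn  :: Bool
  , isAck  :: Bool
  , tstamp :: Double                        -- seconds since capture start
  }

-- Fold over a packet stream, remembering SYN times and emitting one latency
-- sample per completed handshake.
latencies :: [Packet] -> [(FlowKey, Double)]
latencies = go M.empty
  where
    go _ [] = []
    go pending (p : ps)
      | isSyn p && not (isAck p) =
          go (M.insert (key p) (tstamp p) pending) ps
      | isSyn p && isAck p
      , Just t0 <- M.lookup (reverseKey (key p)) pending =
          (reverseKey (key p), tstamp p - t0)
            : go (M.delete (reverseKey (key p)) pending) ps
      | otherwise = go pending ps
    reverseKey (a, b, c, d) = (b, a, d, c)

main :: IO ()
main = mapM_ print (latencies
  [ Packet ("10.0.0.1", "10.0.0.2", 40000, 443) True  False 0.000
  , Packet ("10.0.0.2", "10.0.0.1", 443, 40000) True  True  0.142
  ])
```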