17 research outputs found

    Automatic visualization and control of arbitrary numerical simulations

    Authors’ preprint version as submitted to ECCOMAS Congress 2016, Minisymposium 505 - Interactive Simulations in Computational Engineering. Abstract: Visualization of numerical simulation data has become a cornerstone for many industries and research areas today. There exists a large amount of software support, which is usually tied to specific problem domains or simulation platforms. However, numerical simulations have commonalities in the building blocks of their descriptions (e.g., dimensionality, range constraints, sample frequency). Instead of encoding these descriptions and their meaning into software architectures, we propose to base their interpretation and evaluation on a data-centric model. This approach draws much inspiration from the work of the IEEE Simulation Interoperability Standards Group as currently applied in distributed (military) training and simulation scenarios, and seeks to extend those ideas. By using an extensible self-describing protocol format, simulation users as well as simulation-code providers would be able to express the meaning of their data even if no access to the underlying source code is available or if new and unforeseen use cases emerge. A protocol definition will allow simulation-domain experts to describe constraints that can be used to automatically create appropriate visualizations of simulation data and control interfaces. Potentially, this will enable leveraging innovations on both the simulation and visualization sides of the problem continuum. We envision the design and development of algorithms and software tools for the automatic visualization of complex data from numerical simulations executed on a wide variety of platforms (e.g., remote HPC systems, local many-core or GPU-based systems). We also envisage using this automatically gathered information to control (or steer) the simulation while it is running, as well as providing the ability to fine-tune representational aspects of the visualizations produced.
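
    As a rough illustration of the kind of self-describing channel metadata such a protocol could carry, consider the following sketch (the `ChannelDescriptor` class, its fields, and the mapping to default visualizations are illustrative assumptions, not the protocol proposed in the paper):

    ```python
    from dataclasses import dataclass

    @dataclass
    class ChannelDescriptor:
        """Hypothetical self-describing metadata for one simulation output channel."""
        name: str                    # e.g. "temperature"
        dimensionality: int          # spatial rank of the field
        value_range: tuple           # (min, max) constraint on sample values
        sample_frequency_hz: float   # how often the simulation emits this channel
        units: str = ""              # optional physical units

    def suggest_visualization(desc: ChannelDescriptor) -> str:
        """Pick a default visualization from the descriptor alone,
        without any access to the simulation's source code."""
        if desc.dimensionality == 1:
            return "line plot"
        if desc.dimensionality == 2:
            return "heat map"
        return "volume rendering"

    # A consumer can act on the metadata without knowing the producer's code:
    temp = ChannelDescriptor("temperature", 3, (250.0, 400.0), 10.0, "K")
    print(suggest_visualization(temp))  # -> "volume rendering"
    ```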

    Supporting Engineering Processes Utilizing Service-Oriented Grid Technology

    Speeding up knowledge-intensive core processes in engineering and increasing the quality of their results is becoming more and more decisive, since economic pressure from national and international competitors and customers is rising. In particular, these demands far exceed the organizational and infrastructural capacities of small and medium-sized enterprises (SMEs). Hence, combining complementary core competencies across organizational boundaries is crucial for an enterprise's continuing success. Efficient and economically reasonable support of knowledge-intensive core processes in virtual organisations is therefore a predominant requirement for future IT infrastructures. The paradigm shift to service-orientation in Grid middleware opens the possibility of providing such support along the product lifecycle by employing a flexible software development approach, namely composing applications from standard components, which promises easier development and modification of Grid applications. In this paper, a service-oriented Grid computing approach is presented which aims at supporting distributed business processes in industry (see section 2 for industrial scenarios), from top-level modelling, workflow design and execution down to actual Grid service code (presented in section 3). Part of this gap between processes and code can be bridged by semi-automatically generated Grid service code. Orchestration of these Grid services is also automated by using a Grid-enabled workflow engine (see section 3). The feasibility of the proposed approach is demonstrated by presenting an exemplary process chain from the casting industry (see full paper).
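
    The composition idea can be sketched as follows (a minimal stand-in: the service stubs and the `Workflow` class are invented for illustration and bear no relation to the actual middleware or generated service code):

    ```python
    # Illustrative only: a process chain composed from standard service stubs,
    # mirroring the idea of orchestrating generated Grid services in a workflow.

    def mesh_generation(design):        # stand-in for a generated Grid service
        return {"mesh": f"mesh({design})"}

    def casting_simulation(inputs):     # stand-in for a domain solver service
        return {"result": f"simulated({inputs['mesh']})"}

    def report(inputs):
        return f"report on {inputs['result']}"

    class Workflow:
        """Minimal stand-in for a workflow engine: runs services in order,
        piping each service's output into the next one's input."""
        def __init__(self, *services):
            self.services = services

        def run(self, payload):
            for service in self.services:
                payload = service(payload)
            return payload

    chain = Workflow(mesh_generation, casting_simulation, report)
    print(chain.run("gearbox housing"))
    ```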

    Volume visualization of time-varying data using parallel, multiresolution and adaptive-resolution techniques

    This paper presents a parallel rendering approach that allows high-quality visualization of large time-varying volume datasets. Multiresolution and adaptive-resolution techniques are also incorporated to improve the efficiency of the rendering. Three basic steps are needed to implement this kind of application. First, we divide the task through decomposition of the data; this decomposition can be temporal, spatial, or a mix of both. After the data has been divided, each of the data portions is rendered by a separate processor to create sub-images or frames. Finally, these sub-images or frames are assembled into a final image or animation. After developing this application, several experiments were performed to show that this approach indeed saves time when a reasonable number of processors is used. We also conclude that the optimal number of processors depends on the size of the dataset used.
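
    The three steps can be sketched as follows (illustrative only; `render_slab` is a trivial stand-in for a real volume renderer, and the example uses spatial decomposition):

    ```python
    import numpy as np
    from multiprocessing import Pool

    def render_slab(slab):
        # Placeholder "renderer": composite one spatial slab along the z axis.
        return slab.sum(axis=0)

    def parallel_render(volume, n_procs=4):
        # Step 1: decompose the data (here spatially, along the z axis).
        slabs = np.array_split(volume, n_procs, axis=0)
        # Step 2: render each portion on a separate process to get sub-images.
        with Pool(n_procs) as pool:
            sub_images = pool.map(render_slab, slabs)
        # Step 3: assemble the sub-images into the final image.
        return np.sum(sub_images, axis=0)

    if __name__ == "__main__":
        volume = np.random.rand(64, 256, 256)   # one timestep of a volume dataset
        print(parallel_render(volume).shape)    # -> (256, 256)
    ```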

    VisTrails: enabling interactive multiple-view visualizations

    VisTrails is a new system that enables interactive multiple-view visualizations by simplifying the creation and maintenance of visualization pipelines, and by optimizing their execution. It provides a general infrastructure that can be combined with existing visualization systems and libraries. A key component of VisTrails is the visualization trail (vistrail), a formal specification of a pipeline. Unlike existing dataflow-based systems, in VisTrails there is a clear separation between the specification of a pipeline and its execution instances. This separation enables powerful scripting capabilities and provides a scalable mechanism for generating a large number of visualizations. VisTrails also leverages the vistrail specification to identify and avoid redundant operations. This optimization is especially useful while exploring multiple visualizations. When variations of the same pipeline need to be executed, substantial speedups can be obtained by caching the results of overlapping subsequences of the pipelines. In this paper, we describe the design and implementation of VisTrails, and show its effectiveness in different application scenarios.
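
    The caching idea can be sketched as follows (an illustrative toy, not VisTrails' actual design: pipelines are plain specifications, and results of shared prefixes are reused across executions on the same input):

    ```python
    # Cache keyed by the sequence of step names executed so far; this assumes
    # all pipeline variations start from the same source data.
    cache = {}

    def execute(pipeline, data):
        """Run a pipeline given as a list of (name, function) steps,
        reusing cached intermediate results for shared prefixes."""
        key = ()
        for name, fn in pipeline:
            key = key + (name,)          # identity of the prefix executed so far
            if key not in cache:
                cache[key] = fn(data)
            data = cache[key]
        return data

    read    = ("read",    lambda d: d * 2)
    filter_ = ("filter",  lambda d: d + 1)
    render  = ("render",  lambda d: f"image({d})")
    contour = ("contour", lambda d: f"contours({d})")

    print(execute([read, filter_, render], 10))   # computes all three steps
    print(execute([read, filter_, contour], 10))  # "read" and "filter" hit the cache
    ```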

    System Design and Algorithmic Development for Computational Steering in Distributed Environments

    Supporting visualization pipelines over wide-area networks is critical to enabling large-scale scientific applications that require visual feedback to interactively steer online computations. We propose a remote computational steering system that employs analytical models to estimate the cost of computing and communication components and optimizes the overall system performance in distributed environments with heterogeneous resources. We formulate and categorize the visualization pipeline configuration problems for maximum frame rate into three classes according to the constraints on node reuse or resource sharing, namely no, contiguous, and arbitrary reuse. We prove all three problems to be NP-complete and present heuristic approaches based on a dynamic programming strategy. The superior performance of the proposed solution is demonstrated with extensive simulation results in comparison with existing algorithms and is further evidenced by experimental results collected on a prototype implementation deployed over the Internet.
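
    The flavor of the dynamic programming approach can be sketched for the simplest variant (no node reuse; communication costs omitted for brevity, so this is a linear-partition toy rather than the paper's full formulation):

    ```python
    # Split a linear sequence of module costs into contiguous groups, one group
    # per node, minimizing the slowest group -- the bottleneck that bounds the
    # achievable frame rate of the pipeline.

    def best_bottleneck(costs, n_nodes):
        m = len(costs)
        prefix = [0.0]
        for c in costs:
            prefix.append(prefix[-1] + c)
        INF = float("inf")
        # dp[j][k] = minimal bottleneck for the first j modules placed on k nodes
        dp = [[INF] * (n_nodes + 1) for _ in range(m + 1)]
        dp[0][0] = 0.0
        for j in range(1, m + 1):
            for k in range(1, n_nodes + 1):
                for i in range(k - 1, j):          # last group is modules i..j-1
                    group = prefix[j] - prefix[i]
                    dp[j][k] = min(dp[j][k], max(dp[i][k - 1], group))
        return dp[m][n_nodes]

    costs = [3.0, 1.0, 4.0, 2.0, 2.0]      # per-module processing times (made up)
    b = best_bottleneck(costs, 3)
    print(b, "->", 1.0 / b, "frames/sec")  # bottleneck 4.0 -> 0.25 fps
    ```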

    The Potential and Problems in using High Performance Computing in the Arts and Humanities: the Researching e-Science Analysis of Census Holdings (ReACH) Project

    e-Science and high performance computing (HPC) have the potential to allow large datasets to be searched and analysed quickly, efficiently, and in complex and novel ways. Little use has been made of the processing power of grid technologies for humanities data, due to the lack of available large-scale datasets and little understanding of, or access to, e-Science technologies. The Researching e-Science Analysis of Census Holdings (ReACH) scoping study, an AHRC-funded e-Science workshop series, was established to investigate the potential application of grid computing to a large dataset of interest to historians, humanists, digital consumers, and the general public: historical census records. Consisting of three one-day workshops held at UCL in Summer 2006, the series brought together expertise from different domains to ascertain how useful and feasible it would be to analyse datasets from Ancestry and The National Archives using the HPC facilities available at UCL. This article details the academic, technical, managerial, and legal issues highlighted in the project when attempting to apply HPC to historical datasets. Additionally, generic issues facing humanities researchers attempting to utilise HPC technologies in their research are presented.

    Adaptive remote visualization system with optimized network performance for large scale scientific data

    This dissertation discusses algorithmic and implementation aspects of an automatically configurable remote visualization system, which optimally decomposes and adaptively maps the visualization pipeline to a wide-area network. The first node typically serves as a data server that generates or stores raw data sets, and a remote client resides on the last node, equipped with a display device ranging from a personal desktop to a powerwall. Intermediate nodes can be located anywhere on the network and often include workstations, clusters, or custom rendering engines. We employ a regression-model-based network daemon to estimate the effective bandwidth and minimal delay of a transport path using active traffic measurement. Data processing time is predicted for various visualization algorithms using block partitioning and statistical techniques. Based on the link measurements, node characteristics, and module properties, we strategically organize visualization pipeline modules such as filtering, geometry generation, rendering, and display into groups, and dynamically assign them to appropriate network nodes to achieve minimal total delay for post-processing or maximal frame rate for streaming applications. We propose polynomial-time algorithms using the dynamic programming method to compute optimal solutions for the problems of pipeline decomposition and network mapping under different constraints. A parallel remote visualization system, which comprises a logical group of autonomous nodes that cooperate to enable sharing, selection, and aggregation of various types of resources distributed over a network, is implemented and deployed at geographically distributed nodes for experimental testing. Our system is capable of handling a complete spectrum of remote visualization tasks, including post-processing, computational steering, and wireless sensor network monitoring. Visualization functionalities such as isosurface extraction, ray casting, streamlines, and line integral convolution (LIC) are supported in our system. The proposed decomposition and mapping scheme is generic and can be applied to other network-oriented computing applications whose components form a linear arrangement.
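
    The regression idea can be sketched as follows (an assumed linear model, delay = size/bandwidth + minimal delay, fitted to made-up probe measurements; the dissertation's actual daemon and model details are not shown):

    ```python
    import numpy as np

    # Active traffic measurement: probe messages of varying size and the
    # delays observed for each (values below are fabricated for illustration).
    sizes  = np.array([1e5, 5e5, 1e6, 5e6, 1e7])           # probe sizes (bytes)
    delays = np.array([0.013, 0.045, 0.085, 0.41, 0.82])   # measured delays (s)

    # Least-squares fit of  delay = slope * size + intercept :
    # slope ~ 1 / effective_bandwidth, intercept ~ minimal path delay.
    slope, intercept = np.polyfit(sizes, delays, 1)
    print(f"effective bandwidth ~ {1.0 / slope / 1e6:.1f} MB/s")
    print(f"minimal delay       ~ {intercept * 1e3:.1f} ms")
    ```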

    Concept-driven visualization for terascale data analytics

    Over the past couple of decades, the amount of scientific data has exploded. The science community has since been facing the common problem of being drowned in data, yet starved of information. Identification and extraction of meaningful features from large data sets has become one of the central problems of scientific research, for simulation as well as sensory data sets. The problems at hand are multifold and need to be addressed concurrently to provide scientists with the necessary tools, methods, and systems. Firstly, the underlying data structures and management need to be optimized for the kind of data most commonly used in scientific research, i.e. terascale, time-varying, multi-dimensional, multi-variate, and potentially non-uniform grids. This implies avoidance of data duplication, utilization of a transparent query structure, and use of sophisticated underlying data structures and algorithms. Secondly, in the case of scientific data sets, simplistic queries are not a sufficient method to describe subsets or features. For time-varying data sets, many features can generally be described as local events, i.e. spatially and temporally limited regions with characteristic properties in value space. While scientists most often know quite well what they are looking for in a data set, at times they cannot formally or definitively describe their concept to computer science experts, especially when it is based on partially substantiated knowledge. Scientists need to be enabled to query and extract such features or events directly, without having to rewrite their hypothesis in an inadequately simple query language. Thirdly, tools to analyze the quality and sensitivity of these event queries themselves are required. Understanding local data sensitivity is a necessity for enabling scientists to refine query parameters as needed to produce more meaningful findings. Query sensitivity analysis can also be utilized to establish trends for event-driven queries, i.e. how the query sensitivity differs between locations and over a series of data sets. In this dissertation, we present an approach that applies these interdependent measures to aid scientists in better understanding their data sets. An integrated system containing all of the above tools and system parts is presented.
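
    A toy sketch of an event query and a naive threshold-sensitivity check (all predicates, names, and thresholds are assumptions for illustration, not the dissertation's query language):

    ```python
    import numpy as np

    def event_mask(field, threshold):
        """Cells whose value exceeds the threshold -- a minimal value-space query."""
        return field > threshold

    def query_sensitivity(field, threshold, eps=0.05):
        """How much the selected region changes under a small perturbation of the
        threshold; large values flag parameter ranges where the query is unstable."""
        lo = event_mask(field, threshold - eps).sum()
        hi = event_mask(field, threshold + eps).sum()
        base = event_mask(field, threshold).sum()
        return (lo - hi) / max(base, 1)

    field = np.random.rand(128, 128)            # one timestep of a scalar field
    print(event_mask(field, 0.9).sum(), "cells selected")
    print(f"sensitivity near threshold: {query_sensitivity(field, 0.9):.3f}")
    ```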