
    Improving Safety-Critical Systems by Visual Analysis

    Importance analysis provides a means of quantifying the contribution of potential low-level system failures, in order to identify and assess vulnerabilities of safety-critical systems. Common approaches attempt to enhance system safety by addressing these vulnerabilities in an iterative analysis process, while considering relevant constraints, e.g., cost, to optimize the improvements. Typically, data regarding the analysis process is presented across several views with few interactive associations among them, which hampers the identification of meaningful information supporting the decision-making process. In this paper, we propose a visualization system that visually supports engineers in identifying suitable solutions. The visualization integrates a decision tree with a plot representing the cause-effect relationship between improvement ideas for vulnerabilities and the resulting reduction of system risk. Associating a component fault tree view with the plot preserves helpful context information. The introduced visualization approach enables system and safety engineers to identify and analyze optimal solutions, facilitating the improvement of overall system safety.
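    As a concrete illustration of the importance analysis this abstract builds on, the sketch below computes Birnbaum importance for the basic events of a toy fault tree. The tree structure, component names, and failure probabilities are hypothetical assumptions for illustration, not taken from the paper.

        from itertools import product

        # Hypothetical fault tree: the system fails if component A fails,
        # or if both B and C fail. Structure and probabilities are illustrative.
        probs = {"A": 0.01, "B": 0.05, "C": 0.02}

        def top_event(states):
            return states["A"] or (states["B"] and states["C"])

        def failure_prob(fixed=None):
            """Exact top-event probability by enumerating the free basic events."""
            fixed = dict(fixed or {})
            free = [n for n in probs if n not in fixed]
            total = 0.0
            for bits in product([0, 1], repeat=len(free)):
                states = dict(fixed)
                p = 1.0
                for name, failed in zip(free, bits):
                    states[name] = failed
                    p *= probs[name] if failed else 1 - probs[name]
                if top_event(states):
                    total += p
            return total

        # Birnbaum importance: I_B(e) = P(top | e failed) - P(top | e works).
        # Larger values mark the vulnerabilities most worth improving first.
        for e in probs:
            print(e, round(failure_prob({e: 1}) - failure_prob({e: 0}), 4))

    Ranking basic events by this measure is one standard way to decide which improvement ideas yield the largest risk reduction per change.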

    Doctor of Philosophy

    Correlation is a powerful relationship measure used in many fields to estimate trends and make forecasts. When the data are complex, large, and high dimensional, correlation identification is challenging. Several visualization methods have been proposed to solve these problems, but they all have limitations in accuracy, speed, or scalability. In this dissertation, we propose a methodology that provides new visual designs that show details when possible and aggregate when necessary, along with robust interactive mechanisms that together enable quick identification and investigation of meaningful relationships in large and high-dimensional data. We propose four techniques using this methodology. Depending on data size and dimensionality, the most appropriate visualization technique can be provided to optimize analysis performance. First, to improve correlation identification tasks between two dimensions, we propose a new correlation task-specific visualization method called the correlation coordinate plot (CCP). CCP transforms data into a powerful coordinate system for estimating the direction and strength of correlations among dimensions. Next, we propose three visualization designs to optimize correlation identification tasks in large and multidimensional data. The first is snowflake visualization (Snowflake), a focus+context layout for exploring all pairwise correlations. The next proposed design is a new interactive design for representing and exploring data relationships in parallel coordinate plots (PCPs) for large data, called data scalable parallel coordinate plots (DSPCP). Finally, we propose a novel technique for storing and accessing multiway dependencies through visualization (MultiDepViz). We evaluate these approaches using various use cases, compare them to prior work, and conduct user studies to demonstrate how our proposed approaches help users explore correlation in large data efficiently. Our results confirm that the CCP/Snowflake, DSPCP, and MultiDepViz methods outperform several current visualization techniques, such as scatterplots (SCPs), PCPs, the SCP matrix, Corrgram, Angular Histogram, and UntangleMap, in both accuracy and timing. Finally, these approaches are applied in real-world applications such as a debugging tool, large-scale code performance data, and large-scale climate data.
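    The pairwise correlation identification these designs target can be grounded with a minimal example. The sketch below runs a generic pairwise Pearson-correlation scan and ranks dimension pairs by strength, which is the kind of backend computation that layouts like Snowflake visualize; it is not the CCP transform itself, and the data, sizes, and injected correlation are made-up assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 10_000, 8                    # hypothetical data size and dimensionality
        data = rng.normal(size=(n, d))
        data[:, 1] = 0.9 * data[:, 0] + 0.1 * data[:, 1]  # inject one strong correlation

        corr = np.corrcoef(data, rowvar=False)            # d x d correlation matrix

        # Rank dimension pairs by |r| so the strongest relationships surface first,
        # one plausible ordering for a focus+context layout over all pairs.
        pairs = [(abs(corr[i, j]), i, j) for i in range(d) for j in range(i + 1, d)]
        for r, i, j in sorted(pairs, reverse=True)[:3]:
            print(f"dims ({i}, {j}): |r| = {r:.3f}")

    For truly large data, designs like DSPCP aggregate rather than scan exhaustively; the quadratic pair enumeration here is only viable for modest dimensionality.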

    i-JEN: Visual interactive Malaysia crime news retrieval system

    Supporting crime news investigation involves a mechanism to help monitor the current and past status of criminal events. We believe this could be well facilitated by focusing on the user interface and the crime event model aspects. In this paper we discuss the development of the Visual Interactive Malaysia Crime News Retrieval System (i-JEN) and describe the approach, the planned user studies, the system architecture, and future plans. Our main objectives are to construct crime-based events; investigate the use of crime-based events in improving classification and clustering; develop an interactive crime news retrieval system; visualize crime news in an effective and interactive way; integrate these into a usable and robust system; and evaluate usability and system performance. The system will serve as a news monitoring system that aims to automatically organize, retrieve, and present crime news in such a way as to support effective monitoring, searching, and browsing for the target user groups: the general public, news analysts, and police or crime investigators. The study will contribute to a better understanding of crime data consumption in the Malaysian context, and the developed system, with its visualization features for crime data, will support the eventual goal of combating crime.
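    To make the classification and clustering objective concrete, here is a minimal sketch that clusters news snippets with TF-IDF features and k-means. i-JEN's actual crime-based event model is richer than this, and the headlines, cluster count, and pipeline are invented assumptions for illustration only.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        docs = [
            "Robbery reported at Kuala Lumpur shopping mall",
            "Police arrest suspect in Penang burglary case",
            "New traffic rules announced for city center",
        ]  # hypothetical headlines

        # Vectorize the headlines, then group them into two clusters.
        X = TfidfVectorizer(stop_words="english").fit_transform(docs)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        for doc, label in zip(docs, labels):
            print(label, doc)

    An event-based model would additionally extract entities such as crime type, location, and date before grouping, rather than relying on raw term statistics alone.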

    Updates in metabolomics tools and resources: 2014-2015

    Data processing and interpretation represent the most challenging and time-consuming steps in high-throughput metabolomic experiments, regardless of the analytical platform (MS or NMR spectroscopy based) used for data acquisition. Improved instrumentation in metabolomics generates increasingly complex datasets that create the need for more and better processing and analysis software and in silico approaches to understand the resulting data. However, a comprehensive source of information describing the utility of the most recently developed and released metabolomics resources, in the form of tools, software, and databases, is currently lacking. Thus, here we provide an overview of freely available and open-source tools, algorithms, and frameworks to make both upcoming and established metabolomics researchers aware of the recent developments, in an attempt to advance and facilitate data processing workflows in their metabolomics research. The major topics include tools and resources for data processing, data annotation, and data visualization in MS- and NMR-based metabolomics. Most of the tools described in this review are dedicated to untargeted metabolomics workflows; however, some more specialized tools are described as well. All described tools and resources, including their analytical and computational platform dependencies, are summarized in an overview table.

    Interactive visualization and topology-based analysis of large-scale time-varying remote-sensing data: challenges and opportunities

    Over the last few years, the amount of large and complex data in the public domain has increased enormously, and new challenges have arisen in the representation, analysis, and visualization of such data. Considering the number of space missions that have provided and will provide remote sensing data, there is still a need for a system that can be distributed across several remote repositories while remaining accessible from a single client on commodity hardware. To tackle this challenge, at the DLR Institute for Software Technology we have defined a dual backend-frontend system enabling the interactive analysis and visualization of large-scale remote sensing data. The basis for all visualization and interaction approaches is CosmoScout VR, a visualization tool developed internally at DLR and publicly available on GitHub, which allows the visualization of complex planetary data and large simulation data in real time. The backend component of this system is based on an MPI framework called Viracocha, which enables the remote analysis of large data and allows efficient network usage by sending compact, partial results to CosmoScout for interactive visualization as soon as they are computed. A node-based interface is defined within the visualization tool, letting a domain expert easily define customized pipelines for processing and visualizing the remote data. Each "node" of this interface is linked either to a feature extraction module defined in Viracocha or to a rendering module defined directly in CosmoScout. Since this interface is completely customizable by the user, multiple pipelines can be defined over the same dataset to further enhance the visualization feedback for analysis purposes. As the project is ongoing, on top of these tools we plan to define and implement strategies based on Topological Data Analysis (TDA) as a novel approach to EO data processing and visualization. TDA is an emerging set of techniques for processing data according to its topological features. These include both the geometric information associated with a point and the non-geometric scalar values, such as temperature and pressure, that can be captured during a monitoring mission. One of the major theories behind TDA is Discrete Morse Theory, which, given a scalar function, is used to define a gradient on that function, extract the critical points, identify the region of influence of each critical point, and so on. This strategy is parameter-free and enables a domain scientist to process large datasets without prior knowledge of them. An interesting research question that will be investigated during this project is the correlation of changes of critical points at different time steps and the identification of deformations (or changes) across time in the original dataset.
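    To ground the TDA discussion, the sketch below extracts local extrema of a 2D scalar field, the simplest instance of the critical-point extraction that Discrete Morse Theory generalizes (which additionally recovers saddles and each critical point's region of influence). The random field stands in for a remote-sensing raster; nothing here reproduces the Viracocha modules themselves.

        import numpy as np

        rng = np.random.default_rng(1)
        field = rng.normal(size=(64, 64))  # hypothetical scalar raster, e.g. temperature

        def local_extrema(f):
            """A cell is an extremum if it dominates all 8 of its neighbours."""
            maxima, minima = [], []
            for i in range(1, f.shape[0] - 1):
                for j in range(1, f.shape[1] - 1):
                    patch = f[i - 1:i + 2, j - 1:j + 2]
                    if f[i, j] == patch.max():
                        maxima.append((i, j))
                    elif f[i, j] == patch.min():
                        minima.append((i, j))
            return maxima, minima

        maxima, minima = local_extrema(field)
        print(len(maxima), "local maxima,", len(minima), "local minima")

    Tracking how such critical points appear, move, or vanish between time steps is one plausible way to detect the changes across time mentioned above.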

    Visual Integration of Data and Model Space in Ensemble Learning

    Ensembles of classifier models typically deliver superior performance and can outperform single classifier models on a given dataset and classification task. However, this gain in performance comes at the cost of comprehensibility, posing a challenge to understand how each model affects the classification outputs and where the errors come from. We propose a tight visual integration of the data space and the model space for exploring and combining classifier models. We introduce a workflow that builds on this visual integration and enables the effective exploration of classification outputs and models. We then present a use case in which we start with an ensemble automatically selected by a standard ensemble selection algorithm, and show how we can manipulate the models and explore alternative combinations.
    Comment: 8 pages, 7 pictures
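    A minimal sketch of the kind of data/model-space inspection this abstract motivates: train a few classifiers, form a majority vote, and check which models contribute which errors. The classifiers, synthetic dataset, and voting rule are illustrative assumptions, not the paper's workflow or its ensemble selection algorithm.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.linear_model import LogisticRegression
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.naive_bayes import GaussianNB

        X, y = make_classification(n_samples=500, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        models = [LogisticRegression(max_iter=1000),
                  DecisionTreeClassifier(random_state=0),
                  GaussianNB()]
        preds = np.array([m.fit(X_tr, y_tr).predict(X_te) for m in models])
        vote = (preds.mean(axis=0) >= 0.5).astype(int)  # simple majority vote

        print("ensemble accuracy:", (vote == y_te).mean())
        for m, p in zip(models, preds):
            # Samples this model gets wrong even though the ensemble is right:
            unique_errors = ((p != y_te) & (vote == y_te)).sum()
            print(type(m).__name__, "accuracy:", (p == y_te).mean(),
                  "| errors masked by the vote:", unique_errors)

    Per-sample disagreement tables like this are the raw material that a visual integration of data and model space would let an analyst explore interactively.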