
    Dagstuhl News January - December 2011

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic

    Using parametric model order reduction for inverse analysis of large nonlinear cardiac simulations

    Predictive high-fidelity finite element simulations of human cardiac mechanics commonly require a large number of structural degrees of freedom. Additionally, these models are often coupled with lumped-parameter models of hemodynamics. High computational demands, however, slow down model calibration and therefore limit the use of cardiac simulations in clinical practice. As cardiac models rely on several patient-specific parameters, a single solution for one specific parameter set falls far short of clinical demands. Moreover, while solving the nonlinear problem, 90% of the computation time is spent solving linear systems of equations. We propose to reduce the structural dimension of a monolithically coupled structure-Windkessel system by projection onto a lower-dimensional subspace. We obtain a good approximation of the displacement field as well as of key scalar cardiac outputs even with very few reduced degrees of freedom, while achieving considerable speedups. For subspace generation, we use proper orthogonal decomposition of displacement snapshots. Following a brief comparison of subspace interpolation methods, we demonstrate how projection-based model order reduction can be easily integrated into a gradient-based optimization. We demonstrate the performance of our method in a real-world multivariate inverse analysis scenario. The presented projection-based model order reduction approach can significantly speed up model personalization and could be used for many-query tasks in a clinical setting.
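
    To make the core idea concrete, here is a minimal, hypothetical Python sketch of POD-based model order reduction: a reduced basis is extracted from displacement snapshots via an SVD, and a linear solve is projected onto it. The names (snapshots, K, f) and the random stand-in data are illustrative assumptions, not the paper's actual cardiac model.

        # Minimal POD / Galerkin-projection sketch (illustrative only).
        import numpy as np

        def pod_basis(snapshots, r):
            """Return the r leading left singular vectors of the snapshot matrix.

            snapshots: (n_dof, n_snapshots) array of displacement solutions.
            """
            U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
            return U[:, :r]                  # reduced basis V, shape (n_dof, r)

        def solve_reduced(K, f, V):
            """Galerkin projection: solve an r x r system instead of n x n."""
            K_r = V.T @ K @ V                # reduced operator (r x r)
            f_r = V.T @ f                    # reduced right-hand side
            q = np.linalg.solve(K_r, f_r)    # reduced coordinates
            return V @ q                     # approximate full displacement

        # Random stand-ins for finite element data:
        n, r = 1000, 10
        V = pod_basis(np.random.rand(n, 50), r)
        K = np.eye(n) + 0.01 * np.random.rand(n, n)
        u_approx = solve_reduced(K, np.random.rand(n), V)

    In the many-query setting described above, the expensive SVD is performed once, while each subsequent solve involves only the small reduced system.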

    Energy Savings for Cellular Network with Evaluation of Impact on Data Traffic Performance

    We present a concrete methodology for saving energy in contemporary and future cellular networks. It is based on re-arranging the user-cell association so as to allow shutting down under-utilized parts of the network. We consider a hypothetical static case in which we have complete knowledge of stationary user locations, so the results represent an upper bound on the potential energy savings. We formulate the problem as a binary integer programming problem, which is NP-hard, and we present a heuristic approximation method. We simulate the methodology on a real cellular network topology with traffic and user distributions generated according to recently measured patterns. Further, we evaluate the energy savings, using realistic energy profiles, and the impact on user-perceived network performance, represented by delay and throughput, at various times of day. We find that up to 50% of the energy may be saved during less busy periods, while the effects on performance remain limited. We conclude that a practical, real-time user-cell re-allocation methodology, taking user mobility predictions into account, may thus be feasible and bring significant energy savings at an acceptable performance impact.
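
    As a rough illustration of the re-association idea (not the paper's binary integer program), the following hypothetical Python sketch greedily tries to empty lightly loaded cells by moving their users to covering cells with spare capacity, switching a cell off only if all of its users can be re-assigned.

        # Greedy shutdown heuristic; all names and the capacity model
        # are assumptions for illustration, not the authors' formulation.
        def greedy_shutdown(cells, users, capacity, coverage):
            """cells: iterable of cell ids; users: dict user -> serving cell;
            capacity: dict cell -> max users; coverage: dict user -> set of
            cells that can serve that user. Returns (users, powered-off set)."""
            load = {c: 0 for c in cells}
            for c in users.values():
                load[c] += 1
            off = set()
            for cell in sorted(cells, key=lambda c: load[c]):  # least loaded first
                moved, feasible = {}, True
                for u in [u for u, c in users.items() if c == cell]:
                    alts = [a for a in coverage[u] if a != cell and a not in off
                            and load[a] + 1 <= capacity[a]]
                    if not alts:
                        feasible = False
                        break
                    target = min(alts, key=lambda a: load[a])  # least-loaded alternative
                    moved[u] = target
                    load[target] += 1
                if feasible:
                    users.update(moved)          # cell is now empty: switch it off
                    load[cell] = 0
                    off.add(cell)
                else:
                    for a in moved.values():     # roll back partial moves
                        load[a] -= 1
            return users, off

        users = {'u1': 'A', 'u2': 'A', 'u3': 'B'}
        coverage = {u: {'A', 'B'} for u in users}
        print(greedy_shutdown(['A', 'B'], users, {'A': 3, 'B': 3}, coverage)[1])  # {'B'}

    An exact binary integer program would instead optimize all assignments jointly; the greedy pass above only suggests the flavor of a fast approximation.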

    Measuring and improving the readability of network visualizations

    Network data structures have been used extensively for modeling entities and their ties across such diverse disciplines as Computer Science, Sociology, Bioinformatics, Urban Planning, and Archeology. Analyzing networks involves understanding the complex relationships between entities as well as any attributes, statistics, or groupings associated with them. The widely used node-link visualization excels at showing the topology, attributes, and groupings simultaneously. However, many existing node-link visualizations are difficult to extract meaning from because of (1) the inherent complexity of the relationships, (2) the number of items designers try to render in limited screen space, and (3) the many potentially unintelligible or even misleading visualizations possible for any given network. Automated layout algorithms have helped, but frequently generate ineffective visualizations even when used by expert analysts. Past work, including my own described herein, has shown that there can be vast improvements in network visualizations, but no one can yet produce readable and meaningful visualizations for all networks. Since there is no single way to visualize all networks effectively, in this dissertation I investigate three complementary strategies. First, I introduce a technique called motif simplification that leverages the repeating patterns, or motifs, in a network to reduce visual complexity. I replace common, high-payoff motifs with easily understandable glyphs that require less screen space, can reveal otherwise hidden relationships, and improve user performance on many network analysis tasks. Next, I present new Group-in-a-Box layouts that subdivide large, dense networks using attribute- or topology-based groupings. These layouts take group membership into account to more clearly show the ties within groups as well as the aggregate relationships between groups. Finally, I develop a set of readability metrics to measure visualization effectiveness and localize areas needing improvement. I detail optimization recommendations for specific user tasks, in addition to leveraging the readability metrics in a user-assisted layout optimization technique. This dissertation contributes an understanding of why some node-link visualizations are difficult to read, what measures of readability could help guide designers and users, and several promising strategies for improving readability that demonstrate progress is possible. This work also opens several avenues of research, both technical and in user education.
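
    To make "readability metric" concrete, here is a small Python sketch of one classic such measure, the edge-crossing count of a 2-D node-link layout; the intersection test and toy layout are my own illustration, not the dissertation's metric suite.

        # Count pairwise edge crossings in a layout (fewer usually reads better).
        from itertools import combinations

        def ccw(a, b, c):
            """True if points a, b, c make a counter-clockwise turn."""
            return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

        def segments_cross(p1, p2, p3, p4):
            """Proper-intersection test for segments p1p2 and p3p4."""
            return (ccw(p1, p3, p4) != ccw(p2, p3, p4) and
                    ccw(p1, p2, p3) != ccw(p1, p2, p4))

        def edge_crossings(pos, edges):
            """pos: dict node -> (x, y); edges: list of (u, v) pairs."""
            count = 0
            for (a, b), (c, d) in combinations(edges, 2):
                if len({a, b, c, d}) < 4:    # ignore edges sharing an endpoint
                    continue
                if segments_cross(pos[a], pos[b], pos[c], pos[d]):
                    count += 1
            return count

        pos = {1: (0, 0), 2: (1, 1), 3: (0, 1), 4: (1, 0)}
        print(edge_crossings(pos, [(1, 2), (3, 4)]))   # the two edges cross -> 1

    A user-assisted layout optimizer can then treat such scores as objectives, nudging node positions toward layouts with fewer crossings.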

    Exact Scalable Sensitivity Analysis for the Next Release Problem

    The nature of the requirements analysis problem, based as it is on uncertain and often inaccurate estimates of costs and effort, makes sensitivity analysis important. Sensitivity analysis allows the decision maker to identify those requirements and budgets that are particularly sensitive to misestimation. However, finding scalable sensitivity analysis techniques is not easy because the underlying optimization problem is NP-hard. This article introduces an approach to sensitivity analysis based on exact optimization. We implemented this approach as a tool, OATSAC, which allowed us to experimentally evaluate the scalability and applicability of Requirements Sensitivity Analysis (RSA). Our results show that OATSAC scales sufficiently well for practical applications of RSA. Using a real-world case study, we also show how sensitivity analysis can yield insights into difficult and otherwise obscure interactions between budgets, requirement costs, and estimate inaccuracies.
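
    For context, the underlying optimization in the Next Release Problem can be viewed as a 0/1 knapsack: choose requirements maximizing stakeholder value within a cost budget. The Python sketch below is a generic exact dynamic program with a budget sweep standing in for sensitivity analysis; it is an assumption for illustration, not OATSAC's formulation.

        # Toy exact Next Release Problem: 0/1 knapsack over integer costs.
        def next_release(values, costs, budget):
            """Return (best total value, indices of chosen requirements)."""
            n = len(values)
            best = [[0] * (budget + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                for b in range(budget + 1):
                    best[i][b] = best[i - 1][b]          # skip requirement i-1
                    if costs[i - 1] <= b:                # or take it if affordable
                        take = best[i - 1][b - costs[i - 1]] + values[i - 1]
                        best[i][b] = max(best[i][b], take)
            chosen, b = [], budget                       # backtrack the choices
            for i in range(n, 0, -1):
                if best[i][b] != best[i - 1][b]:
                    chosen.append(i - 1)
                    b -= costs[i - 1]
            return best[n][budget], sorted(chosen)

        values, costs = [60, 100, 120], [10, 20, 30]
        for budget in (40, 50, 60):      # sensitivity: re-solve per budget
            print(budget, next_release(values, costs, budget))

    Re-solving exactly across perturbed budgets or costs, as in the loop above, is the basic mechanism by which exact optimization exposes which estimates the optimal release is most sensitive to.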

    Identifying and Using Driver Nodes in Temporal Networks

    In many approaches developed for modeling complex networks, the main assumption is that the network is in a relatively stable state that can be approximated with a fixed topology. However, in several applications this approximation is not adequate because (a) the system modeled is dynamic by nature, and (b) the changes are an essential characteristic that cannot be approximated away. Temporal networks capture changes in the topology of networks by including the temporal information associated with their structural connections, i.e., links or edges. We focus here on controllability of temporal networks, that is, the study of steering the state of a network to any desired state at deadline tf within Δt = tf − t0 steps by stimulating key nodes called driver nodes. Recent studies provided analytical approaches to find a maximum controllable subspace for an arbitrary set of driver nodes. However, finding the minimum number of driver nodes Nc required to reach full control is computationally prohibitive. In this work, we propose a heuristic algorithm that quickly finds a suboptimal set of driver nodes of size Ns > Nc. We conduct experiments on synthetic and real-world temporal networks induced from ant colonies and the e-mail communications of a manufacturing company. The empirical results in both cases show that the heuristic algorithm efficiently identifies a small set of driver nodes that can fully control the networks. Also, as shown in the case of the ant interaction networks, driver nodes tend to have a large degree in temporal networks. Furthermore, we analyze the behavior of driver nodes within the context of their datasets, through which we observe that queen ants tend to avoid becoming driver nodes.
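
    As a hedged illustration of what a driver-node heuristic might look like (not the authors' algorithm), the Python sketch below greedily picks the node whose time-respecting paths cover the most not-yet-covered nodes, given contacts as (time, source, target) triples.

        # Greedy driver-node selection over a temporal contact sequence.
        def reachable(node, contacts):
            """Nodes reachable from `node` via time-respecting paths."""
            arrival = {node: float('-inf')}   # earliest arrival time per node
            for t, u, v in sorted(contacts):  # scan contacts in time order
                if u in arrival and arrival[u] <= t and v not in arrival:
                    arrival[v] = t
            return set(arrival)

        def greedy_drivers(nodes, contacts):
            """Add drivers until every node lies in some driver's reach."""
            reach = {n: reachable(n, contacts) for n in nodes}
            uncovered, drivers = set(nodes), []
            while uncovered:
                best = max(nodes, key=lambda n: len(reach[n] & uncovered))
                drivers.append(best)
                uncovered -= reach[best]
            return drivers

        contacts = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'a', 'd')]
        print(greedy_drivers(['a', 'b', 'c', 'd'], contacts))   # ['a']

    Reachability-based covering is only a proxy for full controllability, but it conveys why high-degree nodes, which reach many others early, tend to surface as drivers.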

    Improving Model-Based Software Synthesis: A Focus on Mathematical Structures

    Computer hardware keeps increasing in complexity, and software design needs to keep up. The right models and abstractions empower developers to leverage the novelties of modern hardware. This thesis deals primarily with Models of Computation as a basis for software design, in a family of methods called software synthesis. We focus on Kahn Process Networks and dataflow applications as abstractions, both for programming and for deriving an efficient execution on heterogeneous multicores. The latter we accomplish by exploring the design space of possible mappings of computation and data to hardware resources. Mapping algorithms are not at the center of this thesis, however. Instead, we examine the mathematical structure of the mapping space, leveraging its inherent symmetries and geometric properties to improve mapping methods in general. This thesis thoroughly explores the process of model-based design, aiming to go beyond the more established software synthesis on dataflow applications. We start with the problem of assessing these methods through benchmarking and go on to formally examine the general goals of benchmarks. In this context, we also consider the role modern machine learning methods play in benchmarking. We explore different established semantics, stretching the limits of Kahn Process Networks. We also discuss novel models, like Reactors, which are designed to be a deterministic, adaptive model with time as a first-class citizen. By investigating abstractions and transformations in the Ohua language for implicit dataflow programming, we also address programmability. The focus of the thesis is on the models and methods, but we evaluate them in diverse use cases, generally centered around Cyber-Physical Systems, including the 5G telecommunication standard and the automotive and signal processing domains. We even go beyond embedded systems and discuss use cases in GPU programming and microservice-based architectures.
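
    To illustrate the programming model at the heart of the thesis, here is a minimal Kahn Process Network sketch in Python: processes run concurrently and communicate only over FIFO channels with blocking reads. The three-stage pipeline is an invented toy, not the thesis's tooling.

        # Kahn-style pipeline: unbounded FIFO channels, blocking reads,
        # non-blocking writes. Illustrative toy only.
        import threading, queue

        def producer(out_ch, n):
            for i in range(n):
                out_ch.put(i)            # non-blocking write of a token

        def doubler(in_ch, out_ch, n):
            for _ in range(n):
                x = in_ch.get()          # blocking read: waits for a token
                out_ch.put(2 * x)

        def consumer(in_ch, n):
            for _ in range(n):
                print(in_ch.get())       # prints 0 2 4 6 8 for n = 5

        n = 5
        a, b = queue.Queue(), queue.Queue()      # unbounded FIFO channels
        procs = [threading.Thread(target=producer, args=(a, n)),
                 threading.Thread(target=doubler, args=(a, b, n)),
                 threading.Thread(target=consumer, args=(b, n))]
        for p in procs: p.start()
        for p in procs: p.join()

    Because each process blocks only on reads from its own input channels, the token sequence on every channel is independent of scheduling; this determinism is what makes KPN mappings to heterogeneous multicores freely re-arrangeable.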

    Operational Research: Methods and Applications

    Throughout its history, Operational Research has evolved to include a variety of methods, models, and algorithms that have been applied to a diverse and wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first aims to summarise the up-to-date knowledge and provide an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion and used as a point of reference or first port of call for a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the victims of the 2023 Turkey/Syria earthquake. We sincerely hope that advances in OR will play a role in minimising the pain and suffering caused by this and future catastrophes.